CIOs and IT leaders call it the most disruptive technology yet, and now it’s moving rapidly into the mainstream. Artificial intelligence (AI), an increasingly crucial piece of the technology landscape, has arrived. More than 91 percent of businesses surveyed have ongoing — and increasing — investments in artificial intelligence.

Deploying AI workloads at speed and scale, however, requires software and hardware working in tandem across data centers and edge locations. Foundational IT infrastructure, such as GPU- and CPU-based processors, must deliver major leaps in capacity and performance to run AI efficiently. Without higher performance levels, AI workloads could take months or even years to run. With them, organizations can accelerate AI advancements.

Dell Technologies’ recent developments in hardware and software solutions keep pace with AI software capabilities to do just that: advance AI. More specifically, next-gen offerings from Dell Technologies provide 8-10x performance improvements according to MLCommons® MLPerf™ benchmarks. The upgraded Dell Technologies solution portfolio includes a range of GPU-optimized servers for AI training and CPU-powered servers for enterprise-wide AI inferencing, both of which are essential, co-existing elements of AI deployment.

MLCommons MLPerf Results

For benchmarking, MLCommons’ updated MLPerf Inference v3.0 suite was used; the latest results are shown here. Benchmarks include categories such as image classification, object detection, natural language processing, speech recognition, recommender systems and medical image segmentation.

While the inference benchmark rules did not change significantly, Dell Technologies expanded its submission with the new generation of Dell PowerEdge servers, including new PowerEdge XE9680, XR7620, and XR5610 servers and new accelerators from its partners. Submissions included VMware running on NVIDIA AI Enterprise software with NVIDIA accelerators, as well as Intel-based CPU-only results.

The results for Dell Technologies’ next-gen processors are extraordinary for the highly demanding use cases of AI training, generative AI model training and tuning, and AI inferencing. Compared to previous generations of hardware, the results show a significant uptick in performance:

GPU-optimized servers produced an 8-10x improvement in performance.

CPU-powered servers generated a 6-8x improvement in performance.

More detailed results can be seen here.

AI in Action

AI data center and edge deployments demand a highly interdependent ecosystem of advanced software and hardware capabilities, including a mix of GPU- and CPU-based processors. Each industry and organization can tailor infrastructure based on unique needs, preferences and requirements.

Consider, for example, a pharmaceutical company using AI modeling and simulation for drug discovery. Modern drug development depends on chemists finding highly active molecules that also test negative for neurotoxicity. There are trillions of compounds to consider and evaluate. Each search takes almost two months and thousands of dollars, limiting the number of searches and tests that can be conducted. Using AI, simulations can examine many more molecules far faster and at lower cost, opening a new world of possibilities. To accelerate drug discovery (there are thousands of diseases and only hundreds of cures), pharmaceutical companies need powerful processors to handle large and diverse data sets efficiently and effectively.
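To make the screening idea concrete, here is a minimal Python sketch of that kind of in-silico filtering; the candidate data and score fields are invented stand-ins for what trained models would predict, not any vendor’s pipeline.

    # Toy illustration of in-silico screening: keep candidate molecules with
    # high predicted activity and no predicted neurotoxicity, then rank them.
    # The data and score fields are stand-ins for trained ML model outputs.

    candidates = [
        {"name": "cmpd-001", "activity": 0.91, "neurotoxic": False},
        {"name": "cmpd-002", "activity": 0.95, "neurotoxic": True},
        {"name": "cmpd-003", "activity": 0.72, "neurotoxic": False},
    ]

    hits = sorted(
        (c for c in candidates if not c["neurotoxic"] and c["activity"] > 0.8),
        key=lambda c: c["activity"],
        reverse=True,
    )
    print([c["name"] for c in hits])   # ['cmpd-001']

Because each evaluation is a cheap computation rather than a lab test, the same filter can run over millions of candidates instead of the handful that physical searches allow.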

Retailers typically use AI differently than pharmaceutical companies. Retail use cases often revolve around video imagery used to enhance security, bolster intrusion detection and support self-check-out capabilities. To boost newfound capabilities in these areas, retailers need more powerful GPU-optimized processors to handle image-based data streams.

Advancing AI

Generative AI use cases, such as digital assistants and co-pilots for software development, are emerging as the next frontier of AI. That’s why at Dell Technologies, innovation never rests.

When it comes to technology infrastructure, Dell Technologies and its partners are constantly innovating to reach new performance levels and help redefine what is possible. The exponential performance increase in NVIDIA GPU-optimized servers and the infusion of AI inferencing in Intel® Xeon®-based servers are creating the required AI foundation. With these results, Dell Technologies can help organizations fuel AI transformations precisely and efficiently, spanning new AI training and inferencing software, generative AI models, AI DevOps tools and AI applications.

***

Dell Technologies. To help organizations move forward, Dell Technologies is powering the AI journey, including enterprise generative AI. With best-in-class IT infrastructure and solutions to run AI workloads, plus advisory and support services to help roadmap AI initiatives, Dell is enabling organizations to boost their digital transformation and accelerate intelligent outcomes.

Intel. The compute required for AI models has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today. Intel’s commitment to the democratization of AI and sustainability will enable broader access to the benefits of AI technology, including generative AI, via an open ecosystem. Intel’s AI hardware accelerators, including new built-in accelerators, provide performance and performance per watt gains to address the escalating performance, price and sustainability needs of AI.

Artificial Intelligence

Eager to meet international standards, satisfy investors, and profit from a growing array of sustainable products, financial services firms are intensifying their focus on environmental, social, and governance (ESG) goals. While the incentives for ESG are compelling, managing programs and demonstrating success are fraught with challenges. But by adhering to the right standards and using technology to better organize programs, financial firms can gain a clearer vision of their ESG operations and speed up progress toward their goals.

Doing well by doing good 

Many financial institutions are striving to align ESG programs with the United Nations’ 2030 Agenda for Sustainable Development, which lists 17 goals designed to end global poverty and promote an equitable transition to a sustainable world.  

“Companies across the globe are adopting the 2030 Agenda and UN SDGs Framework to ensure sustainable investments and operations,” says Kishan Changlani, Partner for strategic initiatives – sustainable banking, at Tata Consultancy Services (TCS).  

Financial services firms can use the 2030 Agenda and UN SDGs Framework as a guide for allocating ESG funds, such as creating a “green economy” team dedicated to helping companies that produce environmentally friendly goods and services. Leadership teams are also learning that ESG initiatives can boost business performance. One 2022 study found that organizations placing greater emphasis on ESG over the previous three years saw revenues increase by almost 10%, compared to 4.5% revenue growth for businesses showing a lower commitment to ESG.

Overcoming data challenges 

Despite their growing commitment to ESG, financial firms have learned the path to sustainability and prosperity can be rocky. 

“ESG data quality is the biggest challenge. Quality at the least is about consistent data across asset classes, effective data for scenario planning, and harmonized ESG ratings amongst other aspects,” Changlani says. However, there are many other challenges as well, including regulatory requirements, human capital, stakeholder engagement, alignment of materiality and performance, and the need to embed ESG into an existing ERM (Enterprise Risk Management) framework. 

“The ESG regulatory landscape resembles an alphabet soup where the number of ESG standard-setters, data aggregators, analysis providers, ESG raters, and indices is increasing,” says Changlani.  

Financial services companies may also find it challenging to keep up with a broad scope of reporting requirements, resulting in a complex set of documents and deliverables that can lead to questions about a program’s validity or perceptions of greenwashing.

Technology can help banks and other financial institutions overcome these hurdles. For example, TCS has developed a suite of solutions on Microsoft Cloud to unify and integrate ESG metrics and accurately measure performance. Changlani also recommends that companies limit data vendors to two or three and establish their own ESG benchmarks, instead of relying solely on external providers.  

Emerging technologies will further speed ESG progress. AI and machine learning algorithms can monitor compliance in real time. With natural language processing, organizations can analyze millions of reports quickly, helping them avoid pitfalls associated with greenwashing and other discredited activities. 
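As a vastly simplified illustration of that kind of screening, the Python sketch below flags passages that make sustainability claims without nearby evidence language; a production system would use trained language models rather than keyword lists, and all terms here are invented for the example.

    # Vastly simplified NLP-style screening: flag passages whose
    # sustainability claims lack supporting evidence terms. Real systems
    # would use trained language models, not keyword lists.

    CLAIM_TERMS = {"carbon neutral", "net zero", "eco-friendly"}
    EVIDENCE_TERMS = {"audited", "verified", "certified", "scope 1", "scope 2"}

    def flag_possible_greenwashing(passage: str) -> bool:
        text = passage.lower()
        makes_claim = any(term in text for term in CLAIM_TERMS)
        has_evidence = any(term in text for term in EVIDENCE_TERMS)
        return makes_claim and not has_evidence

    print(flag_possible_greenwashing("Our fund is carbon neutral."))           # True
    print(flag_possible_greenwashing("Net zero progress verified by audit."))  # False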

Blockchain technology can track assets across the supply chain, promoting transparency and credibility.  

“Technology is the key to helping the financial services industry move toward the greater good,” says Changlani. “It is what will make achieving the UN 2030 agenda possible.”  

Learn more about how TCS and Microsoft are powering the sustainable enterprise.  

Financial Services Industry, Green IT

Conventional wisdom says businesses must balance the cost of security with user experience—implying that security is a tax on digital interactions. Conventional wisdom appears to be outdated.

According to Foundry, the need for improvements in cybersecurity was cited as the No. 1 reason for the increase in tech budgets this year. Further, CEOs’ top priorities for IT in 2023 are:

Strengthening IT and business collaboration
Upgrading IT and data security to reduce corporate risk
Improving the customer experience

IT leaders do not have to compromise. Advanced security policies, increased efficiencies, and improvements to the reliability and performance of your applications for a better user experience can be achieved together. It doesn’t need to be a tax or a tradeoff.

The security paradox

To understand the “tradeoff” mentality, let’s review the “security paradox.” Cyberattacks are increasing exponentially every year. According to NETSCOUT, one DDoS attack occurs every three seconds, and MITRE reported more than 25,000 new common vulnerabilities and exposures (CVEs) in 2022, a 24% increase over 2021. For most organizations, it’s not if a cyberattack is going to occur, but when.

As the latest attacks and statistics make headlines, leaders often tend to overcompensate by implementing chains of security solutions, often layering on top of each other in a disjointed fashion to protect against new exploits and prevent service interruption in case of an attack.

A chain is no stronger than its weakest link. These disjointed solutions can add latency and performance bottlenecks between security layers and create single points of failure, which impact the speed and availability of businesses online. Therein lies the security paradox: an organization could inadvertently harm itself while attempting to secure its network and applications.

The cost of a data breach

Beyond the implicit cost of security, what is the actual cost of a data breach when one strikes an organization? IBM’s annual Cost of a Data Breach Report revealed that the average data breach cost in 2022 was USD 4.35 million, an all-time high. Gartner has estimated the cost of downtime from DDoS attacks to be $300,000 per hour.

What these numbers don’t include is the potential damage to a brand’s reputation and to its customers. CIO Insight reported 31% of consumers stopped doing business with a company due to a security breach; a significant number of these said they had lost trust in the brand. And certainly, poor performance leads to higher bounce rates and lower conversion rates.

With layers of piecemeal security solutions increasing operational complexity and reducing application performance, and with cyberattacks growing more frequent, it’s no wonder customer experience suffers, along with customers’ ability to interact quickly and safely with businesses online.

The good news is that a holistic approach to security can detect and mitigate attacks quickly, before they hit the bottom line. With the right unified security solutions, performance and customer experience can improve, too.

Debunking conventional wisdom

As already stated, businesses can indeed increase security while improving performance, operational efficiency, and customer experience. But how can this be achieved without tradeoffs? 

By adopting holistic edge-enabled security solutions built on an extensive, globally distributed platform, businesses can address the latest cybersecurity threats and achieve comprehensive protection across networks and applications without a single point of failure or performance bottleneck. The benefits of an edge-enabled holistic security solution are:

Massive scale and resiliency to ensure uptime
Intelligent rules execution for faster threat detection
Integration with edge logic and CI/CD workflows to improve operations
Attacks mitigated at the source to improve performance and user experience

Security solutions that provide easy integration and automation can enhance IT workflows and enable quick deployments of security updates to keep up with the evolving threat landscape. Platforms like Edgio’s provide developers with a single pane of glass with visibility and control to manage their application performance and security.

So yes, businesses can, in fact, debunk conventional wisdom when it comes to performance and security, but having the right security solution matters. 

The right security solution can ultimately reduce costs, increase operational efficiency, and improve customer experience, all while protecting your data, your brand, and your bottom line. It’s a win-win for your entire organization.

Turbo-charge web application and API performance with Edgio Security.

Security

Hypercompetition, globalization, economic uncertainties: all of it is converging to drive a C-suite impetus for the business to become more data-driven. Organizations invest in more data science and analytical staff as they demand faster access to more data. At the same time, they’re forced to deal with more regulations and privacy mandates such as GDPR, CCPA, HIPAA, and numerous others. The outcome? The current methods meant to serve them — usually an overburdened IT team — end up failing, resulting in an alarming amount of friction across the entire organization.

The heart of the friction

Friction across the enterprise ecosystem impacts every part of the value chain. It’s driven by three primary dynamics:

An increasing number of analysts and data scientists asking for data.
More regulations and policies that must be enforced.
A tectonic shift of data processing and storage to the cloud.

Analytical demand

Over the last two to three decades, analytics has moved from the domain of IT to business self-service. For traditional financial and summary-type reports, this is easy, since data comes from curated and structured data warehouses. The newer self-service demand is for non-curated data for purposes of AI and machine learning.

Regulatory demand

More regulations result in more policies, but the bigger impact is going from passive enforcement to active enforcement. Passive enforcement relies on training people and hoping they’ll follow proper protocol. Active enforcement establishes a posture where systems proactively stop people from hurting themselves or the company. For example, a zero trust framework would assume you should only have access to the data you need and nothing more.
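To make the contrast concrete, here is a minimal Python sketch of active enforcement under a least-privilege, zero trust assumption; the roles, datasets, and function names are hypothetical, not any particular product’s API.

    # Minimal sketch of active enforcement: access is denied unless a
    # policy explicitly grants it (least privilege). Names are hypothetical.

    GRANTS = {
        # role -> datasets that role may read
        "fraud_analyst": {"transactions", "chargebacks"},
        "marketing_analyst": {"campaign_results"},
    }

    def can_read(role: str, dataset: str) -> bool:
        # Deny by default; only an explicit grant allows access.
        return dataset in GRANTS.get(role, set())

    # Passive enforcement trusts people to follow protocol; active
    # enforcement checks every request before any data is returned.
    assert can_read("fraud_analyst", "transactions")
    assert not can_read("marketing_analyst", "transactions")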

Moving to the cloud

The move to the cloud doesn’t just put data outside our traditional perimeter defenses. Cloud platforms also separate storage from compute and offer different styles of compute to serve different analytical use cases. The result is an exploding number of policies applied across dozens of data technologies, each with its own mechanism for securing data.

A use case for balanced data democratization

Privacera worked with a major sports apparel manufacturer and retailer on its data-driven journey to the cloud. The client’s on-prem data warehouse and Hadoop environment turned into a massive set of diverse technologies on Amazon Web Services (AWS): S3 for storage and a host of compute and processing services like EMR, Starburst, Snowflake, Kafka, and Databricks. GDPR and CCPA emerged as critical mandates that had to be enforced actively. Hundreds of analysts excitedly tried to get access to the new data platform, outnumbering the IT support staff. The result was more than 1 million policies, and they only managed to get around 15 percent of their data into the business’s hands.

The solution: Centralized policy management and enforcement for their entire data estate. Here are the elements of their centralized data security governance:

Real-time sensitive data discovery, classification, and tagging to identify sensitive data in newly onboarded data sets from trading partners.
Build once, enforce everywhere: policies are built centrally in an easy-to-use, intuitive manner, then synchronized to each underlying data service, where they are natively enforced.
Built-in advanced attribute-, role-, resource-, or tag-based policies, plus masking and encryption, to define fine-grained controls versus the previous coarse-grained model.
Real-time auditing of access events, monitoring, and alerting on suspicious events.
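As an illustration of the build-once, enforce-everywhere element, the Python sketch below defines one tag-based masking policy centrally and applies it to records wherever they are served; this is a simplified model for explanation, not Privacera’s actual policy format or API.

    # Simplified model of a centrally defined, tag-based masking policy.
    # The policy structure and field names are illustrative only.

    POLICY = {"tag": "PII", "action": "mask", "mask_char": "*"}

    COLUMN_TAGS = {"email": "PII", "order_total": None}  # from discovery/tagging

    def enforce(record: dict) -> dict:
        # Each underlying data service applies the same central policy locally.
        masked = {}
        for column, value in record.items():
            if COLUMN_TAGS.get(column) == POLICY["tag"] and POLICY["action"] == "mask":
                masked[column] = POLICY["mask_char"] * len(str(value))
            else:
                masked[column] = value
        return masked

    print(enforce({"email": "a@b.com", "order_total": 42.5}))
    # {'email': '*******', 'order_total': 42.5}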

The result: The client reduced the number of policies by 1,000-fold, onboarded new data 95 percent faster, and got 100 percent of the data into the business’ hands. 

The new way forward

Gartner’s State of Data and Analytics Governance suggests that by 2025, 80 percent of analytical initiatives will be unsuccessful because they fail to modernize their data governance processes. The challenge for CIOs and data and privacy leaders is that these mandates are often not owned by a single person. CISOs often feel they own the security posture but not the enforcement. The data leader focuses on the analytical output and insights. The CIO is often left holding the bag and needs to pull it all together. In its recent Hype Cycle for Data Security 2022, Gartner suggests 70 percent of the investment in the data security category will go toward broad-based data security platforms that can help organizations centralize data access and policy enforcement across their diverse data estate.

Learn more about balancing performance and compliance with powerful data democratization. Get your free copy of the Gartner Hype Cycle for Data Security 2022.

Data and Information Security

Here’s a proposition to consider: among the ranks of large enterprises, commercial success increasingly relies on digital transformation. In turn, digital transformation relies on modernized enterprise networks that deliver flexibility, performance and availability from the edge to the cloud. Intuitively, this hypothesis makes a lot of sense.

In many enterprises, it’s also increasingly becoming the subject of painstaking debate. After two years of quick-fix digitalization on top of pre-COVID-era network technologies, the limits of the status quo are becoming evident. All too often, legacy networks limit the potential for digital transformation. In many organizations, it’s way past time to address the fundamentals.

If this debate sounds familiar to you, it’s worth looking at the 2022-23 Global Network Report from NTT, a new piece of research that offers an intriguing view of how enterprises around the world are managing their networks.

Among other things, NTT’s survey suggests a strong correlation between a willingness to invest in modernizing networks and high levels of commercial performance. At the other end of the spectrum, NTT’s survey confirms many enterprise networks suffer from long-term underinvestment and increasing levels of technical debt. The distance between these two different approaches feels substantial.

NTT’s report – based on responses from over 1,300 network specialists and IT and business decision-makers worldwide – defines high levels of commercial performance using straightforward criteria. To qualify as a “top-performer”, organizations in the survey needed to have generated year-on-year revenue growth of over 10%. They also needed to have generated operating margins of over 15% in the last financial year.

In network terms, what do these organizations look like? It’s here that a willingness to invest in modern network technologies starts to look like an indispensable ingredient for high performance in commercial terms.

Nine out of 10 top-performing organizations are increasing network investment to support digital transformation. Many are spending over 2% of their annual revenues – a significant sum – on their networks, deploying technologies designed to enable rapid transformation, provide greater availability and flexibility, and support not just today’s requirements, but tomorrow’s requirements as well.

Eight out of 10 high-performing organizations say their network strategy is aligned with their business goals. In practice, this involves a clear understanding that the quality of the network directly affects their ability to address the most pressing business and digital transformation challenges. (By contrast, only 42% of underperformers share this sentiment.)

The underperformers in NTT’s survey are a mirror image of these overachieving organizations. Most CIOs and CTOs at these companies agree that networks play a vital role in delivering revenue growth. They also recognize business demands for increased speed, agility and innovation can only be satisfied by new operating models. And yet these organizations typically suffer from delayed upgrades, high levels of technical debt and poor visibility across the network.

The older the network is, the greater the chance of negative impacts on service delivery, customer satisfaction and the employee experience. Some 69% of the CIOs and CTOs surveyed by NTT say technical debt continues to accumulate. Asked to identify the risks generated by underinvestment, respondents most frequently pointed to classic effects of technical debt: inflated IT operational costs and limited availability of new services required for digital transformation.

For these enterprises, networks threaten to become a cross between a millstone and a minefield (slowing down progress and continually threatening to blow up in the face of network professionals).

In this hybrid and hyperconnected world where organizations need to deliver great employee and customer experiences, the network provides the fabric of the digital organization. NTT’s intelligent and secure Network as a Service enables a complete edge-to-cloud strategy, delivering a wide array of benefits: increased agility, reduced risk, greater flexibility, scalability, automation, predictability and control.

Given today’s high-performance hybrid environment, Matthew Allen, Vice President, Service Offer Management – Networking at NTT, suspects that the status quo is time-limited for underperforming enterprises.

“You can start to transform your business on the networks you have. However, as this business transformation drives a distribution of applications and business functions across many, diverse locations (SaaS, PaaS, IaaS, private cloud, etc.), a legacy network solution will not be able to keep pace with this change – it will become increasingly difficult for distributed applications and workloads to communicate effectively and securely, at the speed the business requires.”

NTT’s survey suggests organizations that delay network modernization run the risk of ending up in an unsustainable position – technical debt will continue to accumulate, downtime will occur as networks fail, and the increased operational complexity of stitching together and maintaining networks to support distributed workloads will eventually cause something to slip. Certainly, the commercial implications look unpleasant.

On this basis alone, it’s worth looking at NTT’s survey. It’s also worth asking yourself about your organization’s network strategy. Does it look like the strategy of a top-performing organization or an underperforming one? NTT’s analysis suggests that the difference between the two is more important than we might imagine. To learn more, read the 2022–23 Global Network Report from NTT – you can view the key findings infographic or download the complete report with access to the full data set.

Networking

In their rush to the cloud, companies can easily end up with significant waste by taking a “best efforts” approach to aligning cloud instance types and sizes to workloads.

Businesses, particularly those that are relatively new to the cloud, often overprovision resources to ensure performance or avoid running out of capacity. The result is that their workloads may consume a fraction of the resources being paid for. Even organizations experienced with cloud infrastructure can waste 20% to 30% of their cloud spending on capacity that simply isn’t needed.

Compounding the challenge is the fact that the major cloud service providers (CSPs) offer as many as 600 different service options based on factors such as processor type, memory configuration, storage, networking, hypervisor, and other variables. Understanding all these options is impractical – if not impossible – for humans, let alone determining the best fit for a given workload, especially at scale. What’s more, the cloud options and the workloads being hosted change all the time.

Complexity is amplified by the fact that 90% of enterprises use multiple clouds, according to IDC.[1] Relying on people to manually select the right cloud instances is a risky proposition, as even small mistakes can add up to big unanticipated costs. Analytics that take the guesswork out by determining the best selections, and ultimately by automating instance configuration, are key. IDC research shows that capacity optimization has emerged as a top priority (alongside cost management) within cloud-based organizations.
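To see why analytics beat manual selection, consider this toy Python scorer that filters an instance catalog against a workload’s observed needs and picks the cheapest fit; the catalog entries and names are invented, and real optimizers weigh many more dimensions.

    # Toy instance selector: choose the cheapest instance that satisfies the
    # workload's observed resource needs plus headroom. Catalog is invented.

    CATALOG = [
        {"name": "gp.large",   "vcpu": 2, "mem_gb": 8,  "usd_hr": 0.096},
        {"name": "gp.xlarge",  "vcpu": 4, "mem_gb": 16, "usd_hr": 0.192},
        {"name": "mem.xlarge", "vcpu": 4, "mem_gb": 32, "usd_hr": 0.266},
    ]

    def best_fit(peak_vcpu: float, peak_mem_gb: float, headroom: float = 1.2):
        # Apply headroom to observed peaks, then take the cheapest instance
        # that still fits; returns None if nothing in the catalog fits.
        need_cpu, need_mem = peak_vcpu * headroom, peak_mem_gb * headroom
        fits = [i for i in CATALOG if i["vcpu"] >= need_cpu and i["mem_gb"] >= need_mem]
        return min(fits, key=lambda i: i["usd_hr"]) if fits else None

    print(best_fit(peak_vcpu=1.4, peak_mem_gb=5.0)["name"])   # gp.large

Scale this over hundreds of workloads and as many as 600 instance options per provider, and the case for automation over manual guesswork becomes obvious.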

Although the major CSPs all offer free onboarding and optimization functionality and services, they are typically quite basic with respect to analytics and focus on purchase plans and billing optimizations rather than configuration management. The free services also lack granular controls and detailed policies, and don’t explain how particular recommendations are reached.

Reducing costs is also more than just a matter of choosing instance types. By leveraging features within the hardware, customers can achieve higher performance and reduce the sizes of their instances, or reduce the number of instances required, or avoid paying for them entirely. For example, container images that are optimized to leverage specific processor features can be used to significantly improve throughput in containerized environments, without the need for additional CPU power.

Intel® Cloud Optimizer (ICO) by Densify illustrates how automation can be applied to cloud instance choice and configuration to achieve savings at all levels. It is a powerful matching engine that chooses which provider instances are the best choices for the customer’s workloads as well as optimal hardware and software configurations for each instance.

Configurable policies mean that ICO can be tuned to the characteristics of each unique workload; for example, a company may want to optimize for cost in a development environment but for performance in production. The software enables this fine level of management based on utilization-level targets specified by the customer.
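A per-environment policy might look like the following sketch, where development tolerates high utilization to save cost while production keeps more headroom; the structure and numbers are illustrative, not ICO’s actual configuration schema.

    # Illustrative per-environment optimization policies (not ICO's schema).

    POLICIES = {
        "development": {"optimize_for": "cost",        "target_cpu_util": 0.80},
        "production":  {"optimize_for": "performance", "target_cpu_util": 0.45},
    }

    def required_vcpu(observed_busy_vcpu: float, env: str) -> float:
        # Size so observed busy vCPUs land at the environment's target
        # utilization: a lower target leaves more performance headroom.
        return observed_busy_vcpu / POLICIES[env]["target_cpu_util"]

    print(round(required_vcpu(3.2, "development"), 1))  # 4.0 vCPUs
    print(round(required_vcpu(3.2, "production"), 1))   # 7.1 vCPUs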

Optimization is even more important for organizations that promote distributed decision-making, enabling staff like developers to make their own choices about which cloud instance types to use. The emerging practice of FinOps, which promotes shared responsibility for cloud computing infrastructure and costs, brings discipline to this approach, while cloud optimization tools make detailed tracking and accountability possible. This lets staff make choices quickly and deploy functionality for the business, while the organization can have confidence that analytics will show them where optimization can happen after the fact.

IDC research[2] found that 59% of IT automation projects pay off in less than 12 months. Given that the research firm also found that CEOs were more concerned with controlling IT costs than any other C-level executive, applying automation to cloud resource management just makes sense.

To learn more, listen in as Intel’s Jon Slusser, IDC’s Jevin Jensen, and Andrew Hillier from Densify explore the challenges of optimizing for price and performance in the cloud.

Notices and Disclaimers
Intel technologies may require enabled hardware, software, or service activation. No product or component can be absolutely secure. Your costs and results may vary. © Intel Corporation. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries. Other names and brands may be claimed as the property of others.

[1] Source: Future Enterprise Resiliency & Spending Survey – Wave 12, IDC, January 2022

[2] Source: Future Enterprise Resiliency & Spending Survey – Wave 12, IDC, January 2022

Cloud Management

The benefits of analyzing vast amounts of data, long-term or in real time, have captured the attention of businesses of all sizes. Big data analytics has moved beyond the rarefied domain of government and university research environments equipped with supercomputers to include businesses of all kinds that are using modern high performance computing (HPC) solutions to get their analytics jobs done. It’s big data meets HPC, otherwise known as high performance data analytics.

Bigger, Faster, More Compute-intensive Data Analytics

Big data analytics has relied on HPC infrastructure for many years to handle data mining processes. Today, parallel processing solutions handle massive amounts of data and run powerful analytics software that uses artificial intelligence (AI) and machine learning (ML) for highly demanding jobs.

A report by Intersect360 Research found that “Traditionally, most HPC applications have been deterministic; given a set of inputs, the computer program performs calculations to determine an answer. Machine learning represents another type of applications that is experiential; the application makes predictions about new or current data based on patterns seen in the past.”

This shift to AI, ML, large data sets, and more compute-intensive analytical calculations has contributed to the growth of the global high performance data analytics market, which was valued at $48.28 billion in 2020 and is projected to grow to $187.57 billion in 2026, according to research by Mordor Intelligence. “Analytics and AI require immensely powerful processes across compute, networking and storage,” the report explained. “As a result, more companies are increasingly using HPC solutions for AI-enabled innovation and productivity.”

Benefits and ROI

Millions of businesses need to deploy advanced analytics at the speed of events. A subset of these organizations will require high performance data analytics solutions. Those HPC solutions and architectures will benefit from the integration of diverse datasets from on-premises to edge to cloud. The use of new sources of data from the Internet of Things to empower customer interactions and other departments will provide a further competitive advantage to many businesses. Simplified analytics platforms that are user-friendly resources open to every employee, customer, and partner will change the responsibilities and roles of countless professions.

How does a business calculate the return on investment (ROI) of high performance data analytics? It varies with different use cases.

For analytics used to help increase operational efficiency, key performance indicators (KPIs) contributing to ROI may include downtime, cost savings, time-to-market, and production volume. For sales and marketing, KPIs may include sales volume, average deal size, revenue by campaign, and churn rate. For analytics used to detect fraud, KPIs may include number of fraud attempts, chargebacks, and order approval rates. In a healthcare environment, analytics used to improve patient outcomes might include key performance indicators that track cost of care, emergency room wait times, hospital readmissions, and billing errors.
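As a simple worked example, ROI over a period can be framed as (benefit - cost) / cost, with the benefit summed from whichever KPIs apply to the use case; every figure below is hypothetical.

    # Hypothetical ROI calculation for an analytics deployment.

    annual_cost = 500_000                 # platform, staff, support (assumed)
    kpi_gains = {
        "reduced_downtime":      220_000, # fewer outage hours x cost per hour
        "fraud_prevented":       310_000, # blocked fraudulent orders
        "faster_time_to_market": 150_000,
    }

    annual_benefit = sum(kpi_gains.values())
    roi = (annual_benefit - annual_cost) / annual_cost
    print(f"ROI: {roi:.0%}")              # ROI: 36%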

Customer Success Stories

Combining data analytics with HPC:

A technology firm applies AI, machine learning, and data analytics to client drug diversion data from acute, specialty, and long-term care facilities and delivers insights within five minutes of receiving new data, while maintaining an HPC environment with 99.99% uptime to comply with service level agreements (SLAs).
A research university was able to tap into 2 petabytes of data across two HPC clusters with 13,080 cores to create a mathematical model to predict behavior during the COVID-19 pandemic.
A technology services provider is able to inspect 124 moving railcars ― a 120% reduction in inspection time ― and transmit results in eight minutes, based on processing and analyzing 1.31 terabytes of data per day.
A race car designer is able to process and analyze 100,000 data points per second per car ― one billion in a two-hour race ― used by digital twins running hundreds of different race scenarios to inform design modifications and racing strategy.
Scientists at a university research center are able to utilize hundreds of terabytes of data, processed at I/O speeds of 200 Gbps, to conduct cosmological research into the origins of the universe.

Data Scientists are Part of the Equation

High performance data analytics is gaining stature as more and more data is being collected.  Beyond the data and HPC systems, it takes expertise to recognize and champion the value of this data. According to Datamation, “The rise of chief data officers and chief analytics officers is the clearest indication that analytics has moved from the backroom to the boardroom, and more and more often it’s data experts that are setting strategy.” 

No wonder skilled data analysts continue to be among the most in-demand professionals in the world. The U.S. Bureau of Labor Statistics predicts that the field will be among the fastest-growing occupations for the next decade, with 11.5 million new jobs by 2026. 

For more information read “Unleash data-driven insights and opportunities with analytics: How organizations are unlocking the value of their data capital from edge to core to cloud” from Dell Technologies. 

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

Data Management

As the broader economy and business environment continue to recover and rebound, there is an opportunity for IT leaders to leverage increased budgets to strategically invest in and prepare for new approaches to enterprise technology.

As noted by Spiceworks, the majority of businesses are increasing IT spending, with a particular focus on modernisation. “The hybrid work era, which coincided with the standardisation of the cloud as the backbone for enterprise operations, has put in motion renewed demand for software and services to modernise organisations’ technology infrastructure,” the report notes.

In other words, leaders are taking the opportunity of increased IT spending to prepare their environments for new ways of working that better integrate cloud and on-premises computing. With hardware, this means a renewed focus on three areas: efficiency, performance, and security. To help businesses capitalise on that opportunity, Intel has designed its vPro platform to deliver meaningful gains to enterprises across all three priorities.

Performance

Hybrid work environments require higher levels of performance, as remote employees rely more on video collaboration and the organisation looks to more intensive applications like AI and edge deployments.

Because the vPro platform is powered by the 12th generation of Intel Core processors, it delivers the significantly improved performance needed to make these environments seamless. Intel’s stats show that the mobile processors deliver up to 27 per cent faster application performance and the desktop processors achieve 21 per cent faster application performance; in practice, that translates to, for example, 23 per cent faster application performance when using Microsoft Excel during a Zoom video conference call.

Efficiency and Sustainability

Last year, Deloitte research found that more than half of consumers now expect companies to take meaningful steps towards reducing carbon emissions and improving sustainability. With IT contributing a considerable amount to the power draw of the typical company, finding efficiencies through IT refreshes becomes an effective way for the organisation to show proactive steps towards sustainability.

One of the key features of the vPro platform is Intel Active Management Technology (Intel AMT), which allows organisations to remotely manage the energy consumption of devices by shutting inactive devices down, reducing the need for remote callouts or deskside visits. Intel estimates that this saves around one million kilowatt hours of energy for every 20,000 devices managed this way and, by eliminating the need for separate energy management software, saves organisations $25 to $75 per device.
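Taking Intel’s estimates at face value, the per-device impact is easy to tabulate; this short sketch simply restates the figures quoted above as arithmetic.

    # Back-of-envelope restatement of the figures quoted above.

    devices = 20_000
    kwh_saved_total = 1_000_000                        # per Intel's estimate
    kwh_per_device = kwh_saved_total / devices         # 50 kWh per device

    sw_saving_low, sw_saving_high = 25, 75             # USD per device

    print(f"{kwh_per_device:.0f} kWh saved per device")
    print(f"${sw_saving_low * devices:,} to ${sw_saving_high * devices:,} "
          f"saved on energy-management software across the fleet")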

Security

Adopting a hybrid and flexible approach to work changes the security dynamic. No longer is the traditional “perimeter” defence going to be sufficient, as organisations house data in clouds off-site and employees work remotely, outside the boundaries of the organisation. Addressing this security challenge requires a renewed focus on endpoint security.

Intel provides new security features through vPro which are designed to renew that endpoint security layer. Intel Threat Detection Technology (Intel TDT) provides ransomware detection at the hardware level, allowing for an immediate response before the infection can spread.

The vPro platform also has features to detect living-off-the-land and supply chain-style attacks with a “zero trust” approach, in which AI automatically detects anomalies when applications are behaving unusually.

The vPro Suite Explained

There are four key branches to the Intel vPro solution, designed to bring the functionality of the suite across all platforms, and for all verticals:

Intel vPro Enterprise for Windows has been designed for enterprises and managed service providers that are looking after large fleets of devices.
Intel vPro Essentials is the solution that Intel uses to provide SMEs with the core security and device management capabilities available to enterprises.
Intel vPro Enterprise for Chrome OS has been designed to bring the efficiency of Chromebooks to enterprise environments by boosting their performance, stability, and security features.
Intel vPro, An Evo Design combines two of Intel’s flagship products to deliver the company’s vision around security, performance, and management to mobile business environments.

Taken together, these offerings position vPro as an ideal opportunity for a refresh of the IT environment, delivering the performance required by modern mobile, edge and hybrid-driven organisations, while also being mindful of the changing dynamics around efficiency and security.

For more information on vPro, click here.

Business Process Management, CPUs and Processors


CIOs of large enterprises have pain points that are complex, underscoring the need for suppliers to listen intently and understand their predicaments. The challenges of managing data, the lifeblood of any enterprise, are continuously evolving and require attention because ignoring them only makes the “pain points” worse.

CIOs and their teams look to the tech industry to solve their problems, develop new, cost-effective technology solutions, and make implementation of new solutions smooth and easy, with built-in flexibility. This article explores three examples of how listening to the concerns and changing requirements of CIOs has resulted in viable technology solutions that are now widely in demand:

The need to improve cybersecurity by increasing cyber resilience
The need for the lowest latency, while delivering the highest real-world application performance
The need to incorporate AI operations (AIOps) and development operations (DevOps) as part of a modern IT strategy

As the chief marketing officer of Infinidat, I continually hear customer input and feedback, which feed into a strong cycle of continuous improvement. Product strategy must align with not only today’s needs but the anticipated, evolving needs of the future. A new product must help address or eliminate one or more pain points. Otherwise, what is its value?  This is the story of Infinidat’s comprehensive enterprise product platforms of data storage and cyber-resilient solutions, including the recently launched InfiniBox™ SSA II as well as InfiniGuard®, taking on and knocking down three pain points that are meaningful for a broad swath of enterprises.     

The need to improve cybersecurity by increasing cyber resilience

Cyber resilience is among the most important and highly demanded requirements of enterprises today to ensure exceptional cybersecurity and combat cyberattacks across the entire storage estate and data infrastructure. In comprehensive surveys by Fortune and KPMG in the last 12 months, cybersecurity has been cited as the No. 1 concern of CEOs. The continuous attempts at comprehensive theft and hostage-taking of valuable corporate data can be overwhelming. 

This naturally puts immense pressure on CIOs and CISOs to deal with the rapidly expanding threat landscape – and it’s much more than securing network connections. It now extends to the people at their desks or at the edges of the company network, creating weak points. Industry data indicates the average dwell time for an enterprise-level cyberattack can be up to 287 days. The C-suite is rightly concerned about this shroud of secrecy and how eerily “patient” cyber criminals are, taking systematic approaches and looking for the tiniest of cracks to exploit.

Cyber resilience must be part of an enterprise’s overall corporate cybersecurity strategy. One example of cyber resilience is the ability to recover known good copies of the enterprise’s data. When you’re able to do it – and do it quickly – then the leverage that the cyber attackers thought they had is dramatically reduced, if not completely eliminated. To have end-to-end resilience, an enterprise needs to build it into primary storage for the most critical apps and workloads, as well as secondary storage to protect backup copies of data. 

Infinidat added cyber resilience to its InfiniGuard® secondary storage system during the past year and, at the end of April 2022, across its primary storage platforms with the InfiniSafe Reference Architecture, encompassing Infinidat’s complete portfolio. InfiniSafe combines immutable snapshots of data, logical air gapping, a fenced forensic environment, and virtually instantaneous data recovery, and is now extended into the InfiniBox SSA II, as well as the entire InfiniBox family.
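Conceptually, recovering a known good copy follows a verify-then-restore flow like the Python sketch below; it is a schematic illustration of the ideas just named (immutable snapshots, a fenced validation environment, fast restore), not Infinidat’s implementation or API.

    # Schematic recovery flow: restore the newest snapshot that passes
    # validation in a fenced (isolated) environment. All names and data
    # are illustrative, not Infinidat's API.

    snapshots = [
        {"id": "snap-0301", "taken": "2022-03-01", "immutable": True},
        {"id": "snap-0302", "taken": "2022-03-02", "immutable": True},
    ]

    def validates_in_fenced_env(snapshot: dict) -> bool:
        # In practice: mount the snapshot in an isolated segment and run
        # integrity and malware checks. Here we only assert immutability.
        return snapshot["immutable"]

    def recover_latest_good() -> str:
        # Walk snapshots newest-first; restore the first that validates.
        for snap in sorted(snapshots, key=lambda s: s["taken"], reverse=True):
            if validates_in_fenced_env(snap):
                return f"restored {snap['id']}"
        raise RuntimeError("no known good copy found")

    print(recover_latest_good())   # restored snap-0302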

“With the InfiniSafe Cyber Resiliency Technology extending into the InfiniBox portfolio, we’re able to provide our customers the peace of mind they need in a time filled with cyberattacks and data breaches,” said Trent Widtfeldt, Chief of Engineering, Technologent, a female-owned global IT solutions provider. “Technologent is known for partnering with the best technology vendors to ensure we bring the most efficient solutions to our customers, and Infinidat has always been a key partner in this area.”

The need for the lowest latency while delivering the highest application performance

CIOs and storage administrators have asked whether a performance void in enterprise data infrastructure could be filled – a void that no storage vendor had been able to fill to their satisfaction. It is the ability to provide consistent, ultra-low latency, super-fast response times for virtually every I/O that they process, not just great latency for an overall average of their I/Os. If this could be delivered, they said, it would provide them with valuable competitive differentiation for their real-world applications and workloads.

Although CIOs already know, for the most part, that most storage vendors can meet or exceed their requirements for bandwidth and IOPS, what they are really pointing to is the “new” storage performance battleground: latency. They articulate to anyone who will listen, and who is in a position to make it happen, that they want consistent, ultra-low latency.

To address this customer demand, Infinidat developed the InfiniBox SSA II, delivering unprecedentedly low latency. Enterprises have seen real-world workloads achieve storage response times as fast as 35 microseconds. This is not an artificial “hero” number that no real application has ever seen, but observed performance from real, live customer applications. This enhancement allows customers not only to achieve optimal application and workload performance, but also to consolidate storage substantially, dramatically transforming storage performance, increasing efficiency, and reducing total cost.

“Infinidat is squarely targeting this market segment with its InfiniBox SSA, and the vendor’s updated capabilities, including in particular the ability to deliver latencies as low as 35 microseconds and the InfiniBox SSA II’s new InfiniSafe cyber resilience support, make it an excellent fit for tier 0 workloads in the enterprise,” said Eric Burgener, Research Vice President, Infrastructure Systems, Platforms and Technologies Group, IDC.

The need to incorporate AIOps and DevOps as part of a modern IT strategy

CIOs have conveyed a common reality that they are under pressure to deliver nonstop operations within budget constraints, limited headcount, and short-term deadlines. A strategy to manage such converging forces cannot be cookie-cutter. Each enterprise has its own unique operating requirements. The challenge is for the IT team to deliver new capabilities that are tightly aligned with the organization’s specific needs – and to do it rapidly and with low risk. 

A smart move for CIOs, and other IT executives, is to exploit the underlying capabilities of the installed infrastructure. This has led IT leaders to demand that their infrastructures have the highest levels of autonomous automation and intelligence, along with proven extensions to enable further operational integration. This integration includes both interoperability with incumbent IT consoles as well as simple, trusted access to unique functionality and the creation of new capabilities. Additionally, it is critical that outdated fly-by-wire management controls are replaced with infrastructure intelligence, automation, and proven solutions.

Earlier this year, Infinidat introduced InfiniOps™, a collection of extensive software capabilities that exploit world-class AIOps functionality and expedite DevOps activities. By harnessing the unique operational awareness of InfiniVerse, IT teams have streamlined storage oversight and management to unprecedented levels of set-it-and-forget-it simplicity at their local site and across the globe. Infinidat also works closely with data center AIOps vendors, such as ServiceNow and VMware, so that Infinidat’s storage platforms are integrated into their cross-data center AIOps toolsets. Additionally, a proven set of IT tools are available to further integrate InfiniBox capabilities into IT operations for standard and container application deployment environments – at no additional cost.

IT must build upon a foundation of the highest performing, most available, and most intelligent infrastructure. InfiniBox delivers on these requirements with 100% availability, microsecond latency, multi-petabyte scale, and its Neural Cache.

Newly introduced InfiniOps technologies include InfiniVerse, a solution that delivers application-to-storage insights as a secure, cloud-based service. IT staff can see their entire storage infrastructure across multiple sites, including key indicators such as system health, rate of capacity consumption, and SAN/WAN performance compared to internal latency measurements. InfiniOps also offers a wide variety of tools to streamline IT operations, accelerate solution deployment, and reduce internal solution development risks.

Our customers are looking for enterprise storage solutions that deliver the utmost in availability, reliability, and performance. With the InfiniBox SSA II, Infinidat has done that and more. The InfiniBox SSA II adds sophisticated AIOps technology and comprehensive cyber resilience to the solution. At the same time, it continues Infinidat’s powerful “set-it-and-forget-it” ease-of-use architecture.

All of this provides our customers with a highly differentiated enterprise storage platform that provides not only strong technical values, but critical business value as well.

For more information, visit Infinidat here.

Security