Six out of ten organizations today are using a mix of infrastructures, including private cloud, public cloud, multi-cloud, on-premises, and hosted data centers, according to the 5th Annual Nutanix Enterprise Cloud Index. Managing applications and data, especially when they’re moving across these environments, is extremely challenging. Only 40% of IT decision-makers said that they have complete visibility into where their data resides, and 85% have issues managing cloud costs. Addressing these challenges will require simplification, so it’s no surprise that essentially everyone (94%) wants a single, unified place to manage data and applications in mixed environments.

In particular, there are three big challenges that rise to the top when it comes to managing data across multiple environments. The first is data protection.

“Because we can’t go faster than the speed of light, if you want to recover data, unless you already have the snapshots and copies where that recovered data is needed, it’ll take some time,” said Induprakas Keri, SVP of Engineering for Nutanix Cloud Infrastructure. “It’s much faster to spin up a backup where the data is rather than moving it, but that requires moving backups or snapshots ahead of time to where they will be spun up, and developers don’t want to think about things like that. IT needs an automated solution.”
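As a rough sketch of the kind of automation Keri is describing, the Python snippet below (an illustration with an in-memory catalog, not the Nutanix API) stages recent snapshot copies at an assumed recovery site ahead of time, so a restore never has to wait on a cross-site copy:

```python
# Illustrative sketch only, not the Nutanix API: stage snapshot copies at the
# recovery site ahead of time so a restore does not wait on a cross-site copy.
from datetime import datetime, timedelta, timezone

REPLICATION_WINDOW = timedelta(hours=1)   # assumed recovery point objective
RECOVERY_SITE = "dr-region-east"          # assumed target site

# In-memory stand-ins for a snapshot catalog and per-site storage.
snapshot_catalog = {
    "vm42-0900": datetime.now(timezone.utc) - timedelta(minutes=50),
    "vm42-0700": datetime.now(timezone.utc) - timedelta(hours=3),
}
site_storage = {RECOVERY_SITE: set()}

def stage_snapshots_for_recovery():
    """Copy every snapshot newer than the window to the recovery site."""
    cutoff = datetime.now(timezone.utc) - REPLICATION_WINDOW
    for snapshot_id, taken_at in snapshot_catalog.items():
        if taken_at >= cutoff:
            site_storage[RECOVERY_SITE].add(snapshot_id)

stage_snapshots_for_recovery()
print(sorted(site_storage[RECOVERY_SITE]))  # -> ['vm42-0900']
```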

Another huge problem is managing cost—so much so that 46% of organizations are thinking about repatriating cloud applications to on-premises, which would have been unthinkable just a few years ago.

“I’m familiar with a young company whose R&D spend was $18 million and the cloud spend was $23 million, with utilization of just 11%,” Keri said. “This wasn’t as much of a concern when money was free, but those days are over, and increasingly, organizations are looking to get their cloud spend under control.”

Cloud data management is complex, and without keeping an eye on it, costs can quickly get out of control.

The final big problem is moving workloads between infrastructures. It’s especially hard to move legacy applications to the cloud because of the refactoring required, and that effort can easily balloon beyond its original scope. Keri has experienced this issue firsthand many times in his career.

“What we often see with customers at Nutanix is that the journey of moving applications to the cloud, especially legacy applications, is one that many had underestimated,” Keri said. “For example, while at Intuit as CISO, I was part of the team that moved TurboTax onto AWS, which took us several years to complete and involved several hundred developers.”

Nutanix provides a unified infrastructure layer that enables IT to seamlessly run applications on a single underlying platform, whether it’s on-premises, in the cloud, or even a hybrid environment. And data protection and security are integral parts of the platform, so IT doesn’t have to worry about whether data will be local for recovery or whether data is secure—the platform takes care of it.

“Whether you’re moving apps which need to be run on a platform or whether you’re building net-new applications, Nutanix provides an easy way to move them back and forth,” Keri said. “If you start with a legacy application on prem, we provide the tools to move it into the public cloud. If you want to start in the cloud with containerized apps and then want to move them on-prem or to another cloud service provider, we provide the tools to do that. Plus, our underlying platform offers data protection and security, so you don’t have to worry about mundane things like where your data needs to be. We can take the pain away from developers.”

For more information on how Nutanix can help your organization control costs, gain agility, and simplify management of apps and data across multiple environments, visit Nutanix here.

Data Management

IT operations management (ITOM) – a framework that gives IT teams the tools to centrally monitor and manage applications and infrastructure across multiple environments – has been the foundation of enterprise IT for the last 30 years. It is the backbone that keeps technology stacks operating optimally, delivering timely business value and keeping employees engaged and productive by maintaining the availability of core applications. But the recent acceleration of digital transformation across global industries and the emergence of multi-cloud environments have introduced a new level of complexity.

While flexibility, elasticity, and ease of use make starting with the cloud an enticing prospect, ongoing operation and management can quickly become difficult to oversee. As a business scales its infrastructure and application deployments across a multi-cloud environment, the complexity inherent in diverse cloud operating models, tools, and distributed application architectures scales with it. Many IT professionals also lack technical or management skills in these areas.

According to a recent VMware survey of tech executives and their priorities for digital momentum, 73% reported a push to standardize multi-cloud architectures. For most global enterprises, the transition to multi-cloud environments that bring together the best technology stacks to unlock business value will be a multi-year journey. The key is how to empower cloud-focused or cloud-native technology teams to realize the full potential of their transformational investments in multi-cloud environments.

This prompts a key question: is ITOM still valid when it comes to managing enterprise technology stacks that are increasingly categorized as multi-cloud environments?

For years, IT operators had their favorite tools to manage infrastructure, applications, databases, networks, and more in support of the business. But with the growing migration of workloads to multi-cloud environments, IT pros are now scrambling between siloed operations tools and the cloud-specific tools provided by the key hyperscalers – AWS, Azure, or Google – for specific use cases. This tool sprawl is often exacerbated by the enterprise’s desire to pick the best cloud platform for the problem at hand: a .NET-based application migration might be a better fit for Azure, for example, while an AI/ML analysis of a large data lake could be best suited to Google Cloud.

This is where ITOM is evolving to bring a holistic, comprehensive view across the multiple disciplines of multi-cloud infrastructure and applications, integrating best practices from cost, operations, and automation management along with connected data. ITOM has been used for decades in on-premises enterprise environments to bring disparate, federated, and integrated tools together to operate securely. The key to accelerating digital transformation in the multi-cloud consumption era is complete visibility of the environment, automation of mundane tasks, and proactive operations that use connected data from multiple domains in near real time to drive preventive and proactive insights. That is achievable with a platform that brings the operations, cost, and automation domains together on a single integrated data platform.

Some ITOM incumbents are trying to “cloudify” their current solutions by tucking AIOps into the mix and integrating with application performance management (APM) offerings. At the same time, new market disrupters are joining the race to provide a single, integrated solution for enterprises that unifies multiple domains and data to deliver visibility, operations, cost optimization, and automation across multi-cloud environments. All of this suggests ITOM will become even more relevant to cloud and modern IT operations management now and in the future, but to maximize its value, IT teams must move from cloud chaos to cloud-smart management.

There are three primary characteristics of a cloud-smart IT operations management solution enterprises should look for:

Platform and API-based Solutions: Look for solutions that bring a set of common, integrated services together to monitor, observe, manage, optimize, and automate across infrastructure and applications. A platform and API-first approach lets IT teams future-proof their investments against the continuous invention and reinvention of the enterprise’s technology stacks and solutions. These offerings also help teams connect legacy technology with more modern solutions as they progress along their digital transformation journeys while maintaining and growing the business.

Integrated Data-driven Operations: A good ITOM solution should provide data-driven intelligence across multiple data domains to inform proactive decisions, leveraging AIOps 2.0 principles. AIOps must take a data-driven, automation-first, self-service approach so that it frees resources for value-based development and delivery instead of chasing reactive problems. Global digital businesses operate in multi-cloud environments, at the edge, and everywhere in between, alongside the people, processes, and things that shape contextual customer experiences. CloudOps adds rich, diverse data that can turn contextual, connected data into business insights. This cannot be managed with an old-school, events-based command center; instead, the solution must provide context across distributed, connected layers of technology by processing large volumes and varieties of data, observed through business KPIs that drive actions to resolution. That enables modern digital businesses to constantly optimize, make informed decisions, and eliminate mundane manual tasks to improve productivity and innovation.

Continuous Consumption, Agility, and Control: Digital businesses are moving from static on-premises environments to dynamic cloud environments with ephemeral workloads. This is where the right tools drive automation of repetitive, mundane tasks and enable governance and controls on cost, usage, and policy for ever-changing business needs that demand elastic resources, data-driven processes, dynamic configurations, and consumption pricing (a minimal cost-governance sketch follows below).
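To make the automation and cost-governance point concrete, here is a minimal, illustrative Python sketch. It is not tied to any ITOM product; the spend figures and the 25% growth threshold are assumptions.

```python
# Illustrative only: flag services whose month-over-month cloud spend exceeds
# a governance threshold. The spend data and threshold are hypothetical; a
# real ITOM platform would pull these figures from its cost APIs.
from typing import Dict, List, Tuple

def flag_cost_anomalies(
    spend_by_service: Dict[str, Tuple[float, float]],  # service -> (last month, this month)
    max_growth: float = 0.25,                           # allowed month-over-month growth
) -> List[str]:
    flagged = []
    for service, (previous, current) in spend_by_service.items():
        if previous > 0 and (current - previous) / previous > max_growth:
            flagged.append(f"{service}: {previous:,.0f} -> {current:,.0f}")
    return flagged

if __name__ == "__main__":
    sample = {"object-storage": (12_000, 13_000), "gpu-training": (40_000, 62_000)}
    for line in flag_cost_anomalies(sample):
        print("Review spend for", line)
```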

Multi-cloud, edge, and on-premises environments are all here to stay to drive enterprises’ digital transformation journeys. However, the pendulum of workloads moving between those discrete environments will continue to swing as business, compliance, and governance requirements change. ITOM and ITOps approaches are more relevant than ever in multi-cloud hybrid environments with distributed, ephemeral workloads, just as they once were in on-premises environments. Still, these operations management frameworks must evolve with the changing needs of the business to ensure they can simplify complex distributed technology stacks and cumbersome manual processes.

The goal is to drive contextual, observable insights that lead to optimized usage and consumption-based cultures by connecting business users, technology developers, and operators while providing complete end-to-end visibility of technology architectures. Only then can an organization benefit from modern ITOM that enables continuous change, compliance, and optimization to support a vibrant global business and its customers.

To learn more, visit us here.

Cloud Computing, IT Leadership

By Milan Shetti, CEO, Rocket Software

In today’s volatile markets, agile and adaptable business operations have become a necessity to keep up with constantly evolving customer and industry demands. To remain resilient to change and deliver innovative experiences and offerings fast, organizations have introduced DevOps testing into their infrastructures. DevOps environments give development teams the flexibility and structure needed to drive productivity and implement early and often “shift left” testing to ensure application optimization.

While DevOps testing ecosystems require cloud technology, DevOps modernization software has allowed businesses that utilize mainframe infrastructure to successfully implement DevOps testing processes into their multi-code environments. However, introducing DevOps to mainframe infrastructure can be nearly impossible for companies that do not adequately standardize and automate testing processes before implementation.

The problem with unstructured manual testing processes

The benefits of DevOps testing revolve around increased speed and flexibility. In order to reach the full potential of these benefits and ensure a successful DevOps adoption, organizations should work to unify testing operations and eliminate any threats to productivity long before implementation begins. 

While it is important to equip developers with tools they are comfortable with, businesses working within multi-code environments must shift away from processes that require multiple vendors or lack integration. Operations that force development teams to jump from software to software to perform tasks create a complicated testing environment that can slow processes and create a disconnect between teams and departments. 

Manual testing also creates barriers to optimizing DevOps. While manual processes will still play an essential role in Quality Assurance (QA) testing, the potential for human error and the tedious, time-consuming tasks that come with manual testing make it impossible to create the speed and accuracy required for DevOps testing. And, if your testing is done using a specific developer script, you’re likely not capturing key metrics to improve your software development lifecycle, such as how the code changes the database. DevOps and true “shift left” testing environments demand structure and flexibility throughout operations that can only be achieved through standardization and automation.

Elevating testing with standardized and automated processes

To ensure successful DevOps implementation, businesses must start with an entire audit of their current operations and value stream — which is all the activities required to turn a customer request or need into a product or service. In doing so, teams can determine which software or processes create disconnects or slow operations and where automation can be integrated to enhance speed and accuracy.

Opting for vendors that offer user-friendly, code-agnostic, and highly comprehensive DevOps platforms enables teams to create a central point of visibility, reporting, and collaboration for processes. This standardized approach eliminates silos between teams, minimizes onboarding time, and gives teams a common means to rapidly commit, document, and test changes to code and applications. Integrating systems and operations into a unified DevOps environment lets development and QA teams track and schedule testing times between departments effortlessly.

From there, development teams should look to automate as many testing processes as possible. Leveraging automation in testing allows teams to implement automatic, continuous testing that eliminates human error and ensures all bugs are squashed before production. Teams can create multiple test environments and processes like unit testing, integration testing and regression testing. Standardization allows multi-code testing to be done with greater predictability and by different people — reducing the reliance on a few gifted developers and creating a more stable production phase.
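As a simple illustration of what such automated checks can look like in practice, the pytest sketch below exercises a hypothetical business rule (apply_discount is invented for this example) with a unit test, a regression test, and a validation test that would run automatically on every commit:

```python
# Illustrative pytest sketch. apply_discount() is a hypothetical function; the
# point is that unit and regression checks run automatically on every commit
# instead of relying on a developer's one-off script.
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Example business rule under test."""
    if not 0 <= percent <= 100:
        raise ValueError("discount must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_unit_happy_path():
    assert apply_discount(100.0, 15) == 85.0

def test_regression_rounding():
    # Guards against a previously observed rounding defect (hypothetical).
    assert apply_discount(19.99, 10) == 17.99

def test_rejects_invalid_discount():
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)
```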

Development teams can also create knowledge bases of automated testing templates to quickly pull and use or adjust to fit new and evolving testing needs. And, by leveraging automated DevOps tools, teams can configure software with controls that automatically test and vet any new coding introduced into the environment to quickly identify and address any bugs in the code or changes to the application.

The future of the mainframe and DevOps testing

A recent Rocket survey of over 500 U.S. IT professionals showed that the mainframe is here to stay: more than half of the companies surveyed (56%) said the mainframe still makes up the majority of their IT infrastructure because of its security and reliability. Thanks to highly integrative and intuitive DevOps modernization software, multi-code environments can reap the benefits of increased productivity and enhanced innovation through continuous “shift left” testing methods.

Just as the mainframe continues to modernize, so too does DevOps modernization software. Future DevOps testing software looks to leverage artificial intelligence (AI) and machine learning (ML) to further strengthen and streamline testing environments. Organizations like Rocket Software are developing technologies that use AI to study testing processes and help teams more accurately identify where testing is required and what needs to be tested. ML software will track relationships in testing environments to identify patterns that help teams predict future testing needs and take a more proactive approach.

As agility and speed become more important in today’s digital market, the ability of teams working within multi-code environments to implement DevOps testing into operations will become a greater necessity. Businesses that standardize processes and utilize automation throughout testing will set their teams up for success. By creating structured and flexible DevOps testing environments, teams will enhance innovation and increase speed to market to help their business pull ahead and stay ahead of the competition.

To learn more about Rocket Software’s DevOps tools and solutions, visit the Rocket DevOps product page.

Software Development

The benefits of analyzing vast amounts of data, long-term or in real time, have captured the attention of businesses of all sizes. Big data analytics has moved beyond the rarified domain of government and university research environments equipped with supercomputers to include businesses of all kinds that are using modern high performance computing (HPC) solutions to get their analytics jobs done. It’s big data meets HPC ― otherwise known as high performance data analytics.

Bigger, Faster, More Compute-intensive Data Analytics

Big data analytics has relied on HPC infrastructure for many years to handle data mining processes. Today, parallel processing solutions handle massive amounts of data and run powerful analytics software that uses artificial intelligence (AI) and machine learning (ML) for highly demanding jobs.

A report by Intersect360 Research found that “Traditionally, most HPC applications have been deterministic; given a set of inputs, the computer program performs calculations to determine an answer. Machine learning represents another type of applications that is experiential; the application makes predictions about new or current data based on patterns seen in the past.”

This shift to AI, ML, large data sets, and more compute-intensive analytical calculations has contributed to the growth of the global high performance data analytics market, which was valued at $48.28 billion in 2020 and is projected to grow to $187.57 billion in 2026, according to research by Mordor Intelligence. “Analytics and AI require immensely powerful processes across compute, networking and storage,” the report explained. “As a result, more companies are increasingly using HPC solutions for AI-enabled innovation and productivity.”
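For context, those two figures imply a compound annual growth rate of roughly 25% over the six-year span. A quick check (illustrative arithmetic only):

```python
# Implied compound annual growth rate from the market figures cited above.
start, end, years = 48.28, 187.57, 6  # $B in 2020, $B in 2026
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 25%
```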

Benefits and ROI

Millions of businesses need to deploy advanced analytics at the speed of events. A subset of these organizations will require high performance data analytics solutions, and those HPC solutions and architectures will benefit from integrating diverse datasets from on-premises to edge to cloud. Using new sources of data, such as the Internet of Things, to empower customer interactions and other departments will give many businesses a further competitive advantage. Simplified analytics platforms that are user-friendly resources open to every employee, customer, and partner will change the responsibilities and roles of countless professions.

How does a business calculate the return on investment (ROI) of high performance data analytics? It varies with different use cases.

For analytics used to help increase operational efficiency, key performance indicators (KPIs) contributing to ROI may include downtime, cost savings, time-to-market, and production volume. For sales and marketing, KPIs may include sales volume, average deal size, revenue by campaign, and churn rate. For analytics used to detect fraud, KPIs may include number of fraud attempts, chargebacks, and order approval rates. In a healthcare environment, analytics used to improve patient outcomes might include key performance indicators that track cost of care, emergency room wait times, hospital readmissions, and billing errors.
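One simple, hypothetical way to roll such KPIs into an ROI figure is to express each one as an annualized gain and compare the total against the cost of the analytics program. The numbers below are placeholders, not benchmarks:

```python
# Hypothetical ROI arithmetic: annualized gains attributed to the analytics
# program divided by its cost. All figures are illustrative placeholders.
annual_gains = {
    "downtime_avoided": 400_000,               # operational efficiency KPI
    "fraud_losses_prevented": 250_000,         # fraud-detection KPI
    "incremental_campaign_revenue": 150_000,   # sales and marketing KPI
}
annual_cost = 500_000  # HPC infrastructure, software, and staffing

roi = (sum(annual_gains.values()) - annual_cost) / annual_cost
print(f"ROI: {roi:.0%}")  # (800,000 - 500,000) / 500,000 = 60%
```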

Customer Success Stories

Combining data analytics with HPC:

A technology firm applies AI, machine learning, and data analytics to client drug diversion data from acute, specialty, and long-term care facilities and delivers insights within five minutes of receiving new data, while maintaining an HPC environment with 99.99% uptime to comply with service level agreements (SLAs).

A research university was able to tap into 2 petabytes of data across two HPC clusters with 13,080 cores to create a mathematical model to predict behavior during the COVID-19 pandemic.

A technology services provider is able to inspect 124 moving railcars ― a 120% reduction in inspection time ― and transmit results in eight minutes, based on processing and analyzing 1.31 terabytes of data per day.

A race car designer is able to process and analyze 100,000 data points per second per car ― one billion in a two-hour race ― that are used by digital twins running hundreds of different race scenarios to inform design modifications and racing strategy.

Scientists at a university research center are able to utilize hundreds of terabytes of data, processed at I/O speeds of 200 Gbps, to conduct cosmological research into the origins of the universe.

Data Scientists are Part of the Equation

High performance data analytics is gaining stature as more and more data is being collected.  Beyond the data and HPC systems, it takes expertise to recognize and champion the value of this data. According to Datamation, “The rise of chief data officers and chief analytics officers is the clearest indication that analytics has moved from the backroom to the boardroom, and more and more often it’s data experts that are setting strategy.” 

No wonder skilled data analysts continue to be among the most in-demand professionals in the world. The U.S. Bureau of Labor Statistics predicts that the field will be among the fastest-growing occupations for the next decade, with 11.5 million new jobs by 2026. 

For more information read “Unleash data-driven insights and opportunities with analytics: How organizations are unlocking the value of their data capital from edge to core to cloud” from Dell Technologies. 

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

Data Management

By Liia Sarjakoski, Principal Product Marketing Manager, 5G Security, Palo Alto Networks

Governments, organizations, and businesses are readily embracing transformation at the edge of mobile networks. Mobile edge – with its distributed support for low latency, its capacity for rapid delivery of massive amounts of data, and its scalable cloud-native architectures – enables mission-critical industrial and logistics applications and creates richer experiences across remote working, education, retail, and entertainment. Bringing resources closer to the user enables a better user experience, serves mission-critical applications, and takes advantage of improved economics.

But mobile edge, including Multi-access Edge Computing (MEC), requires a new approach to cybersecurity. It is an environment where the network, applications, and services are distributed not only geographically but also across organizational boundaries. Service providers’ 5G infrastructure and enterprise networks will be deeply intertwined. Further, the mobile edge will be highly adaptive, dynamically scaling to meet the demands of new applications and changing usage patterns.

Effective 5G edge security is best achieved through a platform approach that combines the protection of diverse mobile edge environments under one umbrella. A platform approach not only provides visibility for advanced, network-wide threat detection but also provides the necessary foundation for security automation. Automation is vital for security to keep up with the dynamically changing 5G environment.

We can think of 5G networks as comprising four types of edge environments. Effective edge security spans all of them.


Regional data centers — protect distributed core network with distributed security

Driven by the explosion of mobile data and the push to improve customer experience, service providers are distributing core network functions — e.g., the Session Management Function (SMF) and the Access and Mobility Management Function (AMF) — closer to users, into regional data centers. This lets service providers improve user traffic latency and optimize their transport architecture for cost savings.

As network functions — e.g., SMF and AMF — are brought to the edge of the network, securing them needs to take place there, as well. Instead of providing protection at one to three national data centers, it now needs to be implemented at five to 10 regional data centers. The key interfaces to protect are N2 and N4. Unprotected N2 interfaces can be vulnerable to Radio Access Network (RAN) based threats from gNodeB base stations (gNBs). Unprotected N4 interfaces can be vulnerable to Packet Forwarding Control Protocol (PFCP) threats between distributed user plane function (UPF) — e.g. located in a MEC environment — and the core network.

Additionally, SMF, AMF, and other network function workloads need protection in this typically cloud-native container-based environment.

The key for protecting the regional data center environment is a cloud-native security platform that can be automatically scaled to changing traffic or topology demands. At the same time, many of the threats are telco specific and preventing them requires built-in support for the telco protocols.

Public MEC — support user experience with cloud-native security

Public MEC is part of the public 5G network and typically serves consumer and IoT use cases. It integrates applications as part of the 5G network and brings them closer to the user. This improves the user experience while also optimizing the cost by deploying resources where they are needed. Public MEC is built into the service provider’s network by utilizing a distributed user plane function (UPF) to directly break out traffic to edge applications. Many service providers are partnering with cloud service providers (CSPs) on building these application environments as the CSP platforms have become the standard.

As third-party applications become an integral part of 5G networks, protecting and monitoring the application workloads and segmenting the UPF with microsegmentation help stop any lateral movement of attacks.
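As one generic illustration of UPF microsegmentation (a Kubernetes NetworkPolicy created with the official Python client, not a Palo Alto Networks configuration; the namespace and pod labels are assumptions), the sketch below allows only SMF-labeled pods to reach the UPF’s PFCP port:

```python
# Illustrative microsegmentation sketch using the official Kubernetes Python
# client. The "core5g" namespace, "app" labels, and the assumption that UPF
# and SMF run as pods are hypothetical; PFCP itself uses UDP port 8805 (3GPP).
from kubernetes import client, config

def apply_upf_pfcp_policy() -> None:
    config.load_kube_config()  # or config.load_incluster_config() inside a cluster
    policy = client.V1NetworkPolicy(
        api_version="networking.k8s.io/v1",
        kind="NetworkPolicy",
        metadata=client.V1ObjectMeta(name="upf-allow-pfcp-from-smf", namespace="core5g"),
        spec=client.V1NetworkPolicySpec(
            # Select the UPF pods and allow ingress only from SMF pods on PFCP.
            pod_selector=client.V1LabelSelector(match_labels={"app": "upf"}),
            policy_types=["Ingress"],
            ingress=[
                client.V1NetworkPolicyIngressRule(
                    _from=[client.V1NetworkPolicyPeer(
                        pod_selector=client.V1LabelSelector(match_labels={"app": "smf"})
                    )],
                    ports=[client.V1NetworkPolicyPort(protocol="UDP", port=8805)],
                )
            ],
        ),
    )
    client.NetworkingV1Api().create_namespaced_network_policy(
        namespace="core5g", body=policy
    )

if __name__ == "__main__":
    apply_upf_pfcp_policy()
```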

Edge applications are also integral to the 5G user experience. Smoothly working applications, for example video content, AR/VR, and gaming, improve service providers’ customer retention.

Securing the public MEC calls for a cloud-native, multi-cloud approach for cloud workloads and microsegmentation.

Private MEC – empower enterprises with full control over 5G traffic

Private MEC is deployed at an enterprise customer’s premises and is often set up alongside a private 5G or LTE network, serving mission-critical enterprise applications. It utilizes a local UPF to break out user plane traffic to the enterprise network, where it is routed to a low-latency edge application or deeper into the enterprise network. A key driver for private MEC adoption is traffic privacy: the enterprise has full control of its 5G traffic, which never leaves its environment.

Private MEC carries the enterprise customer’s data. In today’s distributed world with eroded security perimeters, many enterprises rely on the Zero Trust approach to protect their users, applications and infrastructure. A critical building block for implementing a Zero Trust Enterprise is the ability to enforce granular security policies and security services across all network segments — including 5G traffic. Service providers need to find ways to empower enterprise customers with full control over the 5G traffic.

At the same time, the service provider needs to securely expose interfaces from the customer’s premises to their core network — namely the N4 interface to SMF for PFCP signaling traffic originating from the private MEC.

Private MEC security requires a flexible approach that can bring protection to heterogeneous private MEC environments across appliance, virtual, and cloud deployments. Many enterprises will choose turnkey private MEC solutions from partners and will require built-in security. Cloud service providers are also going after the private MEC market, where the ability to provide cloud-native security will be critical.

Mobile devices — most effectively protected with network-based security solutions

Accelerated by the rapid growth of IoT, the number of mobile devices is massive. The devices are heterogeneous, spanning a multitude of software and hardware platforms. Their limited computing and battery capacity often forces device vendors to compromise on security capabilities, making mobile devices a soft target. Infected devices can compromise an organization’s business-critical data and disturb mission-critical operations. They also pose a risk to the mobile network itself, especially in the case of massive, coordinated, botnet-originated DDoS attacks.

The combination of limited device resources, heterogeneous device types, and device vendors’ tight control of their platforms makes it difficult to implement device-based security solutions at scale. Network-based security, on the other hand, is a highly effective way to protect mobile devices at scale. When supported with granular visibility into user-level (SUPI) and device-level (PEI) traffic flows, network-based security can see and stop advanced threats in real time. Organizations can protect their mobile devices across attack vectors including vulnerability exploits, ransomware, malware, phishing, and data theft.

Network-based security can be deployed as part of any of the edge environments or the service provider’s core network.

Staying on top of privacy in distributed 5G networks

Protecting private information is more important than ever. Handling of private information is heavily regulated and breaches can result in public backlash. As the mobile core network becomes more distributed, the service providers need to double down on protecting Customer Proprietary Network Information (CPNI) that is now often carried in the signaling traffic (e.g., N4) between MEC sites and regional and national data centers. Service providers often use encryption to protect CPNI.

Conclusion

Securing the 5G edge requires a zero trust approach that can scale across multiple different environments. The distributed 5G network no longer has a clear perimeter. Service providers’, enterprises’, and CSPs’ assets and workloads are intertwined. Only with visibility and control across the whole system can service providers and enterprises detect breaches and lateral movement and stop the kill chain.

The new mobile networks are complex, but securing them doesn’t need to be. The key to simple 5G edge security is a platform approach that protects the key 5G interfaces under a single umbrella, no matter whether they are distributed across private and public telco clouds and data centers.

Learn more about Palo Alto Networks 5G-Native Security for protecting 5G interfaces, user traffic, network function workloads and more. Our ML-Powered NGFW for 5G provides deep visibility to all key 5G interfaces and can be deployed across data center (PA-Series), virtual (VM-Series), and container-based (CN-Series) environments. Our Prisma Cloud Compute provides cloud-native protection for container-based network function (CNF) workloads.

About Liia Sarjakoski:
Liia is the Principal Product Marketing Manager, 5G Security, at Palo Alto Networks.

IT Leadership, Zero Trust