Pandemic-era ransomware attacks have highlighted the need for robust cybersecurity safeguards. Now, leading organizations are going further, embracing a cyberresilience paradigm designed to bring agility to incident response while ensuring sustainable business operations, whatever the event or impact.

Cyberresilience, as defined by the Ponemon Institute, is an enterprise’s capacity for maintaining its core business in the face of cyberattacks. NIST defines cyberresilience as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that use or are enabled by cyber resources.”

The practice brings together the formerly separate disciplines of information security and business continuity and disaster recovery (BC/DR) in pursuit of common goals. Whereas traditional cybersecurity practices were designed to keep cybercriminals out and BC/DR focused on recoverability, cyberresilience aligns the strategies, tactics, and planning of these traditionally siloed disciplines. The goal: a more holistic approach than is possible when each is addressed individually.

At the same time, improving cyberresilience challenges organizations to think differently about their approach to cybersecurity. Instead of focusing efforts solely on protection, enterprises must assume that cyberevents will occur. Adopting practices and frameworks designed to sustain IT capabilities as well as system-wide business operations is essential.

“The traditional approach to cybersecurity was about having a good lock on the front door and locks on all the windows, with the idea that if my security controls were strong enough, it would keep hackers out,” says Simon Leech, HPE’s deputy director, Global Security Center of Excellence. Pandemic-era changes, including the shift to remote work and accelerated use of cloud, coupled with new and evolving threat vectors, mean that traditional approaches are no longer sufficient.

“Cyberresilience is about being able to anticipate an unforeseen event, withstand that event, recover, and adapt to what we’ve learned,” Leech says. “What cyberresilience really focuses us on is protecting critical services so we can deal with business risks in the most effective way. It’s about making sure there are regular test exercises that ensure that the data backup is going to be useful if worse comes to worst.”

A Cyberresilience Road Map

With a risk-based approach to cyberresilience, organizations evolve practices and design security to be business-aware. The first step is to perform a holistic risk assessment across the IT estate to understand where risk exists and to identify and prioritize the most critical systems based on business intelligence. “The only way to ensure 100% security is to give business users the confidence they can perform business securely and allow them to take risks, but do so in a secure manner,” Leech explains.
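To illustrate what business-aware prioritization can look like in practice, here is a minimal sketch that ranks systems by a simple likelihood-times-impact risk score. The systems, scores, and scale are hypothetical; a real assessment would use the organization’s own risk model and business intelligence.

# Hypothetical risk-prioritization sketch: systems and scores are illustrative only.
from dataclasses import dataclass

@dataclass
class SystemRisk:
    name: str
    likelihood: int       # 1 (rare) to 5 (almost certain)
    business_impact: int  # 1 (negligible) to 5 (critical to core business)

    @property
    def score(self) -> int:
        return self.likelihood * self.business_impact

systems = [
    SystemRisk("customer-billing", likelihood=3, business_impact=5),
    SystemRisk("internal-wiki", likelihood=4, business_impact=1),
    SystemRisk("order-fulfillment", likelihood=2, business_impact=5),
]

# Highest-risk systems get protection and recovery investment first.
for s in sorted(systems, key=lambda s: s.score, reverse=True):
    print(f"{s.name}: risk score {s.score}")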

Adopting a cybersecurity architecture that embraces modern constructs such as zero trust and that incorporates agile concepts such as continuous improvement is another requisite. It is also necessary to formulate and institute time-tested incident response plans that detail the roles and responsibilities of all stakeholders, so they are adequately prepared to respond to a cyberincident.

Leech outlines several other recommended actions:

Be a partner to the business. IT needs to fully understand business requirements and work in conjunction with key business stakeholders, not serve primarily as a cybersecurity enforcer. “Enable the business to take risk; don’t prevent them from being efficient,” he advises.

Remember that preparation is everything. Cyberresilience teams need to evaluate existing architecture documentation and assess the environment, whether by scanning for vulnerabilities, performing penetration tests, or running tabletop exercises. This confirms that systems have the appropriate levels of protection to remain operational in the event of a cyberincident. As part of this exercise, organizations need to prepare adequate response plans and enforce the requisite best practices to bring the business back online.

Shore up a data protection strategy. Different applications have different recovery-time-objective (RTO) and recovery-point-objective (RPO) requirements, both of which will shape backup and cyberresilience strategies. “It’s not a one-size-fits-all approach,” Leech says. “Organizations can’t just think about backup but [also about] how to do recovery as well. It’s about making sure you have the right strategy for the right application.” (A simplified sketch of per-application recovery objectives follows below.)
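To make the recovery-objective point concrete, the following minimal sketch maps hypothetical applications to RPO/RTO targets and derives a backup cadence from the RPO. The application names and numbers are illustrative assumptions, not HPE GreenLake configuration.

# Illustrative only: per-application recovery objectives drive how often backups run.
# Names and numbers are hypothetical, not a real product configuration.
recovery_objectives = {
    "erp":        {"rpo_minutes": 15,   "rto_minutes": 60},    # near-continuous protection
    "analytics":  {"rpo_minutes": 1440, "rto_minutes": 480},   # daily backups are acceptable
    "dev-portal": {"rpo_minutes": 720,  "rto_minutes": 1440},
}

def backup_interval_minutes(app: str) -> int:
    """Schedule backups at least as often as the RPO allows."""
    return recovery_objectives[app]["rpo_minutes"]

for app, objectives in recovery_objectives.items():
    print(f"{app}: back up every {backup_interval_minutes(app)} min, "
          f"recover within {objectives['rto_minutes']} min")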

The HPE GreenLake Advantage

The HPE GreenLake edge-to-cloud platform is designed with zero-trust principles and scalable security as a cornerstone of its architecture. The platform leverages common security building blocks, from silicon to the cloud, to continuously protect infrastructure, workloads, and data while adapting to increasingly complex threats.

HPE GreenLake for Data Protection delivers a family of services that reduces cybersecurity risks across distributed multicloud environments, helping prevent ransomware attacks, ensure recovery from disruption, and protect data and virtual machine (VM) workloads across on-premises and hybrid cloud environments. As part of the HPE GreenLake for Data Protection portfolio, HPE offers access to next-generation as-a-service data protection cloud services, including a disaster recovery service based on Zerto and HPE Backup and Recovery Service. The offering enables customers to easily manage hybrid cloud backup through a SaaS console and provides policy-based orchestration and automation functionality.

To help organizations transition from traditional cybersecurity to more robust and holistic cyberresilience practices, HPE’s cybersecurity consulting team offers a variety of advisory and professional services. Among them are access to workshops, road maps, and architectural design advisory services, all focused on promoting organizational resilience and delivering on zero-trust security practices.

HPE GreenLake for Data Protection also aids in the cyberresilience journey because it removes up-front costs and overprovisioning risks. “Because you’re paying for use, HPE GreenLake for Data Protection will scale with the business and you don’t have to worry [about whether] you have enough backup capacity to deal with an application that is growing at a rate that wasn’t forecasted,” Leech says.

For more information, click here.


High performance computing (HPC) is becoming mainstream for organizations, spurred on by their increasing use of artificial intelligence (AI) and data analytics. A 2021 study by Intersect360 Research found that 81% of organizations that use HPC reported they are running AI and machine learning or are planning to implement them soon. It’s happening globally and contributing to worldwide spending on HPC that is poised to exceed $59.65 billion in 2025, according to Grand View Research.

Simultaneously, the intersection of HPC, AI, and analytics workflows is putting pressure on systems administrators to support ever more complex environments. Admins are being asked to complete time-consuming manual configurations and reconfigurations of servers, storage, and networking as they move nodes between clusters to provide the resources required for different workload demands. The resulting cluster sprawl consumes inordinate amounts of information technology (IT) resources.

The answer? For many organizations, it’s a greater reliance on open-source software.

Reaping the Benefits of Open-Source Software & Communities

Developers at some organizations have found that open-source software is an effective way to advance the HPC software stack beyond the limitations of any one vendor. Examples of open-source software used for HPC include Apache Ignite, Open MPI, OpenSFS, OpenFOAM, and OpenStack. Almost all major original equipment manufacturers (OEMs) participate in the OpenHPC community, along with key HPC independent software vendors (ISVs) and top HPC sites.

Organizations like Arizona State University Research Computing have turned to open-source software like Omnia, a set of tools for automating the deployment of open source or publicly available Slurm and Kubernetes workload management along with libraries, frameworks, operators, services, platforms and applications.

The Omnia software stack was created to simplify and speed the process of deploying and managing environments for mixed workloads, including simulation, high throughput computing, machine learning, deep learning, and data analytics, by abstracting away the manual steps that can slow provisioning and lead to configuration errors.

Members of the open-source software community contribute everything from code and documentation updates to feature requests and bug reports. They also provide open forums for conversations about feature ideas and potential implementation solutions. As the open-source project grows and expands, so does the technical governance committee, with representation from top contributors and stakeholders.

“We have ASU engineers on my team working directly with the Dell engineers on the Omnia team,” said Douglas Jennewein, senior director of Arizona State University (ASU) Research Computing. “We’re working on code and providing feedback and direction on what we should look at next. It’s been a very rewarding effort… We’re paving not just the path for ASU but the path for advanced computing.”

ASU teams also use Open OnDemand, an open-source HPC portal that allows users to log in to an HPC cluster via a traditional Secure Shell Protocol (SSH) terminal or via Open OnDemand’s web-based interface. Once connected, they can upload and download files; create, edit, submit, and monitor jobs; run applications; and more via a web browser in a cloud-like experience with no client software to install and configure.
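For context, the terminal route that the portal complements might look something like the sketch below, which uses the third-party Paramiko SSH library to submit a Slurm batch job from Python. The hostname, username, and job script path are placeholders, not ASU systems.

# Sketch of the traditional SSH workflow that a portal like Open OnDemand wraps
# in a browser. Requires the third-party Paramiko library; host, user, and job
# script are placeholders.
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("hpc-login.example.edu", username="researcher")  # placeholder host/user

# Submit a batch job to Slurm and read back the job ID.
stdin, stdout, stderr = client.exec_command("sbatch ~/jobs/train_model.sh")
print(stdout.read().decode().strip())   # e.g. "Submitted batch job 123456"
print(stderr.read().decode().strip())

client.close()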

Some Hot New Features of Open-Source Software for HPC  

Here is a sampling of some of the latest features in open-source software available to HPC application developers.

Dynamically change a user’s environment by adding or removing directories in the PATH environment variable. This makes it easier to run specific software in specific folders without permanently editing the PATH variable and rebooting. It’s especially useful when third-party applications point to conflicting versions of the same libraries or objects. (A brief sketch of this kind of PATH manipulation follows below.)

Choice of host operating system (OS) provisioned on bare metal. The speed and accuracy of applications are inherently affected by the host OS installed on the compute node. Provisioning different operating systems on bare metal in the lab lets teams choose the one that works optimally at any given time and is best suited to a given HPC application.

Provide low-cost block storage that natively uses Network File System (NFS). This adds flexible scalability and is ideal for persistent, long-term storage.

Use telemetry and visualization on Red Hat Enterprise Linux. Users of Red Hat Enterprise Linux can take advantage of telemetry and visualization features to view power consumption, temperatures, and other operational metrics.

BOSS RAID controller support. Redundant array of independent disks (RAID) arrays use multiple drives to split the I/O load and are often preferred by HPC developers.
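As a rough sketch of the first feature above, the snippet below prepends and removes directories on the PATH of the current process, the kind of environment change that module tooling applies automatically. The directory paths are hypothetical.

# Illustrative sketch of adjusting PATH for the current process only;
# directory paths are hypothetical. Module tools automate this kind of change
# without editing shell profiles or rebooting.
import os

def prepend_to_path(directory: str) -> None:
    parts = os.environ.get("PATH", "").split(os.pathsep)
    if directory not in parts:
        os.environ["PATH"] = os.pathsep.join([directory] + parts)

def remove_from_path(directory: str) -> None:
    parts = os.environ.get("PATH", "").split(os.pathsep)
    os.environ["PATH"] = os.pathsep.join(p for p in parts if p != directory)

prepend_to_path("/opt/myapp-2.1/bin")   # use the 2.1 toolchain for this session
remove_from_path("/opt/myapp-1.9/bin")  # avoid the conflicting older version
print(os.environ["PATH"].split(os.pathsep)[0])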

The benefits of open-source software for HPC are significant. They include the ability to deploy faster, leverage fluid pools of resources, and integrate complete lifecycle management for unified data analytics, AI and HPC clusters.

For more information on the Omnia community, which includes Dell, Intel, university research environments, and many others, or to contribute to it, visit the Omnia GitHub repository.

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.


As the threat of climate change looms, organizations across every sector are focused on driving sustainable progress and innovation. Most of these organizations are measuring success based on their stated goals in environmental, social, and governance (ESG) initiatives and results. 

Now there is a way to quantify and verify those achievements. It’s called Project Alvarium, and its mission is to create a framework and open APIs that help organizations reliably and securely quantify trust in data collected and analyzed near the point of conception. This secure data can deliver near real-time insights into an operation’s carbon footprint, thus increasing transparency and accuracy in reporting.

Why Valid Sustainability Measurement Matters

As part of the imperative to slow or reverse global warming, some governments are regulating emission levels over time. Public pressure on companies and industries to reduce their carbon footprints is also having a major impact, especially among investors who want to align their hopes for environmental repair and renewal with where they invest their money.

Many companies claim that they pursue environmentally sustainable best practices when it comes to energy usage and pollution, yet few regularly report their results. In June, Bloomberg reported that a financial institution was fined by the U.S. Securities and Exchange Commission for falsely stating that some of the firm’s mutual funds had undergone ESG quality reviews. Such regulatory efforts are an attempt to combat “greenwashing,” or incorrect reporting and fraudulent or unsubstantiated claims about the environmentally responsible practices of companies.

This raises the question: How can the carbon footprint of companies ― from employees to devices, materials, and processes ― be measured and quantified?

Project Alvarium

Available for use by any industry, Project Alvarium includes tools for monitoring, reporting, and verifying metrics in data confidence fabrics (DCFs) that quantify trust in data delivered from devices to applications. This open-source trust framework and software development kit (SDK), hosted by the Linux Foundation and announced in 2021, is the culmination of a four-year collaboration among Dell, Intel, Arm, VMware, ZEDEDA, the IOTA Foundation, and ClimateCHECK. Trust in the data, and in the applications and infrastructure used, is quantified in a confidence score. The dashboard can be customized to include specific algorithms and indices related to different industries. Trust fabrics also make it easier to scale data and network security compliance requirements and to monetize data.
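As a loose illustration of the general idea of a confidence score, and not Project Alvarium’s actual scoring algorithm, the following sketch combines a few hypothetical trust annotations into a weighted score. The annotation names, weights, and scoring rule are assumptions for illustration.

# Hypothetical sketch of combining trust annotations into a single confidence
# score. The annotation names, weights, and scoring rule are illustrative and
# are not Project Alvarium's actual algorithm.
annotations = {
    "device_identity_verified":   True,   # e.g. hardware root of trust present
    "data_signed_at_source":      True,
    "transport_encrypted":        True,
    "gateway_integrity_attested": False,
}

weights = {
    "device_identity_verified":   0.35,
    "data_signed_at_source":      0.30,
    "transport_encrypted":        0.20,
    "gateway_integrity_attested": 0.15,
}

confidence = sum(weights[name] for name, passed in annotations.items() if passed)
print(f"Confidence score: {confidence:.2f} out of 1.00")  # 0.85 in this example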

The project represents a collaborative effort to unify open source and commercial trust insertion technologies in a standardized environment. There is no single data confidence fabric. Instead, each organization can build its own with preferred technologies using the Alvarium framework.

In a home environment, for example, there are many different Internet of Things (IoT) devices, from TVs and laptops to smartphones, cars, digital assistants, security cameras, and kitchen appliances. All are supported by intersecting trust fabrics from different vendors. Project Alvarium’s data confidence fabric can be adapted to a home environment to facilitate scalable, trusted, secure collaboration across heterogeneous ecosystems of applications and services connected to an open, interoperable edge. Most recently, Alvarium has been put to work in helping to define what data confidence looks like in the climate industry.

Measuring and Quantifying Environmental Impacts

Recently, the Project Alvarium framework was used to adapt an automated measurement, reporting, and verification (MRV) solution for a biodigestion energy and composting facility at a winery in Chile. The solution processes data from sensors measuring water, solids, gases, and anaerobic digestion processes to provide real-time insights into the facility’s carbon footprint.

Deployed at the edge, it is available as a blockchain solution with high levels of trust, transparency, and security. The solution at the facility has enabled the local utility in Chile, Bio Energía, to replace manual process reviews with continuous, real-time, trustworthy monitoring and reporting that provides a much more accurate understanding of how different innovations impact carbon emissions. 
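To make the idea concrete, here is a deliberately simplified sketch of the kind of calculation such an MRV pipeline automates. The sensor categories, readings, and emission factors are hypothetical placeholders, not the facility’s actual model.

# Simplified, hypothetical MRV-style calculation: sensor readings are converted
# to a CO2-equivalent estimate using placeholder emission factors. The factors
# and readings are illustrative, not the facility's actual model.
EMISSION_FACTORS_KG_CO2E = {
    "grid_electricity_kwh": 0.4,   # placeholder factor per kWh consumed
    "methane_captured_m3": -0.7,   # captured biogas displaces emissions (negative)
}

sensor_readings = {
    "grid_electricity_kwh": 1250.0,  # from power meters
    "methane_captured_m3":  310.0,   # from gas-flow sensors
}

footprint = sum(
    EMISSION_FACTORS_KG_CO2E[k] * v for k, v in sensor_readings.items()
)
print(f"Estimated net footprint this interval: {footprint:.1f} kg CO2e")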

This type of trustworthy sustainability reporting provides the public with validated information on company practices. It can lower barriers to carbon credit issuance and lure more investors to fund businesses that are introducing new innovations to mitigate the effects of climate change.

Learn more about Project Alvarium and edge computing solutions at Dell Technologies.
