Data about who owes how much to whom is at the core of any bank’s business. At Bank of New York Mellon, that focus on data shows up in the org chart too. Chief Data Officer Eric Hirschhorn reports directly to the bank’s CIO and head of engineering, Bridget Engle, who also oversees CIOs for each of the bank’s business lines.

“It’s very purposeful because a lot of the opportunities for us around data require tight integration with our technology,” says Hirschhorn. “I’m a peer to the divisional CIOs of the firm, and we work hand-in-glove because you can’t separate it out: I can make a policy, but that alone doesn’t get the job done.”

Hirschhorn, who joined the bank in late 2020, has worked in financial services for over three decades, during which the finance industry’s concerns about data have changed significantly.

“Twenty years ago, we were trying to make sure our systems didn’t fall over,” he says. “Ten years ago, we were worried about systemic importance, and contagion. When you solve some of the more structural concerns, it all gets back to the data. We are incredibly bullish on building advanced capabilities to understand the interconnectedness of the world around us from a data perspective.”

One key to that endeavor is being able to identify all the data related to an individual customer, and to identify the relationships that link that customer with others. Banks have a regulatory requirement to know who they’re dealing with — often referred to as KYC or “know your customer” — to meet anti-money-laundering and other obligations.

“The initial problem we were looking to solve is a long-standing issue in financial markets and regulated industries with large datasets,” Hirschhorn says, “and that was really around entity resolution or record disambiguation,” or identifying and linking records that refer to the same customer.

Being able to identify which of many loans have been made to the same person or company is also important for banks to manage their risk exposure. The problem is not unique to banks, as a wide range of companies can benefit from better understanding their exposure to individual suppliers or customers.

Defining a customer with data

But to know your customers, you must first define what exactly constitutes a customer. “We took a very methodical view,” says Hirschhorn. “We went through the enterprise and asked, ‘What is a customer?’”

Initially, there were differences between divisions about the number of fields and type of data needed to define a customer, but they ended up agreeing on a common policy.

Recognizing that divisions already had their own spending priorities, the bank set aside a central budget that each division could draw on to hire developers to ensure they all had the resources to implement this customer master. The message was, “You hire the developers and we will pay for them to get on with it,” Hirschhorn says.

With the work of harmonizing customer definitions out of the way, the bank could focus on eliminating duplicates. If it has a hundred records for a John Doe, for example, then it needs to figure out, based on tax ID numbers, addresses, and other data, which of those relate to the same person and how many different John Does there really are.
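In code, that disambiguation step can be sketched as a pairwise scoring pass with a confidence threshold. This is a minimal illustration with invented field names (`tax_id`, `address`, `name`) and weights, not BNY Mellon's actual system:

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Fuzzy similarity between two names, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(rec_a: dict, rec_b: dict) -> float:
    """Combine exact identifiers with fuzzy name matching.
    Field names and weights are illustrative only."""
    score = 0.0
    if rec_a.get("tax_id") and rec_a["tax_id"] == rec_b.get("tax_id"):
        score += 0.6  # strong identifier carries most of the weight
    if rec_a.get("address") and rec_a["address"] == rec_b.get("address"):
        score += 0.2
    score += 0.2 * name_similarity(rec_a["name"], rec_b["name"])
    return score

def resolve(records: list[dict], threshold: float = 0.75) -> list[set[int]]:
    """Group record indices whose pairwise score clears the threshold.
    A naive O(n^2) pass; real systems block/index candidates first."""
    clusters: list[set[int]] = []
    for i, rec in enumerate(records):
        placed = False
        for cluster in clusters:
            if any(match_score(rec, records[j]) >= threshold for j in cluster):
                cluster.add(i)
                placed = True
                break
        if not placed:
            clusters.append({i})
    return clusters

records = [
    {"name": "John Doe", "tax_id": "123-45-6789", "address": "1 Main St"},
    {"name": "J. Doe",   "tax_id": "123-45-6789", "address": "1 Main St"},
    {"name": "John Doe", "tax_id": "987-65-4321", "address": "9 Elm Ave"},
]
print(resolve(records))  # records 0 and 1 collapse into one John Doe
```

Raising the threshold trades recall for precision, which is why a system aiming to automate resolution needs high-confidence matches.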

BNY Mellon wasn’t starting from scratch. “We actually had built some pretty sophisticated software ourselves to disambiguate our own customer database,” he says. There was some automation around the process, but the software still required manual intervention to resolve some cases, and the bank needed something better.

Improving the in-house solution would have been time consuming, he says. “It wasn’t a core capability, and we found smarter people in the market.”

Among those people were the team at Quantexa, a British software developer that uses machine learning and multiple public data sources to enhance the entity resolution process.

The vendor delivered an initial proof of concept to BNY Mellon just before Hirschhorn joined, so one of his first steps was to move on to a month-long proof of value, providing the vendor with an existing dataset to see how its performance compared with that of the in-house tool.

The result was a greater number of records flagged as potentially relating to the same people — and a higher proportion of them resolved automatically.

“There’s a level of confidence when you do correlations like this, and we were looking for high confidence because we wanted to drive automation of certain things,” he says.

After taking some time to set up the infrastructure and sort out the data workflow for a full deployment, BNY Mellon then moved on to a full implementation, which involved staff from the software developer and three groups at the bank: the technology team, the data subject matter experts, and the KYC center of excellence. “They’re the ones with the opportunity to make sure we do this well from a regulatory perspective,” he says.

Quantexa’s software platform doesn’t just do entity resolution: It can also map networks of connections in the data — who trades with whom, who shares an address, and so on.
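A minimal sketch of that kind of network mapping: link any two entities that share an attribute value, such as an address or phone number. All names, fields, and data here are invented for illustration, not Quantexa's implementation:

```python
from collections import defaultdict
from itertools import combinations

def shared_attribute_graph(entities: dict[str, dict]) -> dict[str, set[str]]:
    """Build an undirected graph connecting entities that share
    any attribute value (address, phone, etc.)."""
    by_value = defaultdict(set)
    for name, attrs in entities.items():
        for field, value in attrs.items():
            by_value[(field, value)].add(name)

    graph = defaultdict(set)
    for names in by_value.values():
        for a, b in combinations(sorted(names), 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

entities = {
    "Acme Ltd":  {"address": "1 Main St", "phone": "555-0100"},
    "Acme Corp": {"address": "1 Main St"},
    "Beta LLC":  {"phone": "555-0100"},
    "Gamma Inc": {"address": "9 Elm Ave"},
}
g = shared_attribute_graph(entities)
print(sorted(g["Acme Ltd"]))  # linked via shared address and shared phone
```

Once the graph exists, standard traversals can surface clusters of related parties that no single record would reveal.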

The challenge, for now, may be in knowing when to stop. “You correlate customer records with external data sources, and then you say, let’s correlate that with our own activity, and let’s add transaction monitoring and sanctions,” he says. “We’re now doing a proof of concept to add more datasets to the complex, as once you start getting the value of correlating these data sets, you think of more outcomes that can be driven. I just want to throw every use case in.”

Investing in technology suppliers

BNY Mellon isn’t just a customer of Quantexa; it’s also one of its investors. It first took a stake in September 2021, after working with the company for a year.

“We wanted to have input in how products developed, and we wanted to be on the advisory board,” says Hirschhorn.

The investment in Quantexa isn’t an isolated phenomenon. Among the bank’s other technology suppliers it has invested in are specialist portfolio management tools Optimal Asset Management, BondIT, and Conquest Planning; low-code application development platform Genesis Global; and, in April 2023, IT asset management platform Entrio.

The roles of customer and investor don’t always go together, though. “We don’t think this strategy is applicable to every new technology company we use,” he says.

While some companies may buy a stake in a key supplier to stop competitors taking advantage of it, that’s not BNY’s goal with its investment in Quantexa’s entity resolution technology, Hirschhorn says.

“This isn’t proprietary; we need everybody to be great at this,” he says. “People are getting more sophisticated in how they perpetrate financial crimes. Keeping pace, and helping the industry keep pace, is really important to the health of the financial markets.”

So when Quantexa sought new investment in April 2023, BNY Mellon was there again—this time joined by two other banks: ABN AMRO and HSBC.


You’ve heard about a nightmare scenario playing out for peers at other companies and hope it doesn’t hit yours. Trouble tickets are rolling in, and there’s a shortage of qualified people to address security alerts and help desk issues right when customer demand, supply shortages, and potential threats are at their peak.

Even with flexible remote work policies, the most seasoned employees in roles such as customer support, data science, business analysis, and DevSecOps leave for greener pastures just when they finally seem to have figured out how everything works.

Why is an exodus of skilled knowledge workers becoming a recurring pattern in customer-oriented organizations, and what can IT leaders do to improve their digital employee experience (DEX) to convince them to stay?

The great hybrid office migration

A few lucky “born on the web” companies were built on the premise of 100% remote work. The pandemic of 2020 forced the rest of the world to move knowledge workers out of the office into fully or partially remote work models. 

Migratory employees in technology roles appreciated the newfound ability to work from home in sweatpants and avoid the daily commute. Many idealistically vowed never to return to work for an employer that required them to come back to the office.

Employers benefitted too, releasing some of their real estate for savings on facility costs and reducing travel expenses. Less scrupulous bosses took it a step further, capturing additional hours in the workday by implementing draconian attention monitoring tools or letting employees stay on duty beyond typical office hours.

Now that the pandemic has become endemic, some companies are reversing their position on remote work and asking employees to come back into the office, at least some of the time. We’re settling on a hybrid model of digital work. In 2023, 58% of knowledge workers in the United States will continue to be able to work remotely at least one day a week, while 38% will continue as full-time remote workers.

Despite the initial novelty of having pets and kids hilariously interrupting Zoom calls, this new normal of blurring the lines between work and home life has not turned out to be all unicorns and rainbows for digital employees.

Dealing with digital work friction

Employers used to be able to tell teams to stay late in the office to fulfill a rush of customer orders, or to be on call to respond to issues on weekends. The signs of employee burnout were easy to spot even before “the great resignation” of the 2020s.

CIOs built or bought applications to allow virtual work, so more team members could be available online to respond to requests through remote access without coming into the office. This helped, but burnout has only increased for today’s digital workers, who may have lost the separation between work and home life, while staffing still can’t keep up with the workload.

A recent Gartner HR study estimated that 24% of workers would likely shift to a new job in 2022, and this turnover rate is especially high among knowledge workers who must interact daily with the company’s systems. Compared to pre-pandemic employee sentiment, 20% more respondents cited their digital work experience as a significant contributing factor to job satisfaction.

Even with some arbitrary job cuts happening at larger companies, skilled team members can find work elsewhere if they are frustrated, and unfilled roles in customer service, SecOps, and engineering positions are still common. 

Potential recruits can check any number of salary disclosure sites to figure out what they are worth on the market, and they can also look on Glassdoor to see why employees are dissatisfied working at a company. In a hybrid work world, a bad employee experience is not always about low pay, long hours, or “mean bosses” anymore; it’s about digital work friction that inhibits employees’ ability to deliver meaningful value.

Employee expectations of DEX

All employees want to work for employers that get the fundamentals right: fair compensation, a harassment-free workplace, and work/life balance. Digital employees, specifically, have a unique set of concerns about the technology environment they must work within, since in many cases it is their only connection to co-workers and customers.

This is why CIOs spend so much of their time researching the digital tools employees use and spinning up new projects to upgrade that experience.

A successful DEX technology suite can positively impact employee sentiment if it delivers for them on three dimensions:

Engagement: Are employees using the company’s productivity, issue-tracking, collaboration, and system-monitoring tools on a daily basis? Individuals want self-service platforms that will work on their target workstation or devices, but they also need education, documentation, and expert support from the organization to maintain successful adoption.

Companies can measure improved engagement through monitoring and visibility into organizational, team, and individual usage patterns, but more importantly, they should offer mechanisms for a positive feedback loop, so employees can register their preferences and concerns about the suite.

Empowerment: Are individuals, teams and regions authorized for just the analytic, management, and problem-solving tools and data they need without unnecessary friction or distractions? Employee empowerment is a continuous struggle for many companies to deliver, as permissions for analytics, user data, work items, and access privileges are usually highly customized to meet overlapping work, customer requirements, and regulatory regimes.  

Empowered employees proactively identify emerging demands and roadblocks, and effectively take action to collaborate with the right team members to find solutions. 

Efficiency: Intelligent automation triages and prioritizes important customer issues for teams, and helps individuals filter out irrelevant alerts from disparate systems and services. Employees progress through tasks with fewer interruptions, spend less time on pointless root cause analysis, and remediate issues with automated actions.
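A toy sketch of that triage idea: score each alert by severity and customer impact, suppress the noise, and surface the rest in priority order. The weights and thresholds here are invented, not any particular product's logic:

```python
def triage(alerts: list[dict], suppress_below: float = 0.3) -> list[str]:
    """Score alerts by severity weighted by customer impact,
    drop low-value noise, and return IDs in priority order.
    Weights are illustrative only."""
    scored = []
    for alert in alerts:
        score = alert["severity"] * (1.0 if alert["customer_facing"] else 0.4)
        if score >= suppress_below:
            scored.append((score, alert["id"]))
    return [alert_id for _, alert_id in sorted(scored, reverse=True)]

alerts = [
    {"id": "disk-a", "severity": 0.9, "customer_facing": True},
    {"id": "cron-b", "severity": 0.8, "customer_facing": False},
    {"id": "ping-c", "severity": 0.2, "customer_facing": False},
]
print(triage(alerts))  # low-impact noise is filtered out entirely
```

The payoff is that employees see a short, ordered worklist instead of a raw firehose of alerts.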

All employees want to make progress on goals. The upside of efficiency is almost limitless because as one productivity constraint is removed, another bottleneck will appear upstream or downstream.

Enterprise expectations of DEX

From the CIO’s perspective, DEX is best thought of as an enterprise-wide transformational initiative that increases the value of critical talent over time, rather than as a project that delivers short-term gains.

The customer still comes first. But let’s face it, there are already enough customer-facing performance metrics in the world. 

DEX turns measurement and metrics inward, then captures even more value from the intentional feedback and non-verbal cues provided by employees.

This virtuous cycle of continuous feedback and improvement of the ‘three E’s’ of DEX will fuel engagement, empowerment, and efficiency for employees and executives, and better performance not just on revenue and cost targets but also in terms of employee satisfaction and higher retention rates.

The Intellyx Take

Work has changed forever. 

From a morale perspective, remote workers might miss something about the camaraderie of an office: the exciting pre-launch demo, an in-person standup, an informal desk visit, or a coffee break to share ideas about a particular issue with colleagues. But that doesn’t mean we can’t make DEX the best it can be, wherever the team is located.

Therefore, every organization will need to define a digital employee experience that engages and empowers employees, making every working minute a more efficient use of time, including taking some well-earned time off to unplug from the digital world.

©2023 Intellyx LLC. At the time of writing, Tanium is an Intellyx subscriber. No AI chatbots were used to write any part of this article.


When the world’s largest healthcare company by revenue went looking for a technology solution that could improve quality of care while reducing costs, the search took ten years. What they found—an innovative way to model healthcare data—is saving the company an estimated $150M annually and enabling its medical professionals to provide accurate and effective care path recommendations in real time. It’s a remedy with important implications for the future of healthcare. 

This same solution, graph databases and graph analytics, proved crucial at the height of the Covid-19 pandemic. A testament to its potential, the market for graph technology is projected to reach $11.25B by 2030.[1]

Graph technology isn’t new. It’s what social networking applications use to store and process vast amounts of “connected” data. It turns out graphs can do much more than connect people to their high school friends. They are also perfect for storing and visualizing large healthcare data models so they can be quickly processed and analyzed. Graphs can make previously unavailable connections from disparate data spread across many different platforms. One example would be making connections between data collected from a patient’s various doctors and pharmacies. 
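As a minimal illustration of the idea (invented records, standard library only, no real graph database), connected patient data can be modeled as an adjacency list and a patient's scattered records collected in a single hop:

```python
from collections import defaultdict

# Edges connect a patient to records held by different providers.
# All names and record identifiers are invented for illustration.
edges = [
    ("patient:jane", "dr_smith:visit_2023_01"),
    ("patient:jane", "dr_jones:visit_2023_03"),
    ("patient:jane", "pharmacy_a:rx_lisinopril"),
    ("patient:bob",  "pharmacy_a:rx_metformin"),
]

graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def unified_view(patient: str) -> list[str]:
    """One hop from the patient node collects records that would
    otherwise live in separate provider systems."""
    return sorted(graph[patient])

print(unified_view("patient:jane"))
```

In a relational database the same view would require joins across several provider tables; in a graph it is a neighborhood lookup.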

Why Graph Analytics is Important for Healthcare

Hospitals deal with stockpiles of data. Every touchpoint is stored in a hospital’s electronic health record, including visits, prescriptions, operations, and immunizations. Too much data can be a challenge, making it difficult to access and analyze information when and where it’s needed.

Hence the business case for graph databases. Data that’s represented in the form of a graph rather than a table enables quick analysis and faster time to insights. For healthcare professionals, sophisticated graph algorithms can return specific results, and graph visualization tools allow analysts to make useful connections and identify patterns that help solve problems.

Graph analytics is an ideal technology to help tackle the challenges caused by large, disparate datasets, since it becomes more impactful as the volume, velocity, and variety of data expand.[2] Storing and accessing this data alone is not enough. As a tool set, graph analytics prioritizes the relationships between the data—an arena where relational databases fall short.

Data scientists and leaders in the healthcare industry can use the most advanced graph analytics, known as native parallel graphs, to link datasets across multiple domains. This would allow the system to find frequent patterns and suggest the next best action. Ultimately, medical professionals would be able to rely on the most accurate data to provide patients with beneficial, real-time recommendations. 

“In the past, when somebody called into our call center, we would have had to log into 15 different systems to get a view of this member’s activity. Now users log into just one screen and have a beautiful timeline view of every touchpoint we’ve had with members,” said a distinguished engineer from a major healthcare company that recently deployed graph technology.

The Impact of Graph Technology on Covid-19

A graph-based approach to community tracing and risk detection was essential in 2020 as government officials and healthcare professionals worked overtime to understand and prevent the spread of Covid-19. For government agencies, graph technology led to agile and evidence-based emergency management and improved public health emergency response. 

Because graph analytics can sift through thousands of data sources and find relationships, even with complex and varying inputs, it was an excellent way to answer complicated questions related to the spread of disease. These capabilities helped with contact tracing used to identify, locate, and notify people who had been exposed to the virus. 
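The contact-tracing traversal amounts to a breadth-first search over a contact graph: find everyone reachable from a confirmed case within a fixed number of hops. A small sketch with invented data:

```python
from collections import deque

def exposed_within(contacts: dict, source: str, max_hops: int = 2) -> set:
    """Breadth-first search over a contact graph: everyone reachable
    from a confirmed case within max_hops contacts."""
    seen = {source: 0}          # person -> distance in hops
    queue = deque([source])
    while queue:
        person = queue.popleft()
        if seen[person] == max_hops:
            continue            # stop expanding at the hop limit
        for neighbor in contacts.get(person, ()):
            if neighbor not in seen:
                seen[neighbor] = seen[person] + 1
                queue.append(neighbor)
    del seen[source]            # the source case is not an "exposure"
    return set(seen)

contacts = {
    "case_0": {"alice", "bob"},
    "alice":  {"case_0", "carol"},
    "carol":  {"alice", "dave"},
}
print(sorted(exposed_within(contacts, "case_0")))  # dave is beyond 2 hops
```

Production systems add edge weights (duration and proximity of contact) and run at a much larger scale, but the traversal logic is the same.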

The technology also recognized relationships between data points—for example, common symptoms of people more likely to have a serious case of Covid based on pre-existing conditions. Armed with this insight, healthcare providers could warn patients when they were at higher risk. 

Future Implications for Healthcare and Beyond

As the healthcare industry moves beyond the pandemic, it emerges more prepared to respond to a wide variety of situations—from widespread health crises to everyday patient care. Healthcare companies already applying graph databases and graph analytics are experiencing the benefits. The technology supports their work to help members embrace healthier lifestyles, avoid costly pharmaceuticals, recover faster from medical procedures, and more. Essentially, healthcare companies using graph technology are better equipped to provide quality care while controlling costs.

For data-centric companies looking to implement these solutions, a graph database running on Dell PowerEdge servers is the optimal offering in terms of performance, efficiency, and scale. To learn more about the business benefits of connected data and solutions for analytics, read this brief.




Digitalization is a double-edged sword for banks, especially when it comes to security. A massive shift to cloud and API-based ways of working has made the sector more agile and innovative, but it has also opened the floodgates for identity theft. As interactions and transactions become more interconnected, even the simplest processes like opening a new account or making a balance transfer become riddled with security concerns.

As financial services become more digital in nature, it’s important that banks think differently when using data analytics, security tools, and education to improve identity authentication and customer data privacy. Avaya’s research report reveals three critical ways to do so.

1. Make the Most of the Powerful Tool in Your Customers’ Hands

Almost every customer owns a smartphone, and they use that device to call into the contact center when they need to resolve an issue or complicated matter. Have you thought about what can be done with this device to enhance identity authentication? Older security methods like Knowledge-based Authentication (KBA) only prove what a person knows. By leveraging the sensors in a customer’s connected device, banks can go one step further to prove who someone is — and that makes all the difference.

These sensors, which include location services, cameras, and QR code scanning, make a customer’s smart device a valuable source of a vast amount of information and inputs that help banks create a trusted identity template for customers. Once this identity template is established, all transactions are tied directly to a customer’s verified identity. This allows simple but risky transactions like requesting a new debit card, ordering checks, or updating an address to be done simply, quickly, and with far lower risk to the bank and its customers.

2. Shield Sensitive Data from Agents Using Zero Knowledge Proof

When a customer calls into the contact center, all of that person’s information is made visible to the agent who needs to verify them: their address, their driver’s license number, their social security number, etc. What’s stopping an agent from using their cellphone to take a picture of a customer’s personally identifiable information? It’s a scary thought, especially with so many customer service jobs now offsite out of supervisors’ views. Customer service workers don’t need so much visibility into this data.

Zero Knowledge Proof is an advanced cryptographic technique that makes it possible for organizations to verify sensitive or personally identifiable information without revealing that data to workers. The agent doesn’t need to see the data to verify its accuracy or authenticity and will therefore have no knowledge of it — hence, “zero knowledge proof.” All employees will see are the results that matter to them (whether a payment went through, whether a document was signed, whether a customer’s SSN checks out), with a green checkmark verifying approval from whichever third-party company verified it.
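A true zero-knowledge proof involves cryptographic protocols well beyond a short example, but the data-shielding pattern described here (the agent sees only a pass/fail result, never the underlying value) can be sketched with a salted hash commitment. Everything below is a simplified illustration, not a real ZKP:

```python
import hashlib
import hmac
import os

def commit(secret: str, salt: bytes) -> bytes:
    """Salted hash commitment of a secret value."""
    return hashlib.sha256(salt + secret.encode()).digest()

class Verifier:
    """Third-party verifier: stores only the commitment, never the raw SSN."""
    def __init__(self, stored_commitment: bytes, salt: bytes):
        self._commitment = stored_commitment
        self._salt = salt

    def check(self, claimed_ssn: str) -> bool:
        """Return only pass/fail; the agent's screen shows a checkmark."""
        candidate = commit(claimed_ssn, self._salt)
        return hmac.compare_digest(candidate, self._commitment)

salt = os.urandom(16)
verifier = Verifier(commit("123-45-6789", salt), salt)

print(verifier.check("123-45-6789"))  # True  -> green checkmark
print(verifier.check("111-11-1111"))  # False -> verification fails
```

The key property survives the simplification: the contact-center agent handles only a boolean result, so there is nothing sensitive to photograph off the screen.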

3. Outbound Notifications for Fraud Protection

In a sea of scam callers, most customers immediately send unknown numbers to voicemail. This is a major challenge for banks trying to reach customers to perform a number of legitimate tasks and build relationships. By securely sending notifications across the channel of a customer’s choice (SMS, in-app message if the company offers a mobile app), banks can reach customers faster and with high veracity authentication. In this way, customers will receive a notification via text or in-app message before an incoming call asking them to “tap” and log in. They will be instantly authenticated and, if desired, can schedule the call for a convenient time.

These notifications can also be used to simplify routine interactions like checking an account balance or paying a bill. For example, a customer can click the link in a text message their bank sends reminding them that a credit card payment is due. Notifications can be sent for non-payment interactions as well, such as post-contact surveys and new-customer eForms. All of this can be done with full PCI compliance. In fact, banks can take their contact center out of the scope of compliance altogether.

Learn more from Avaya’s research about what banks should consider to digitally evolve. View the full report, Five Recent Trends Shaping the Banking Industry.


Pandemic-era ransomware attacks have highlighted the need for robust cybersecurity safeguards. Now, leading organizations are going further, embracing a cyberresilience paradigm designed to bring agility to incident response while ensuring sustainable business operations, whatever the event or impact.

Cyberresilience, as defined by the Ponemon Institute, is an enterprise’s capacity for maintaining its core business in the face of cyberattacks. NIST defines cyberresilience as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that use or are enabled by cyber resources.”

The practice brings together the formerly separate disciplines of information security, business continuity, and disaster recovery (BC/DR), deployed to meet common goals. Although traditional cybersecurity practices were designed to keep cybercriminals out and BC/DR focused on recoverability, cyberresilience aligns the strategies, tactics, and planning of these traditionally siloed disciplines. The goal: a more holistic approach than what’s possible by addressing each individually.

At the same time, improving cyberresilience challenges organizations to think differently about their approach to cybersecurity. Instead of focusing efforts solely on protection, enterprises must assume that cyberevents will occur. Adopting practices and frameworks designed to sustain IT capabilities as well as system-wide business operations is essential.

“The traditional approach to cybersecurity was about having a good lock on the front door and locks on all the windows, with the idea that if my security controls were strong enough, it would keep hackers out,” says Simon Leech, HPE’s deputy director, Global Security Center of Excellence. Pandemic-era changes, including the shift to remote work and accelerated use of cloud, coupled with new and evolving threat vectors, mean that traditional approaches are no longer sufficient.

“Cyberresilience is about being able to anticipate an unforeseen event, withstand that event, recover, and adapt to what we’ve learned,” Leech says. “What cyberresilience really focuses us on is protecting critical services so we can deal with business risks in the most effective way. It’s about making sure there are regular test exercises that ensure that the data backup is going to be useful if worse comes to worst.”

A Cyberresilience Road Map

With a risk-based approach to cyberresilience, organizations evolve practices and design security to be business-aware. The first step is to perform a holistic risk assessment across the IT estate to understand where risk exists and to identify and prioritize the most critical systems based on business intelligence. “The only way to ensure 100% security is to give business users the confidence they can perform business securely and allow them to take risks, but do so in a secure manner,” Leech explains.

Adopting a cybersecurity architecture that embraces modern constructs such as zero trust and that incorporates agile concepts such as continuous improvement is another requisite. It is also necessary to formulate and institute time-tested incident response plans that detail the roles and responsibilities of all stakeholders, so they are adequately prepared to respond to a cyberincident.

Leech outlines several other recommended actions:

Be a partner to the business. IT needs to fully understand business requirements and work in conjunction with key business stakeholders, not serve primarily as a cybersecurity enforcer. “Enable the business to take risk; don’t prevent them from being efficient,” he advises.

Remember that preparation is everything. Cyberresilience teams need to evaluate existing architecture documentation and assess the environment, either by scanning it for vulnerabilities, performing penetration tests, or running tabletop exercises. This checks that systems have the appropriate levels of protection to remain operational in the event of a cyberincident. As part of this exercise, organizations need to prepare adequate response plans and enforce the requisite best practices to bring the business back online.

Shore up a data protection strategy. Different applications have different recovery-time-objective (RTO) and recovery-point-objective (RPO) requirements, both of which will impact backup and cyberresilience strategies. “It’s not a one-size-fits-all approach,” Leech says. “Organizations can’t just think about backup but [also about] how to do recovery as well. It’s about making sure you have the right strategy for the right application.”
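The point about per-application RTO and RPO targets can be made concrete with a toy mapping from those targets to protection strategies. The tiers and thresholds below are invented for illustration, not HPE's or anyone's actual policy:

```python
def protection_plan(app: str, rpo_minutes: int, rto_minutes: int) -> str:
    """Pick a backup and recovery strategy from RPO/RTO targets.
    Thresholds are illustrative only."""
    # RPO (tolerable data loss) drives how often data is captured.
    if rpo_minutes <= 5:
        strategy = "continuous replication"
    elif rpo_minutes <= 60:
        strategy = "hourly snapshots"
    else:
        strategy = "nightly backups"
    # RTO (tolerable downtime) drives how recovery is staged.
    recovery = "hot standby" if rto_minutes <= 15 else "restore from backup"
    return f"{app}: {strategy} + {recovery}"

apps = [("payments", 1, 5), ("reporting", 240, 480)]
for app, rpo, rto in apps:
    print(protection_plan(app, rpo, rto))
```

The point of the exercise is exactly Leech's: a payments system and a reporting system justify very different (and very differently priced) protection strategies.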

The HPE GreenLake Advantage

The HPE GreenLake edge-to-cloud platform is designed with zero-trust principles and scalable security as a cornerstone of its architecture. The platform leverages common security building blocks, from silicon to the cloud, to continuously protect infrastructure, workloads, and data while adapting to increasingly complex threats.

HPE GreenLake for Data Protection delivers a family of services that reduces cybersecurity risks across distributed multicloud environments, helping prevent ransomware attacks, ensure recovery from disruption, and protect data and virtual machine (VM) workloads across on-premises and hybrid cloud environments. As part of the HPE GreenLake for Data Protection portfolio, HPE offers access to next-generation as-a-service data protection cloud services, including a disaster recovery service based on Zerto and HPE Backup and Recovery Service. This offering enables customers to easily manage hybrid cloud backup through a SaaS console along with providing policy-based orchestration and automation functionality.

To help organizations transition from traditional cybersecurity to more robust and holistic cyberresilience practices, HPE’s cybersecurity consulting team offers a variety of advisory and professional services. Among them are access to workshops, road maps, and architectural design advisory services, all focused on promoting organizational resilience and delivering on zero-trust security practices.

HPE GreenLake for Data Protection also aids in the cyberresilience journey because it removes up-front costs and overprovisioning risks. “Because you’re paying for use, HPE GreenLake for Data Protection will scale with the business and you don’t have to worry [about whether] you have enough backup capacity to deal with an application that is growing at a rate that wasn’t forecasted,” Leech says.

For more information, click here.


High performance computing (HPC) is becoming mainstream for organizations, spurred on by their increasing use of artificial intelligence (AI) and data analytics. A 2021 study by Intersect360 Research found that 81% of organizations that use HPC reported they are running AI and machine learning or are planning to implement them soon. It’s happening globally and contributing to worldwide spending on HPC that is poised to exceed $59.65 billion in 2025, according to Grand View Research.

Simultaneously, the intersection of HPC, AI, and analytics workflows is putting pressure on systems administrators to support ever more complex environments. Admins are being asked to complete time-consuming manual configurations and reconfigurations of servers, storage, and networking as they move nodes between clusters to provide the resources required for different workload demands. The resulting cluster sprawl consumes inordinate amounts of information technology (IT) resources. 

The answer? For many organizations, it’s a greater reliance on open-source software.

Reaping the Benefits of Open-Source Software & Communities

Developers at some organizations have found that open-source software is an effective way to advance the HPC software stack beyond the limitations of any one vendor. Examples of open-source software used for HPC include Apache Ignite, Open MPI, OpenSFS, OpenFOAM, and OpenStack. Almost all major original equipment manufacturers (OEMs) participate in the OpenHPC community, along with key HPC independent software vendors (ISVs) and top HPC sites.

Organizations like Arizona State University Research Computing have turned to open-source software like Omnia, a set of tools for automating the deployment of open source or publicly available Slurm and Kubernetes workload management along with libraries, frameworks, operators, services, platforms and applications.

The Omnia software stack was created to simplify and speed the process of building and managing environments for mixed workloads, including simulation, high-throughput computing, machine learning, deep learning, and data analytics. It does this by abstracting away the manual steps that can slow provisioning and lead to configuration errors.

Members of the open-source software community contribute code and documentation updates, feature requests, and bug reports. They also provide open forums for conversations about feature ideas and potential implementation solutions. As the open-source project grows and expands, so does the technical governance committee, with representation from top contributors and stakeholders.

“We have ASU engineers on my team working directly with the Dell engineers on the Omnia team,” said Douglas Jennewein, senior director of Arizona State University (ASU) Research Computing. “We’re working on code and providing feedback and direction on what we should look at next. It’s been a very rewarding effort… We’re paving not just the path for ASU but the path for advanced computing.”

ASU teams also use Open OnDemand, an open-source HPC portal that lets users reach an HPC cluster either through a traditional Secure Shell Protocol (SSH) terminal or through the portal’s web-based interface. Once connected, they can upload and download files; create, edit, submit, and monitor jobs; run applications; and more via a web browser in a cloud-like experience with no client software to install and configure.
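Whether submitted through a portal like Open OnDemand or a terminal, HPC jobs ultimately land on a scheduler such as Slurm. As a rough illustration (the job name, partition, and command below are hypothetical, not from ASU’s environment), a batch script can be generated and handed to `sbatch` programmatically:

```python
import subprocess
import tempfile

def build_sbatch_script(job_name, partition, command, hours=1):
    """Render a minimal Slurm batch script for a single-node job."""
    return "\n".join([
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --time={hours}:00:00",
        "#SBATCH --nodes=1",
        command,
        "",
    ])

def submit(script_text):
    """Write the script to a temp file and hand it to sbatch.

    Only works on a host with the Slurm client tools installed."""
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script_text)
        path = f.name
    return subprocess.run(["sbatch", path], capture_output=True, text=True)
```

In practice the portal fills in these fields from a web form; the sketch just shows the shape of what gets submitted.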

Some Hot New Features of Open-Source Software for HPC  

Here is a sampling of some of the latest features in open-source software available to HPC application developers.

Dynamically change a user’s environment by adding or removing directories in the PATH environment variable. This makes it easier to run specific software in specific folders without updating the PATH environment variable and rebooting. It’s especially useful when third-party applications point to conflicting versions of the same libraries or objects.

Choice of host operating system (OS) provisioned on bare metal. The speed and accuracy of applications are inherently affected by the host OS installed on the compute node. This provides bare-metal options of different operating systems in the lab, so teams can choose the one working optimally at any given time and best suited for an HPC application.

Low-cost block storage that natively uses Network File System (NFS). This adds flexible scalability and is ideal for persistent, long-term storage.

Telemetry and visualization on Red Hat Enterprise Linux. Users of Red Hat Enterprise Linux can take advantage of telemetry and visualization features to view power consumption, temperatures, and other operational metrics.

BOSS RAID controller support. Redundant array of independent disks (RAID) arrays use multiple drives to split the I/O load, and are often preferred by HPC developers.
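The first of those capabilities, dynamically changing a user’s environment, can be sketched in a few lines. This is a minimal, generic illustration of per-process PATH manipulation, not code from any particular HPC tool:

```python
import os

def prepend_to_path(directory, env=None):
    """Prepend a directory to PATH for the current process only.

    Child processes inherit the change; no reboot or new login needed."""
    env = os.environ if env is None else env
    current = env.get("PATH", "")
    parts = current.split(os.pathsep) if current else []
    if directory in parts:          # avoid duplicate entries
        parts.remove(directory)
    env["PATH"] = os.pathsep.join([directory] + parts)

def remove_from_path(directory, env=None):
    """Drop a directory from PATH, e.g. to hide a conflicting library version."""
    env = os.environ if env is None else env
    parts = [p for p in env.get("PATH", "").split(os.pathsep)
             if p and p != directory]
    env["PATH"] = os.pathsep.join(parts)
```

Module systems used on HPC clusters (such as Lmod) apply the same idea at larger scale, swapping whole sets of environment variables per software version.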

The benefits of open-source software for HPC are significant. They include the ability to deploy faster, leverage fluid pools of resources, and integrate complete lifecycle management for unified data analytics, AI and HPC clusters.

For more information on, and to contribute to, the Omnia community, which includes Dell, Intel, university research environments, and many others, visit the Omnia GitHub repository.


Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.


How do attackers exploit applications? Simply put, they look for entry points not expected by the developer. By expecting as many potential entry points as possible, developers can build with security in mind and plan appropriate countermeasures.

This is called threat modeling. It’s an important activity in the design phase of applications, as it shapes the entire delivery pipeline. In this article, we’ll cover some basics of how to use threat modeling during development and beyond to protect cloud services.

Integrating threat modeling into the development processes

In any agile development methodology, when business teams start creating a user story, they should include security as a key requirement and appoint a security champion. Some planning factors to consider are the presence of private data, business-critical assets, confidential information, users, and critical functions. Integrating security tools in the continuous integration/continuous delivery (CI/CD) pipeline automates the security code review process that examines the application’s attack surface. This code review might include Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and infrastructure as code (IaC) scanning tools.

All these inputs should be shared with the security champion, who would then identify the potential security threats and their mitigations and add them to the user story. With this information, the developers can build in the right security controls.

This information also can help testers focus on the most critical threats. Finally, the monitoring team can build capabilities that keep a close watch on these threats. This has the added benefit of measuring the effectiveness of the security controls built by the developers.
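One common way to structure the security champion’s threat list is the STRIDE taxonomy. A minimal sketch of threats attached to a user story follows; the class names, fields, and example data are illustrative assumptions, not from any specific threat-modeling tool:

```python
from dataclasses import dataclass, field

# The six STRIDE categories commonly used in threat modeling.
STRIDE = (
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
)

@dataclass
class Threat:
    category: str       # one of STRIDE
    entry_point: str    # where the attacker gets in
    mitigation: str     # control the developers will build

    def __post_init__(self):
        if self.category not in STRIDE:
            raise ValueError(f"unknown STRIDE category: {self.category}")

@dataclass
class UserStory:
    title: str
    threats: list = field(default_factory=list)

    def add_threat(self, category, entry_point, mitigation):
        self.threats.append(Threat(category, entry_point, mitigation))

    def open_mitigations(self):
        """Controls the team still needs to implement and test."""
        return [t.mitigation for t in self.threats]
```

Keeping threats and mitigations on the story itself gives developers, testers, and the monitoring team one shared list to work from.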

Applying threat modeling in AWS

After the development phase, threat modeling is still an important activity. Let’s take an example of the initial access tactic from the MITRE ATT&CK framework, which addresses methods attackers use to gain access to a target network or systems. Customers may have internet-facing web applications or servers hosted in AWS cloud, which may be vulnerable to attacks like DDoS (Distributed Denial of Service), XSS (Cross-Site Scripting), or SQL injection. In addition, remote services like SSH (Secure Shell), RDP (Remote Desktop Protocol), SNMP (Simple Network Management Protocol), and SMB (Server Message Block) can be leveraged to gain unauthorized remote access.
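Exposure of those remote services can be checked mechanically. The sketch below assumes firewall or security-group rules are available as simple dictionaries; the rule format is an illustrative simplification, not the AWS API shape:

```python
# Ports for remote services often abused for initial access.
RISKY_PORTS = {
    22: "SSH",
    161: "SNMP",
    445: "SMB",
    3389: "RDP",
}

def flag_risky_rules(rules):
    """Flag rules that expose remote-admin services to the whole
    internet (source 0.0.0.0/0)."""
    findings = []
    for rule in rules:
        service = RISKY_PORTS.get(rule["port"])
        if service and rule["source"] == "0.0.0.0/0":
            findings.append(
                f"{service} (port {rule['port']}) open to the internet")
    return findings
```

A rule scoped to an internal CIDR range passes; the same port open to 0.0.0.0/0 is flagged for review.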

Considering the risks, security teams should review their security architecture to ensure sufficient logging of activities, which would help identify threats.

Security teams can use the security pillar of the AWS Well-Architected Framework, which will help identify any gaps in security best practices. Conducting such a self-assessment exercise measures the security posture of the application across areas such as identity and access management (to ensure there is no provision for unauthorized access), data security, networking, and infrastructure.

Although next-gen firewalls may provide some level of visibility into who is accessing the applications by source IP, application security can be enhanced by leveraging AWS WAF and Amazon CloudFront. These services limit exposure and prevent potential exploits from reaching the subsequent layers.

Network architecture should also be assessed to apply network segmentation principles. This will reduce the impact of a cyberattack in the event one of its external applications is compromised.

As a final layer of protection against initial access tactic methods, security teams should regularly audit AWS accounts to ensure no administrator privileges are granted to AWS resources and no administrator accounts are being used for day-to-day activities.
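Such an audit can start by scanning IAM policy documents for wildcard grants. A minimal sketch over policy JSON in the standard IAM document format (fetching the documents from AWS is left out; the input here is assumed to be already parsed):

```python
def has_admin_access(policy_document):
    """Return True if an IAM policy document grants full admin access,
    i.e. an Allow statement over all actions and all resources."""
    for stmt in policy_document.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        # IAM allows a single string or a list in both fields.
        if isinstance(actions, str):
            actions = [actions]
        if isinstance(resources, str):
            resources = [resources]
        if "*" in actions and "*" in resources:
            return True
    return False
```

Run periodically across accounts, a check like this surfaces roles and users that quietly accumulated administrator-level permissions.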

When used throughout the process, threat modeling reduces the number of threats and vulnerabilities that the business needs to address. This way, the security team can focus on the risks that are most likely, and thus be more effective – while allowing the business to focus on truly unlocking the potential of AWS.

Author Bio




Raji Krishnamoorthy leads the AWS Security and Compliance practice at Tata Consultancy Services. Raji helps enterprises create cloud security transformation roadmaps, build solutions to uplift their security posture, and design policies and compliance controls that minimize business risks. Raji, along with her team, enables organizations to strengthen security around identity and access management, data, applications, infrastructure, and network. With more than 19 years of experience in the IT industry, Raji has held a variety of roles at TCS, including CoE lead for Public Cloud platforms and Enterprise Collaboration Platforms.

To learn more, visit us here.


As organizations brace for challenging economic conditions, they will need to be strategic and flexible on where they spend their resources to maintain business resilience. Proactive intelligence and automation tools will be essential as organizations enter “survival mode,” focusing on sustaining growth and efficiency. More importantly, organizations should ensure that even with a limited workforce and tightened budgets, the value and services they deliver to customers aren’t impacted.

However, monitoring and maintaining the myriad of infrastructure and application platforms that support business services is difficult when only using traditional methods. Investing in a solution that automatically and securely collects, aggregates, and analyzes data can enable teams with proactive intelligence to help organizations achieve quick time to value and be more productive.

With proactive intelligence, businesses can get ahead of potential issues and reduce both downtime and time to resolution so teams can focus on key priorities that maintain critical operations. This has a critical impact on businesses: according to ITIC’s 2021 Hourly Cost of Downtime Survey, a single hour of IT downtime often costs mid-size and enterprise companies from one million to over five million dollars. In addition, strategic investments in automation can help teams proactively identify and prevent problems while increasing security, reliability, and productivity. Rather than spending time firefighting, teams can focus on tasks that bring value to the business.

Automated Issue Avoidance

Between the move to the cloud, remote work, and the accelerated adoption of new technologies, IT complexity continues to grow while workforce attention is already spread thin. Solutions that enable proactive intelligence services can help reduce pressure on IT teams by identifying the problematic issues that cause downtime more quickly, using AI/ML together with automated collection and analysis of product usage data. These capabilities provide a more effective mechanism for identifying potential problems, guiding remediation, and ultimately avoiding challenging service requests.

A large part of the support process today is dedicated to identifying the problem and determining its underlying cause. Without proactive support tools, companies are leaving value on the table. Expecting the unexpected in your IT environment means your business is fixing the underlying problems, not just their symptoms, and avoiding issues before they occur.

Automate Common Workflows with APIs

APIs (Application Programming Interfaces) can be a powerful tool in automating common support workflows. APIs are a highly technical yet important aspect of a business’s underlying IT infrastructure – they are integral to bridging systems and enabling seamless transfer of information and connectivity. APIs enable different systems, applications, and platforms to connect and share data with one another and perform varied types of functions. Without APIs, enterprise tools and their benefits could become siloed – resulting in a reduced bottom line.
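As a rough sketch of that kind of workflow automation, the snippet below bundles diagnostic context into a payload and posts it to a support API; the endpoint, field names, and severity rule are all hypothetical, not from any specific vendor’s API:

```python
import json
import urllib.request

def build_support_payload(host, service, metrics):
    """Bundle diagnostic context so a support request arrives
    pre-populated instead of requiring the administrator to
    re-describe the environment for every ticket."""
    return {
        "host": host,
        "service": service,
        "metrics": metrics,
        # Illustrative triage rule: elevated error rate -> high severity.
        "severity": "high" if metrics.get("error_rate", 0) > 0.05 else "normal",
    }

def submit_ticket(endpoint, payload, token):
    """POST the payload to a (hypothetical) support API endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Wiring a monitoring alert to a function like this is what turns a manual support request into an automated workflow.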

As organizations scale their environments, APIs are key to improving the developer experience as they facilitate collaboration and reusability. A better developer experience means better DevSecOps productivity which translates into immediate business value. Creating a software development culture that optimizes the developer experience allows teams to invest less time in internal processes and more time in creating something valuable for their customers. By automating common tasks and eliminating manual intervention, APIs can help organizations foster better developer productivity while significantly reducing costs.

Improved Productivity

The process of identifying a problem, determining its root cause, and troubleshooting can be time-consuming, and requiring the customer administrator to communicate and contextualize information for every support request logged adds to that time. Proactive intelligence capabilities can arm customers with holistic visibility into their environment, fostering a faster, smarter, and easier way to maintain a healthy and productive environment. Intelligence tools like VMware Skyline can empower teams with the insights to solve issues on their own, and enable organizations to move from a reactive, firefighting mode to a proactive, predictive, and prescriptive posture.

When enterprises have tools that empower proactive responses and automate issue resolution, teams can increase productivity and dedicate more time to other business priorities. In addition, by improving overall security posture and environmental health, businesses can realize performance improvements to translate to greater operational efficiencies.

Succeeding in today’s business environment requires innovative approaches that lead to greater business operational agility. Break/fix support is no longer enough to monitor and support the extensive infrastructures enterprises run today, which can span on-premises, remote sites, and the cloud.

Proactive intelligence and machine learning tools allow organizations to embrace an automated approach to troubleshooting: pinpointing root-cause analysis, guiding remediation, and, when needed, delivering an improved support experience that translates to more productivity for teams and better visibility into systems.

To learn more, visit us here.


Increasing margins is critical to achieving sustained success in the retail industry. To maximize margins, leaders consider how to run the store more efficiently, how to deliver the best services to customers, and how to grow new services. Traditionally, they have used rear-view-mirror data to help accomplish these goals; that is, examining historical data from months prior and coming up with a plan.

Today, retailers are relying more on proactive and contextual data in real-time. For instance, what are the online shopper’s preferences? Do they tend to buy button-down shirts and khakis or jeans and t-shirts? How does a brick-and-mortar store’s layout affect purchasing decisions? Context involves gathering data about human behavior throughout the customer journey to figure out why they buy what they buy. But how do you capture human emotions and activities in the moment, and then turn that data into useful information? How do you account for changes in behavior and preferences over time?

Obtaining the right insights, consistently, at a micro level about the consumer is key to delivering a more meaningful and personalized customer experience. Combining consumer shopping preferences with historical data can give you a contextually rich, action-reaction paradigm. To accomplish this, retailers are turning to computer vision complemented by artificial intelligence.

Watch the video: Reimagine the Future of Retail

Computer vision provides video and audio for additional context, complementing other types of data. Together, these data points become part of an analytics workflow delivering a tangible outcome. Using a federated approach, data can be analyzed where it is collected, producing insights used to make decisions in real time. 

This federated approach to analytics enables forward-thinking retailers to incorporate new approaches to using and orchestrating their data, using computer vision systems that grow as they grow. New use cases are more achievable, and IT can leverage these technologies to scale and drive further processes that enhance their momentum towards achieving the digitally-driven store of the future.

Let’s look at how computer vision is impacting the customer experience, store security and operations, revenue growth and sustainability today and what that means going forward. 

Continuing to address a top priority for the retail industry by improving safety and security 

Most retail establishments started their computer vision journey years ago when they brought in video camera systems for security purposes, providing them with a foundation to build on. Now it’s paying off.

When tied to a computer vision system, the visual data, historical data and AI can offer real-time situational awareness. Analysis occurs mainly on-site, at the edge. It’s quick and accurate, reducing staff response time. For example, a maintenance crew member can react almost immediately to spilled substances that could cause an accident. Anomaly detection can enhance a store’s loss prevention processes such as alerting security personnel to people who are concealing stolen items, and a real-time video analytics platform can even help with finding missing children.

Tackling current and future operational efficiency challenges 

The conventional store, where you build a structure and stock it with products and displays, is being transformed by customers’ buying patterns. The Intelligent Store (see Figure 1) builds processes around employees (scheduling and reduction of effort), inventory, and customers that can be constantly monitored and improved in real time. With the intelligent store, retailers can transform, adapt, and respond to their customers’ needs and behavior with context and personalization.

With accurate data, managers today can utilize hyper-personalization to drive more sales, demand forecasting to maintain inventories and optimized route planning to cut costs. For this, you need real-time insights using sensors and cameras, and a strategy that aligns operations with the customer experience, autonomous retail and a host of integrated technologies to make it all happen.


Figure 1. The Intelligent Store extends across all facets of the retail industry to deliver benefits including real-time operational improvements, hyper-personalization and automation, scalability and security.

One goal of an Intelligent Store is to empower customers by reducing friction in the buying experience. That means touchless checkout, where items are “rung up” automatically as customers leave the store. For staffed checkouts, computer vision can monitor customer lines and move staff where needed in real time. Video-based inventory tracking ensures items are always in stock and enables traceability, as well as optimized picking for fulfilling ecommerce grocery orders. And curbside delivery is improved by combining visual data such as number plate and/or vehicle recognition, and sensor data so staff begin preparing to deliver groceries as soon as a customer drives into the lot.

The digital twin is another technology that boosts operational efficiency. Using software models, a retailer can run simulations of a real-world environment before committing to expensive changes. Imagine a designer creating a store planogram or distribution center in 3D, and using AI to determine the freshness of perishable items (to reduce spoilage), to optimize customer flow and merchandising, and for predictive analysis. A digital twin can be rendered on-site without the need to exchange huge amounts of data with a data center as the processing occurs at the edge.

Watch the video: Edge and computer vision are enabling better Retail

Enhancing the customer experience while increasing revenues

Happy customers inevitably buy more, so it is up to retailers to provide the right product at the right value. And investing in the customer experience naturally helps maximize revenues.

Consider virtual try-on, which combines computer vision, AI and augmented reality to allow shoppers to try on glasses, clothing and other items using their mobile device’s camera, or an in-store digital kiosk or mirror. “See it in your room” for furniture and electronics is similar. Virtual try-on is both immersive and a time-saver for customers, potentially resulting in higher per-session sales. 

Computer vision systems linked to inventory management systems are also a boon for the customer experience and optimization of revenue. Where cameras are used to scan existing inventory and update records, stock-level checks are more accurate, helping to ensure the customer’s item isn’t backordered. Automatic updates to inventory after sales are completed save on back-of-house time. From a merchandising perspective, computer vision can identify which areas of a store get the most foot traffic and target hot spots where product should be placed.
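The reconciliation between camera-derived shelf counts and the inventory system can be sketched simply. The per-SKU dictionary format and tolerance threshold below are illustrative assumptions, not a description of any specific product:

```python
def reconcile_inventory(camera_counts, system_counts, tolerance=2):
    """Compare shelf counts from computer vision with the inventory
    system and flag SKUs whose discrepancy exceeds a tolerance
    (small differences are expected from detection noise)."""
    discrepancies = {}
    for sku, seen in camera_counts.items():
        recorded = system_counts.get(sku, 0)
        if abs(seen - recorded) > tolerance:
            discrepancies[sku] = {"seen": seen, "recorded": recorded}
    return discrepancies
```

Flagged SKUs can then trigger a recount, a restock, or a loss-prevention review, depending on the direction of the mismatch.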

On the flip side is how to avoid losing revenue. Shrinkage in the global retail sector accounts for a staggering $100 billion USD* in annual losses, creating demand for technology and/or processes to prevent theft and fraud, and to better secure transactions. Many grocery stores now use cameras mounted at checkout stations to watch for sweethearting, prevent or detect item swapping, and identify inaccurate scanning and payments.

Read the IDC whitepaper: “Future Loss Prevention: Advancing Fraud Detection Capabilities at Self-Checkout and Throughout the Retail Store”

Becoming environmental stewards and following sustainability practices

Many corporations today support initiatives to conserve resources and reduce waste. Computer vision is helping stores, malls, distribution centers and the like accomplish their sustainability goals.

The retail industry has several avenues to sustainability. Two of the most constructive are reducing energy consumption and using modern inventory management techniques.

Most of us are familiar with refrigerated cases with motion sensors that turn the lights on when a door is opened. Entire facilities can use the same principles, like smart HVAC, overhead and outdoor lighting to minimize power consumption.

Reducing food waste is another way to save money while having a positive impact on the community and environment. According to RTS, about 30% of the food in U.S. grocery stores is thrown away every year. Optimized cold chain management reduces spoilage as well as the energy needed to maintain perishables from the loading dock to the freezer case or produce bin. Proactive restocking, based on historical data and AI, further ensures that items are available when needed and in sellable quantities for a particular store.
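Proactive restocking of that kind can be as simple as a moving-average forecast over recent sales. A minimal sketch, in which the one-week window and the safety factor are illustrative assumptions:

```python
def restock_quantity(daily_sales, shelf_stock, lead_time_days,
                     safety_factor=1.2):
    """Estimate how many units to reorder so an item stays in stock
    through the supplier lead time without overstocking perishables.

    daily_sales: recent per-day unit sales, most recent last."""
    if not daily_sales:
        return 0
    window = daily_sales[-7:]                 # one-week moving average
    avg_daily = sum(window) / len(window)
    needed = avg_daily * lead_time_days * safety_factor
    return max(0, round(needed - shelf_stock))
```

A store-level system would replace the moving average with a per-store demand model, but the structure (forecast demand over lead time, subtract what is on the shelf) stays the same.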

Although the pandemic boosted online and curbside pickup sales, the resulting supply chain issues have left customers somewhat disillusioned and wondering which important item will become hard to get. Customers will accept some inconvenience due to a worldwide event, however, retailers need to be prepared for the near-term future shopper who has high expectations and whose loyalty may be harder to keep. That can be done through a data-driven approach using computer vision and AI.

Retail organizations can build on the safety and security infrastructure already deployed in their stores and at a pace that’s right for their business. Digital transformation is an on-going process and many retailers are already engaged with Dell Technologies in developing the right framework to guide them through their journey, while enabling their business to remain agile and innovate.

For an overview of computer vision and its impact on retail, read the Solution Brief, “Protecting retail’s assets and unlocking the potential of your data with AI-driven Computer Vision.”

Learn more about how computer vision is positively impacting other industries: 

The Future Is Computer Vision – Real-Time Situational Awareness, Better Quality and Faster Insights

Computer Vision Is Transforming the Transportation Industry, Making It Safer, More Efficient and Improving the Bottom Line

How Computer Vision is revolutionizing the Manufacturing Supply Chain

How the Sports and Entertainment Industry Is Reinventing the Fan Experience and Enhancing Revenues with Computer Vision

* Sensormatic Global Shrink Index
