When the world’s largest healthcare company by revenue went looking for a technology solution that could improve quality of care while reducing costs, the search took ten years. What they found—an innovative way to model healthcare data—is saving the company an estimated $150M annually and enabling its medical professionals to provide accurate and effective care path recommendations in real time. It’s a remedy with important implications for the future of healthcare. 

This same solution, graph databases and graph analytics, proved crucial at the height of the Covid-19 pandemic. A testament to its potential, the market for graph technology is projected to reach $11.25B by 2030.[1]

Graph technology isn’t new. It’s what social networking applications use to store and process vast amounts of “connected” data. It turns out graphs can do much more than connect people to their high school friends. They are also well suited to storing and visualizing large healthcare data models so the data can be quickly processed and analyzed. Graphs can surface previously unavailable connections in disparate data spread across many different platforms – for example, connections between data collected from a patient’s various doctors and pharmacies. 

Why Graph Analytics is Important for Healthcare

Hospitals deal with stockpiles of data. Every touchpoint is stored in a hospital’s electronic health record including visits, prescriptions, operations, and immunizations. Too much data can be a challenge, making it difficult to access and analyze information when and where it’s needed.

Hence the business case for graph databases. Data that’s represented in the form of a graph rather than a table enables quick analysis and faster time to insights. For healthcare professionals, sophisticated graph algorithms can return specific results, and graph visualization tools allow analysts to make useful connections and identify patterns that help solve problems.

Graph analytics is an ideal technology for tackling the challenges caused by large, disparate datasets, since it becomes more impactful as the volume, velocity, and variety of data expand.[2] Storing and accessing this data alone is not enough. As a tool set, graph analytics prioritizes the relationships between the data – an arena where relational databases fall short.

Data scientists and leaders in the healthcare industry can use the most advanced graph analytics, known as native parallel graphs, to link datasets across multiple domains. This would allow the system to find frequent patterns and suggest the next best action. Ultimately, medical professionals would be able to rely on the most accurate data to provide patients with beneficial, real-time recommendations. 
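To make the connected-data idea concrete, here is a minimal sketch in plain Python. The patient, doctor, and pharmacy records are hypothetical, and a production system would use a graph database rather than in-memory dictionaries, but the traversal pattern is the same: follow relationships hop by hop across data that originated in separate systems.

```python
# Minimal sketch: modeling patient touchpoints from separate systems
# as one graph, then traversing relationships that row-per-table
# storage would obscure. All records here are hypothetical.

from collections import defaultdict

edges = defaultdict(list)  # adjacency list: node -> [(relationship, node)]

def link(src, rel, dst):
    edges[src].append((rel, dst))

# Records that would normally live in three different systems:
link("patient:ann", "VISITED", "doctor:lee")
link("doctor:lee", "PRESCRIBED", "drug:metformin")
link("patient:ann", "FILLED_AT", "pharmacy:main-st")
link("pharmacy:main-st", "DISPENSED", "drug:metformin")

def neighbors(node, rel):
    return [dst for (r, dst) in edges[node] if r == rel]

# A cross-system question answered by traversal: which prescriptions
# written by Ann's doctors were actually dispensed to her?
prescribed = {d for doc in neighbors("patient:ann", "VISITED")
                for d in neighbors(doc, "PRESCRIBED")}
dispensed = {d for ph in neighbors("patient:ann", "FILLED_AT")
               for d in neighbors(ph, "DISPENSED")}
print(prescribed & dispensed)  # {'drug:metformin'}
```

The point of the sketch is that the relationship, not the record, is the first-class object: each new data source just adds edges, and queries that would require multi-table joins become short traversals.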

“In the past, when somebody called into our call center, we would have had to log into 15 different systems to get a view of this member’s activity. Now users log into just one screen and have a beautiful timeline view of every touchpoint we’ve had with members,” said a distinguished engineer from a major healthcare company that recently deployed graph technology.

The Impact of Graph Technology on Covid-19

A graph-based approach to community tracing and risk detection was essential in 2020 as government officials and healthcare professionals worked overtime to understand and prevent the spread of Covid-19. For government agencies, graph technology led to agile and evidence-based emergency management and improved public health emergency response. 

Because graph analytics can sift through thousands of data sources and find relationships, even with complex and varying inputs, it was an excellent way to answer complicated questions related to the spread of disease. These capabilities helped with contact tracing used to identify, locate, and notify people who had been exposed to the virus. 

The technology also recognized relationships between data points—for example, common symptoms of people more likely to have a serious case of Covid based on pre-existing conditions. Armed with this insight, healthcare providers could warn patients when they were at higher risk. 

Future Implications for Healthcare and Beyond

As the healthcare industry moves beyond the pandemic, it emerges more prepared to respond to a wide variety of situations – from widespread health crises to everyday patient care. Healthcare companies already applying graph databases and graph analytics are experiencing the benefits. The technology supports their work to help members embrace healthier lifestyles, avoid costly pharmaceuticals, recover faster from medical procedures, and more. Essentially, healthcare companies using graph technology are better equipped to provide quality care while controlling costs.

For data-centric companies looking to implement these solutions, a graph database running on Dell PowerEdge servers is the optimal offering in terms of performance, efficiency, and scale. To learn more about the business benefits of connected data, read this brief and visit Dell.com/Analytics to learn about solutions for analytics.

[1] https://www.prnewswire.com/news-releases/graph-database-market-size-to-reach-usd-11-25-billion-in-2030–increasing-demand-for-flexible-online-schema-environments-is-a-key-factor-driving-industry-demand-says-emergen-research-301478726.html

[2] https://www.youtube.com/watch?v=A3Ppx01Bon4

IT Leadership

Digitalization is a double-edged sword for banks, especially when it comes to security. A massive shift to cloud and API-based ways of working has made the sector become more agile and innovative, but it has also opened the floodgates for identity theft. As interactions and transactions become more interconnected, even the simplest processes like opening a new account or making a balance transfer become riddled with security concerns.

As financial services become more digital in nature, it’s important that banks think differently when using data analytics, security tools, and education to improve identity authentication and customer data privacy. Avaya’s research report reveals three critical ways to do so.

1. Make the Most of the Powerful Tool in Your Customers’ Hands

Almost every customer owns a smartphone, and they use that device to call into the contact center when they need to resolve an issue or complicated matter. Have you thought about what can be done with this device to enhance identity authentication? Older security methods like Knowledge-based Authentication (KBA) only prove what a person knows. By leveraging the sensors in a customer’s connected device, banks can go one step further to prove who someone is — and that makes all the difference.

These sensors, which include location services, cameras, and QR code scanning, make a customer’s smart device a valuable source of a vast amount of information and inputs that help banks create a trusted identity template for customers. Once this identity template is established, all transactions are tied directly to a customer’s verified identity. This allows simple but risky transactions like requesting a new debit card, ordering checks, or updating an address to be done simply, quickly, and with far lower risk to the bank and its customers.

2. Shield Sensitive Data from Agents Using Zero Knowledge Proof

When a customer calls into the contact center, all of that person’s information is made visible to the agent who needs to verify them: their address, their driver’s license number, their social security number, etc. What’s stopping an agent from using their cellphone to take a picture of a customer’s personally identifiable information? It’s a scary thought, especially with so many customer service jobs now offsite out of supervisors’ views. Customer service workers don’t need so much visibility into this data.

Zero Knowledge Proof is an advanced cryptographic technique that makes it possible for organizations to verify sensitive or personally identifiable information without revealing that data to workers. The agent doesn’t need to see the data to verify its accuracy or authenticity and will therefore have no knowledge of it — hence, “zero knowledge proof.” All employees will see are the results that matter to them (whether a payment went through, whether a document was signed, that a customer’s SSN checks out) with a green checkmark verifying its approval from whichever third-party company verified it.
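As a rough illustration of the idea, consider a salted hash commitment, which is far simpler than a real zero-knowledge proof protocol but captures the outcome described above: the agent-facing system returns only a pass/fail result while the underlying data stays hidden. All names and values below are hypothetical.

```python
# Toy sketch (NOT a real zero-knowledge proof): the verifying service
# stores only a salted hash of the SSN and answers yes/no, so the
# agent never sees the data itself. Values are hypothetical.

import hashlib

def commit(value: str, salt: str) -> str:
    """Salted SHA-256 commitment to a sensitive value."""
    return hashlib.sha256((salt + value).encode()).hexdigest()

class IdentityVerifier:
    """Holds commitments; reveals only verification results."""
    def __init__(self):
        self._commitments = {}  # customer_id -> (salt, digest)

    def enroll(self, customer_id: str, ssn: str, salt: str) -> None:
        self._commitments[customer_id] = (salt, commit(ssn, salt))

    def verify(self, customer_id: str, claimed_ssn: str) -> bool:
        salt, digest = self._commitments[customer_id]
        return commit(claimed_ssn, salt) == digest

verifier = IdentityVerifier()
verifier.enroll("cust-42", "123-45-6789", salt="s3cret")
print(verifier.verify("cust-42", "123-45-6789"))  # True -> green checkmark
print(verifier.verify("cust-42", "000-00-0000"))  # False
```

In a genuine zero-knowledge proof the customer could prove knowledge of the SSN without even transmitting it to the verifier; the sketch only shows the agent-facing contract, a boolean result instead of the data.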

3. Outbound Notifications for Fraud Protection

In a sea of scam callers, most customers immediately send unknown numbers to voicemail. This is a major challenge for banks trying to reach customers to perform a number of legitimate tasks and build relationships. By securely sending notifications across the channel of a customer’s choice (SMS, in-app message if the company offers a mobile app), banks can reach customers faster and with high veracity authentication. In this way, customers will receive a notification via text or in-app message before an incoming call asking them to “tap” and log in. They will be instantly authenticated and, if desired, can schedule the call for a convenient time.

These notifications can also be used to simplify routine interactions like checking an account balance or paying a bill. For example, a customer can click on the link in a text message their bank sends them reminding them that a payment is due for their credit card. Notifications can be sent for non-payment interactions as well, such as post-contact surveys and new-customer eForms. All of this can be done with full PCI compliance. In fact, banks can take their contact center out of the scope of compliance altogether.

Learn more from Avaya’s research about what banks should consider to digitally evolve. View the full report, Five Recent Trends Shaping the Banking Industry.


Pandemic-era ransomware attacks have highlighted the need for robust cybersecurity safeguards. Now, leading organizations are going further, embracing a cyberresilience paradigm designed to bring agility to incident response while ensuring sustainable business operations, whatever the event or impact.

Cyberresilience, as defined by the Ponemon Institute, is an enterprise’s capacity for maintaining its core business in the face of cyberattacks. NIST defines cyberresilience as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that use or are enabled by cyber resources.”

The practice brings together formerly separate disciplines of information security, business continuity, and disaster response (BC/DR) deployed to meet common goals. Although traditional cybersecurity practices were designed to keep cybercriminals out and BC/DR focused on recoverability, cyberresilience aligns the strategies, tactics, and planning of these traditionally siloed disciplines. The goal: a more holistic approach than what’s possible by addressing each individually.

At the same time, improving cyberresilience challenges organizations to think differently about their approach to cybersecurity. Instead of focusing efforts solely on protection, enterprises must assume that cyberevents will occur. Adopting practices and frameworks designed to sustain IT capabilities as well as system-wide business operations is essential.

“The traditional approach to cybersecurity was about having a good lock on the front door and locks on all the windows, with the idea that if my security controls were strong enough, it would keep hackers out,” says Simon Leech, HPE’s deputy director, Global Security Center of Excellence. Pandemic-era changes, including the shift to remote work and accelerated use of cloud, coupled with new and evolving threat vectors, mean that traditional approaches are no longer sufficient.

“Cyberresilience is about being able to anticipate an unforeseen event, withstand that event, recover, and adapt to what we’ve learned,” Leech says. “What cyberresilience really focuses us on is protecting critical services so we can deal with business risks in the most effective way. It’s about making sure there are regular test exercises that ensure that the data backup is going to be useful if worse comes to worst.”

A Cyberresilience Road Map

With a risk-based approach to cyberresilience, organizations evolve practices and design security to be business-aware. The first step is to perform a holistic risk assessment across the IT estate to understand where risk exists and to identify and prioritize the most critical systems based on business intelligence. “The only way to ensure 100% security is to give business users the confidence they can perform business securely and allow them to take risks, but do so in a secure manner,” Leech explains.

Adopting a cybersecurity architecture that embraces modern constructs such as zero trust and that incorporates agile concepts such as continuous improvement is another requisite. It is also necessary to formulate and institute time-tested incident response plans that detail the roles and responsibilities of all stakeholders, so they are adequately prepared to respond to a cyberincident.

Leech outlines several other recommended actions:

Be a partner to the business. IT needs to fully understand business requirements and work in conjunction with key business stakeholders, not serve primarily as a cybersecurity enforcer. “Enable the business to take risk; don’t prevent them from being efficient,” he advises.

Remember that preparation is everything. Cyberresilience teams need to evaluate existing architecture documentation and assess the environment, either by scanning the environment for vulnerabilities, performing penetration tests, or running tabletop exercises. This checks that systems have the appropriate levels of protections to remain operational in the event of a cyberincident. As part of this exercise, organizations need to prepare adequate response plans and enforce the requisite best practices to bring the business back online.

Shore up a data protection strategy. Different applications have different recovery-time-objective (RTO) and recovery-point-objective (RPO) requirements, both of which will impact backup and cyberresilience strategies. “It’s not a one-size-fits-all approach,” Leech says. “Organizations can’t just think about backup but [also about] how to do recovery as well. It’s about making sure you have the right strategy for the right application.”
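The per-application idea in the last point can be sketched as a simple tiering exercise. The tier names, thresholds, and applications below are illustrative assumptions, not an HPE product API; the point is that recovery objectives, not a single blanket policy, should drive the protection strategy.

```python
# Hedged sketch: mapping each application's recovery objectives
# (RTO/RPO, in minutes) to a backup/replication strategy.
# Tier names and thresholds are illustrative, not a vendor standard.

def protection_tier(rto_minutes: int, rpo_minutes: int) -> str:
    """Pick a protection strategy from recovery objectives."""
    if rto_minutes <= 15 and rpo_minutes <= 5:
        return "continuous-replication"   # near-zero data loss, fast failover
    if rto_minutes <= 240 and rpo_minutes <= 60:
        return "hourly-snapshots"
    return "daily-backup"

# Hypothetical application portfolio: name -> (RTO, RPO)
apps = {"payments": (10, 1), "crm": (120, 30), "archive": (1440, 1440)}
for name, (rto, rpo) in apps.items():
    print(name, "->", protection_tier(rto, rpo))
```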

The HPE GreenLake Advantage

The HPE GreenLake edge-to-cloud platform is designed with zero-trust principles and scalable security as a cornerstone of its architecture. The platform leverages common security building blocks, from silicon to the cloud, to continuously protect infrastructure, workloads, and data while adapting to increasingly complex threats.

HPE GreenLake for Data Protection delivers a family of services that reduces cybersecurity risks across distributed multicloud environments, helping prevent ransomware attacks, ensure recovery from disruption, and protect data and virtual machine (VM) workloads across on-premises and hybrid cloud environments. As part of the HPE GreenLake for Data Protection portfolio, HPE offers access to next-generation as-a-service data protection cloud services, including a disaster recovery service based on Zerto and HPE Backup and Recovery Service. This offering enables customers to easily manage hybrid cloud backup through a SaaS console along with providing policy-based orchestration and automation functionality.

To help organizations transition from traditional cybersecurity to more robust and holistic cyberresilience practices, HPE’s cybersecurity consulting team offers a variety of advisory and professional services. Among them are access to workshops, road maps, and architectural design advisory services, all focused on promoting organizational resilience and delivering on zero-trust security practices.

HPE GreenLake for Data Protection also aids in the cyberresilience journey because it removes up-front costs and overprovisioning risks. “Because you’re paying for use, HPE GreenLake for Data Protection will scale with the business and you don’t have to worry [about whether] you have enough backup capacity to deal with an application that is growing at a rate that wasn’t forecasted,” Leech says.

For more information, click here.

Cloud Security

High performance computing (HPC) is becoming mainstream for organizations, spurred on by their increasing use of artificial intelligence (AI) and data analytics. A 2021 study by Intersect360 Research found that 81% of organizations that use HPC reported they are running AI and machine learning or are planning to implement them soon. It’s happening globally and contributing to worldwide spending on HPC that is poised to exceed $59.65 billion in 2025, according to Grand View Research.

Simultaneously, the intersection of HPC, AI, and analytics workflows is putting pressure on systems administrators to support ever more complex environments. Admins are being asked to complete time-consuming manual configurations and reconfigurations of servers, storage, and networking as they move nodes between clusters to provide the resources required for different workload demands. The resulting cluster sprawl consumes inordinate amounts of information technology (IT) resources. 

The answer? For many organizations, it’s a greater reliance on open-source software.

Reaping the Benefits of Open-Source Software & Communities

Developers at some organizations have found that open-source software is an effective way to advance the HPC software stack beyond the limitations of any one vendor. Examples of open-source software used for HPC include Apache Ignite, Open MPI, OpenSFS, OpenFOAM, and OpenStack. Almost all major original equipment manufacturers (OEMs) participate in the OpenHPC community, along with key HPC independent software vendors (ISVs) and top HPC sites.

Organizations like Arizona State University Research Computing have turned to open-source software like Omnia, a set of tools for automating the deployment of open source or publicly available Slurm and Kubernetes workload management along with libraries, frameworks, operators, services, platforms and applications.

The Omnia software stack was created to simplify and speed the process of building and managing environments for mixed workloads, including simulation, high-throughput computing, machine learning, deep learning, and data analytics. It does this by abstracting away the manual steps that can slow provisioning and lead to configuration errors.

Members of the open-source software community contribute code and documentation updates to feature requests and bug reports. They also provide open forums for conversations about feature ideas and potential implementation solutions. As the open-source project grows and expands, so does the technical governance committee, with representation from top contributors and stakeholders.

“We have ASU engineers on my team working directly with the Dell engineers on the Omnia team,” said Douglas Jennewein, senior director of Arizona State University (ASU) Research Computing. “We’re working on code and providing feedback and direction on what we should look at next. It’s been a very rewarding effort… We’re paving not just the path for ASU but the path for advanced computing.”

ASU teams also use Open OnDemand, an open-source HPC portal that allows users to log in to an HPC cluster via a traditional Secure Shell (SSH) terminal or via a web-based interface. Once connected, they can upload and download files; create, edit, submit, and monitor jobs; run applications; and more via a web browser in a cloud-like experience, with no client software to install and configure.

Some Hot New Features of Open-Source Software for HPC  

Here is a sampling of some of the latest features in open-source software available to HPC application developers.

Dynamically change a user’s environment by adding or removing directories in the PATH environment variable. This makes it easier to run specific software in specific folders without permanently updating the PATH environment variable and rebooting. It’s especially useful when third-party applications point to conflicting versions of the same libraries or objects.

Choice of host operating system (OS) provisioned on bare metal. The speed and accuracy of applications are inherently affected by the host OS installed on the compute node. This provides bare-metal options of different operating systems in the lab, so teams can choose the one that works optimally at any given time and is best suited for an HPC application.

Provide low-cost block storage that natively uses Network File System (NFS). This adds flexible scalability and is ideal for persistent, long-term storage.

Use telemetry and visualization on Red Hat Enterprise Linux. Users of Red Hat Enterprise Linux can take advantage of telemetry and visualization features to view power consumption, temperatures, and other operational metrics.

BOSS RAID controller support. Redundant array of independent disks (RAID) arrays use multiple drives to split the I/O load and are often preferred by HPC developers.
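The first feature above, dynamically adjusting PATH, can be illustrated with a small Python sketch. The directory path is hypothetical; in practice, tools such as environment modules or Lmod perform this kind of manipulation for the current session only.

```python
# Illustrative sketch: prepend a tool's directory to PATH for the
# current process only, so its binaries win lookup without a reboot
# and without colliding system-wide. The path below is hypothetical.

import os

def prepend_path(directory: str) -> None:
    """Put `directory` at the front of PATH, removing any duplicate."""
    current = os.environ.get("PATH", "")
    parts = [p for p in current.split(os.pathsep) if p and p != directory]
    os.environ["PATH"] = os.pathsep.join([directory] + parts)

prepend_path("/opt/hpc/openmpi-4.1/bin")
print(os.environ["PATH"].split(os.pathsep)[0])  # /opt/hpc/openmpi-4.1/bin
```

Because the change lives only in the process environment, two jobs on the same node can each see a different version of the same library without touching system configuration.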

The benefits of open-source software for HPC are significant. They include the ability to deploy faster, leverage fluid pools of resources, and integrate complete lifecycle management for unified data analytics, AI and HPC clusters.

For more information on and to contribute to the Omnia community, which includes Dell, Intel, university research environments, and many others, visit the Omnia github.

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.


How do attackers exploit applications? Simply put, they look for entry points not expected by the developer. By expecting as many potential entry points as possible, developers can build with security in mind and plan appropriate countermeasures.

This is called threat modeling. It’s an important activity in the design phase of applications, as it shapes the entire delivery pipeline. In this article, we’ll cover some basics of how to use threat modeling during development and beyond to protect cloud services.

Integrating threat modeling into the development processes

In any agile development methodology, when business teams start creating a user story, they should include security as a key requirement and appoint a security champion. Some planning factors to consider are the presence of private data, business-critical assets, confidential information, users, and critical functions. Integrating security tools in the continuous integration/continuous delivery (CI/CD) pipeline automates the security code review process that examines the application’s attack surface. This code review might include Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Infrastructure as Code (IaC) scanning tools.

All these inputs should be shared with the security champion, who would then identify the potential security threats and their mitigations and add them to the user story. With this information, the developers can build in the right security controls.

This information also can help testers focus on the most critical threats. Finally, the monitoring team can build capabilities that keep a close watch on these threats. This has the added benefit of measuring the effectiveness of the security controls built by the developers.

Applying threat modeling in AWS

After the development phase, threat modeling is still an important activity. Let’s take an example of the initial access tactic from the MITRE ATT&CK framework, which addresses methods attackers use to gain access to a target network or systems. Customers may have internet-facing web applications or servers hosted in AWS cloud, which may be vulnerable to attacks like DDoS (Distributed Denial of Service), XSS (Cross-Site Scripting), or SQL injection. In addition, remote services like SSH (Secure Shell), RDP (Remote Desktop Protocol), SNMP (Simple Network Management Protocol), and SMB (Server Message Block) can be leveraged to gain unauthorized remote access.
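As a sketch of what auditing for this tactic might look like, the function below flags ingress rules that expose remote-access ports to the entire internet. It mirrors the shape of the data returned by EC2's DescribeSecurityGroups API but runs offline on a sample dictionary; the group ID and rules are hypothetical.

```python
# Hedged sketch: flag security-group ingress rules that open
# remote-access services to 0.0.0.0/0. The input follows the
# structure of EC2 DescribeSecurityGroups output; data is sample only.

RISKY_PORTS = {22: "SSH", 3389: "RDP", 161: "SNMP", 445: "SMB"}

def flag_open_ingress(security_group: dict) -> list:
    """Return (group_id, service) for risky internet-wide ingress rules."""
    findings = []
    for rule in security_group.get("IpPermissions", []):
        port = rule.get("FromPort")
        if port in RISKY_PORTS and any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        ):
            findings.append((security_group["GroupId"], RISKY_PORTS[port]))
    return findings

sg = {
    "GroupId": "sg-0abc123",
    "IpPermissions": [
        {"FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # SSH open to the world
        {"FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # HTTPS: expected for web apps
    ],
}
print(flag_open_ingress(sg))  # [('sg-0abc123', 'SSH')]
```

A real audit would page through all groups via boto3 and also check port ranges and IPv6 rules; the sketch only shows the core decision.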

Considering the risks, security teams should review their security architecture to ensure sufficient logging of activities, which would help identify threats.

Security teams can use the security pillar of the AWS Well-Architected Framework to identify any gaps in security best practices. Conducting such a self-assessment exercise measures the security posture of the application across several areas: identity and access management (to ensure there is no provision for unauthorized access), data security, networking, and infrastructure.

Although next-gen firewalls may provide some level of visibility to those who are accessing the applications from source IP, application security can be enhanced by leveraging AWS WAF and AWS CloudFront. These services would limit exposure and prevent potential exploits from reaching the subsequent layers.

Network architecture should also be assessed to apply network segmentation principles. This will reduce the impact of a cyberattack in the event one of its external applications is compromised.

As a final layer of protection against initial access tactic methods, security teams should regularly audit AWS accounts to ensure no administrator privileges are granted to AWS resources and no administrator accounts are being used for day-to-day activities.

When used throughout the process, threat modeling reduces the number of threats and vulnerabilities that the business needs to address. This way, the security team can focus on the risks that are most likely, and thus be more effective – while allowing the business to focus on truly unlocking the potential of AWS.

Author Bio

TCS

Ph: +91 9176292448

E-mail: raji.krishnamoorthy@tcs.com

Raji Krishnamoorthy leads the AWS Security and Compliance practice at Tata Consultancy Services. Raji helps enterprises create cloud security transformation roadmaps, build solutions to uplift security posture, and design policies and compliance controls to minimize business risks. Raji, along with her team, enables organizations to strengthen security around identity and access management, data, applications, infrastructure, and network. With more than 19 years of experience in the IT industry, Raji has held a variety of roles at TCS, including CoE lead for Public Cloud Platforms and Enterprise Collaboration Platforms.

To learn more, visit us here.

Internet Security

As organizations brace for challenging economic conditions, they will need to be strategic and flexible on where they spend their resources to maintain business resilience. Proactive intelligence and automation tools will be essential as organizations enter “survival mode,” focusing on sustaining growth and efficiency. More importantly, organizations should ensure that even with a limited workforce and tightened budgets, the value and services they deliver to customers aren’t impacted.

However, monitoring and maintaining the myriad of infrastructure and application platforms that support business services is difficult when only using traditional methods. Investing in a solution that automatically and securely collects, aggregates, and analyzes data can enable teams with proactive intelligence to help organizations achieve quick time to value and be more productive.

With proactive intelligence, businesses can get ahead of potential issues and reduce both downtime and time to resolution so teams can focus on key priorities that maintain critical operations. This has a critical impact on businesses: one hour of IT downtime often costs mid-size and enterprise companies from one million to over five million dollars, according to ITIC’s 2021 Hourly Cost of Downtime Survey. In addition, strategic investments in automation can help teams proactively identify and prevent problems while increasing security, reliability, and productivity. Rather than spending time firefighting, teams can focus on tasks that bring value to the business.

Automated Issue Avoidance

Between the move to the cloud, remote work, and the accelerated adoption of new technologies, IT complexity continues to grow while workforce attention is already spread thin. Solutions that enable proactive intelligence services can help reduce pressure on IT teams by identifying the problematic issues that cause downtime more quickly, through AI/ML-driven automated collection and analysis of product usage data. These capabilities provide a more effective mechanism for identifying potential problems, guiding remediation, and ultimately avoiding challenging service requests.

A large part of the support process today is dedicated to identifying the problem and determining its underlying cause. Without proactive support tools, companies are leaving value on the table. Expecting the unexpected in your IT environment means your business is solving the problems that are actually broken – not just the symptoms of problems – and avoiding issues before they occur.

Automate Common Workflows with APIs

APIs (Application Programming Interfaces) can be a powerful tool in automating common support workflows. APIs are a highly technical yet important aspect of a business’s underlying IT infrastructure – they are integral to bridging systems and enabling seamless transfer of information and connectivity. APIs enable different systems, applications, and platforms to connect and share data with one another and perform varied types of functions. Without APIs, enterprise tools and their benefits could become siloed – resulting in a reduced bottom line.

As organizations scale their environments, APIs are key to improving the developer experience as they facilitate collaboration and reusability. A better developer experience means better DevSecOps productivity which translates into immediate business value. Creating a software development culture that optimizes the developer experience allows teams to invest less time in internal processes and more time in creating something valuable for their customers. By automating common tasks and eliminating manual intervention, APIs can help organizations foster better developer productivity while significantly reducing costs.
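As a toy example of the kind of workflow automation APIs enable, the sketch below assembles the request an automation job might POST to a ticketing API when a proactive alert fires. The endpoint, field names, and token placeholder are hypothetical, and no network call is made here.

```python
# Toy sketch: build the HTTP request an automation job would send to
# a (hypothetical) ticketing API when proactive monitoring detects an
# issue, removing the manual step of filing the ticket by hand.

import json

def build_ticket_request(summary: str, severity: str) -> dict:
    """Assemble URL, headers, and JSON body for a ticket-creation POST."""
    return {
        "url": "https://support.example.com/api/v1/tickets",  # hypothetical
        "headers": {
            "Content-Type": "application/json",
            "Authorization": "Bearer <token>",  # placeholder credential
        },
        "body": json.dumps({
            "summary": summary,
            "severity": severity,
            "source": "proactive-intelligence",
        }),
    }

req = build_ticket_request("Datastore latency above threshold", "P2")
print(json.loads(req["body"])["severity"])  # P2
```

In practice the payload would be sent with an HTTP client and the response used to notify the on-call team, closing the loop without manual intervention.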

Improved Productivity

The process of identifying a problem, determining its root cause, and troubleshooting can be time-consuming, and requiring the customer administrator to communicate and contextualize information for every support request logged adds further to this time. Proactive intelligence capabilities can help arm customers with holistic visibility into their environments, fostering a faster, smarter, and easier way to keep them healthy and productive. Intelligence tools like VMware Skyline can empower teams with the insights to solve issues on their own, and enable those organizations to move from a reactive, firefighting mode to a proactive, predictive, and prescriptive posture.

When enterprises have tools that empower proactive responses and automate issue resolution, teams can increase productivity and dedicate more time to other business priorities. In addition, by improving overall security posture and environmental health, businesses can realize performance improvements to translate to greater operational efficiencies.

Succeeding in today’s business environment requires innovative approaches that lead to greater operational agility. Break/fix support is no longer enough to monitor and support the extensive infrastructures enterprises run today, which can span on-premises environments, remote sites, and the cloud.

Proactive intelligence and machine learning tools allow organizations to embrace an automated approach to troubleshooting, pinpointing root causes, and guiding remediation – and, when needed, an improved support experience – which translates to more productivity for teams and better visibility into systems.

To learn more, visit us here.


Increasing margins is critical to achieving sustained success in the retail industry. To maximize margins, leaders consider how to run the store more efficiently, how to deliver the best services to customers, and how to grow new services. Traditionally, they have used rear-view-mirror data to help accomplish these goals—that is, examining historical data from months prior and coming up with a plan.

Today, retailers are relying more on proactive and contextual data in real time. For instance, what are the online shopper’s preferences? Do they tend to buy button-down shirts and khakis, or jeans and t-shirts? How does a brick-and-mortar store’s layout affect purchasing decisions? Context involves gathering data about human behavior throughout the customer journey to figure out why customers buy what they buy. But how do you capture human emotions and activities in the moment, and then turn that data into useful information? How do you account for changes in behavior and preferences over time?

Obtaining the right insights, consistently, at a micro level about the consumer is key to delivering a more meaningful and personalized customer experience. Combining consumer shopping preferences with historical data can give you a contextually rich, action-reaction paradigm. To accomplish this, retailers are turning to computer vision complemented by artificial intelligence.

Watch the video: Reimagine the Future of Retail

Computer vision provides video and audio for additional context, complementing other types of data. Together, these data points become part of an analytics workflow delivering a tangible outcome. Using a federated approach, data can be analyzed where it is collected, producing insights used to make decisions in real time. 
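To make the federated idea concrete, here is a minimal Python sketch (function and field names are illustrative, and the numbers are invented): each store summarizes its own raw data at the edge where it is collected, and only those compact summaries – never the raw records – are combined centrally.

```python
# Minimal sketch of federated aggregation: raw data stays at the edge;
# only per-site summaries travel to the central analytics tier.

def local_summary(basket_values):
    """Runs at the edge: summarize raw transaction data on-site."""
    return {"n": len(basket_values), "total": sum(basket_values)}

def global_mean(summaries):
    """Runs centrally: combine per-site summaries into one metric."""
    n = sum(s["n"] for s in summaries)
    total = sum(s["total"] for s in summaries)
    return total / n

store_a = local_summary([20.0, 35.0, 45.0])  # edge site A
store_b = local_summary([10.0, 30.0])        # edge site B
avg = global_mean([store_a, store_b])        # 140.0 / 5 = 28.0
```

The design choice is the one the text describes: the bandwidth-heavy analysis happens where the data is collected, so central systems receive only what is needed for the real-time decision.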

This federated approach to analytics enables forward-thinking retailers to adopt new ways of using and orchestrating their data, with computer vision systems that scale as the business grows. New use cases become more achievable, and IT can leverage these technologies to drive further processes that build momentum toward the digitally driven store of the future.

Let’s look at how computer vision is impacting the customer experience, store security and operations, revenue growth and sustainability today and what that means going forward. 

Continuing to address a top priority for the retail industry by improving safety and security 

Most retail establishments started their computer vision journey years ago when they brought in video camera systems for security purposes, providing them with a foundation to build on. Now it’s paying off.

When tied to a computer vision system, the visual data, historical data and AI can offer real-time situational awareness. Analysis occurs mainly on-site, at the edge. It’s quick and accurate, reducing staff response time. For example, a maintenance crew member can react almost immediately to spilled substances that could cause an accident. Anomaly detection can enhance a store’s loss prevention processes such as alerting security personnel to people who are concealing stolen items, and a real-time video analytics platform can even help with finding missing children.

Tackling current and future operational efficiency challenges 

The conventional store, where you build a structure and stock it with products and displays, is being transformed by customers’ buying patterns. The Intelligent Store (see Figure 1) consists of processes around employees (scheduling and reduction of effort), inventory, and customers that can be constantly monitored and improved in real time. With the intelligent store, retailers can transform, adapt, and respond to their customers’ needs and behavior with context and personalization.

With accurate data, managers today can utilize hyper-personalization to drive more sales, demand forecasting to maintain inventories and optimized route planning to cut costs. For this, you need real-time insights using sensors and cameras, and a strategy that aligns operations with the customer experience, autonomous retail and a host of integrated technologies to make it all happen.

Dell

Figure 1. The Intelligent Store extends across all facets of the retail industry to deliver benefits including real-time operational improvements, hyper-personalization and automation, scalability and security.

One goal of an Intelligent Store is to empower customers by reducing friction in the buying experience. That means touchless checkout, where items are “rung up” automatically as customers leave the store. For staffed checkouts, computer vision can monitor customer lines and move staff where needed in real time. Video-based inventory tracking ensures items are always in stock and enables traceability, as well as optimized picking for fulfilling ecommerce grocery orders. And curbside delivery is improved by combining visual data such as number plate and/or vehicle recognition, and sensor data so staff begin preparing to deliver groceries as soon as a customer drives into the lot.

The digital twin is another technology that boosts operational efficiency. Using software models, a retailer can run simulations of a real-world environment before committing to expensive changes. Imagine a designer creating a store planogram or distribution center in 3D, and using AI to determine the freshness of perishable items (to reduce spoilage), to optimize customer flow and merchandising, and for predictive analysis. A digital twin can be rendered on-site without the need to exchange huge amounts of data with a data center as the processing occurs at the edge.

Watch the video: Edge and computer vision are enabling better Retail

Enhancing the customer experience while increasing revenues

Happy customers tend to buy more, so it is up to retailers to provide the right product at the right value. By investing in the customer experience, retailers put themselves in the best position to maximize revenue.

Consider virtual try-on, which combines computer vision, AI and augmented reality to allow shoppers to try on glasses, clothing and other items using their mobile device’s camera, or an in-store digital kiosk or mirror. “See it in your room” for furniture and electronics is similar. Virtual try-on is both immersive and a time-saver for customers, potentially resulting in higher per-session sales. 

Computer vision systems linked to inventory management systems are also a boon for the customer experience and revenue optimization. Where cameras are used to scan existing inventory and update records, stock-level checks are more accurate, helping ensure the customer’s item isn’t backordered. Automatically updating inventory after sales are completed saves back-of-house time. From a merchandising perspective, computer vision can identify which areas of a store get the most foot traffic and target hot spots where product should be placed.
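The reconciliation step between camera counts and inventory records can be sketched in a few lines of Python. The SKU names, counts, and restock threshold below are invented for illustration; the camera detection itself is assumed to happen upstream and simply deliver per-SKU counts.

```python
# Hedged sketch: reconcile camera-detected shelf counts with inventory
# records and flag SKUs that need restocking.

RESTOCK_THRESHOLD = 5  # illustrative minimum shelf quantity

def reconcile(stock: dict, detected: dict) -> list:
    """Overwrite stock records with visual counts; return low SKUs."""
    alerts = []
    for sku, count in detected.items():
        stock[sku] = count              # trust the camera's count
        if count < RESTOCK_THRESHOLD:
            alerts.append(sku)
    return alerts

stock = {"khakis": 12, "tshirt": 9}               # stale records
alerts = reconcile(stock, {"khakis": 3, "tshirt": 8})  # khakis low
```

In practice the alert list would feed the restocking workflow, so the shelf is refilled before a customer finds it empty.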

On the flip side is how to avoid losing revenue. Shrinkage in the global retail sector accounts for a staggering $100 billion USD* in annual losses, creating demand for technology and/or processes to prevent theft and fraud, and to better secure transactions. Many grocery stores now use cameras mounted at checkout stations to watch for sweetheart checking, prevent or detect item swapping and identify inaccurate scanning and payments.

Read the IDC whitepaper, “Future Loss Prevention: Advancing Fraud Detection Capabilities at Self-Checkout and Throughout the Retail Store.”

Becoming environmental stewards and following sustainability practices

Many corporations today support initiatives to conserve resources and reduce waste. Computer vision is helping stores, malls, distribution centers and the like accomplish their sustainability goals.

The retail industry has several avenues to sustainability. Two of the most constructive are reducing energy consumption and using modern inventory management techniques.

Most of us are familiar with refrigerated cases with motion sensors that turn the lights on when a door is opened. Entire facilities can use the same principles, like smart HVAC, overhead and outdoor lighting to minimize power consumption.

Reducing food waste is another way to save money while having a positive impact on the community and environment. According to RTS, about 30% of the food in U.S. grocery stores is thrown away every year. Optimized cold chain management reduces spoilage as well as the energy needed to maintain perishables from the loading dock to the freezer case or produce bin. Proactive restocking, based on historical data and AI, further ensures that items are available when needed and in sellable quantities for a particular store.
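Proactive restocking of the kind described can be sketched with a simple moving-average forecast over recent daily sales. All quantities, the window size, and the safety factor below are invented for illustration; a production system would use richer historical data and AI models.

```python
# Illustrative sketch of proactive restocking: forecast tomorrow's
# demand from recent daily sales and order only the shortfall, which
# keeps items available without over-ordering perishables.

def forecast_demand(daily_sales, window=3):
    """Moving average of the most recent days as a naive forecast."""
    recent = daily_sales[-window:]
    return sum(recent) / len(recent)

def reorder_qty(on_hand, daily_sales, safety_factor=1.2):
    """Order enough to cover forecast demand plus a safety margin."""
    need = forecast_demand(daily_sales) * safety_factor
    return max(0, round(need - on_hand))

# Last three days average 30 units; with a 20% margin and 10 on hand,
# the store orders 26 units.
qty = reorder_qty(on_hand=10, daily_sales=[30, 24, 27, 33, 30])
```

The same shortfall logic cuts both ways: it avoids empty shelves and avoids ordering stock that would spoil, which is the waste-reduction point made above.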

Although the pandemic boosted online and curbside pickup sales, the resulting supply chain issues have left customers somewhat disillusioned and wondering which important item will become hard to get. Customers will accept some inconvenience due to a worldwide event; however, retailers need to be prepared for the near-term future shopper, who has high expectations and whose loyalty may be harder to keep. That can be done through a data-driven approach using computer vision and AI.

Retail organizations can build on the safety and security infrastructure already deployed in their stores and at a pace that’s right for their business. Digital transformation is an on-going process and many retailers are already engaged with Dell Technologies in developing the right framework to guide them through their journey, while enabling their business to remain agile and innovate.

For an overview of computer vision and its impact on retail, read the Solution Brief, “Protecting retail assets and unlocking the potential of your data with AI-driven Computer Vision.”

Learn more about how computer vision is positively impacting other industries: 

- The Future Is Computer Vision – Real-Time Situational Awareness, Better Quality and Faster Insights
- Computer Vision Is Transforming the Transportation Industry, Making It Safer, More Efficient and Improving the Bottom Line
- How Computer Vision is revolutionizing the Manufacturing Supply Chain
- How the Sports and Entertainment Industry Is Reinventing the Fan Experience and Enhancing Revenues with Computer Vision

* Sensormatic Global Shrink Index: https://www.sensormatic.com/landing/shrink-index-sensormatic


For decades, organizations have tried to unlock the collective knowledge contained within their people and systems. The challenge is getting harder: every year, massive amounts of additional information are created for people to share. We’ve reached a point at which individuals are unable to consume, understand, or even find half the information that is available to them.

1. What is business analytics?

Business analytics is the practical application of statistical analysis and technologies on business data to identify and anticipate trends and predict business outcomes. Research firm Gartner defines business analytics as “solutions used to build analysis models and simulations to create scenarios, understand realities, and predict future states.”

While quantitative analysis, operational analysis, and data visualizations are key components of business analytics, the goal is to use the insights gained to shape business decisions. The discipline is a key facet of the business analyst role.

Wake Forest University School of Business notes that key business analytics activities include:

- Identifying new patterns and relationships with data mining
- Using quantitative and statistical analysis to design business models
- Conducting A/B and multivariable testing based on findings
- Forecasting future business needs, performance, and industry trends with predictive modeling
- Communicating findings to colleagues, management, and customers

2. What are the benefits of business analytics?

Business analytics can help you improve operational efficiency, better understand your customers, project future outcomes, glean insights to aid in decision-making, measure performance, drive growth, discover hidden trends, generate leads, and scale your business in the right direction, according to digital skills training company Simplilearn.

3. What is the difference between business analytics and data analytics?

Business analytics is a subset of data analytics. Data analytics is used across disciplines to find trends and solve problems using data mining, data cleansing, data transformation, data modeling, and more. Business analytics also involves data mining, statistical analysis, predictive modeling, and the like, but is focused on driving better business decisions.

4. What is the difference between business analytics and business intelligence?

Business analytics and business intelligence (BI) serve similar purposes and are often used as interchangeable terms, but BI can be considered a subset of business analytics. BI focuses on descriptive analytics, data collection, data storage, knowledge management, and data analysis to evaluate past business data and better understand currently known information. Whereas BI studies historical data to guide business decision-making, business analytics is about looking forward. It uses data mining, data modeling, and machine learning to answer “why” something happened and predict what might happen in the future.

Business analytics techniques

According to Harvard Business School Online, there are three primary types of business analytics:

- Descriptive analytics: What is happening in your business right now? Descriptive analytics uses historical and current data to describe the organization’s present state by identifying trends and patterns. This is the purview of BI.
- Predictive analytics: What is likely to happen in the future? Predictive analytics is the use of techniques such as statistical modeling, forecasting, and machine learning to make predictions about future outcomes.
- Prescriptive analytics: What do we need to do? Prescriptive analytics is the application of testing and other techniques to recommend specific solutions that will deliver desired business outcomes.

Simplilearn adds a fourth technique:

Diagnostic analytics: Why is it happening? Diagnostic analytics uses analytics techniques to discover the factors or reasons for past or current performance.
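The four types can be illustrated side by side on a toy monthly-sales series (all numbers invented; real implementations would use statistical models rather than these one-liners):

```python
# Toy illustration of the four analytics types on one sales series.

sales = [100, 110, 120, 130]            # four months of unit sales

# Descriptive: what is happening? Summarize the current state.
average = sum(sales) / len(sales)       # 115.0

# Diagnostic: why? Month-over-month deltas locate the change.
deltas = [b - a for a, b in zip(sales, sales[1:])]   # a steady +10

# Predictive: what is likely next? Extrapolate the average delta.
next_month = sales[-1] + sum(deltas) / len(deltas)   # 140.0

# Prescriptive: what should we do? A simple rule on the forecast.
action = "increase stock" if next_month > sales[-1] else "hold"
```

Each step consumes the previous one’s output, which mirrors how the four types build on each other in practice.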

Examples of business analytics

San Jose Sharks build fan engagement

Starting in 2019, the San Jose Sharks began integrating its operational data, marketing systems, and ticket sales with front-end, fan-facing experiences and promotions to enable the NHL hockey team to capture and quantify the needs and preferences of its fan segments: season ticket holders, occasional visitors, and newcomers. It uses the insights to power targeted marketing campaigns based on actual purchasing behavior and experience data. When implementing the system, Neda Tabatabaie, vice president of business analytics and technology for the San Jose Sharks, said she anticipated a 12% increase in ticket revenue, a 20% projected reduction in season ticket holder churn, and a 7% increase in campaign effectiveness (measured in click-throughs).

GSK finds inventory reduction opportunities

As part of a program designed to accelerate its use of enterprise data and analytics, pharmaceutical titan GlaxoSmithKline (GSK) designed a set of analytics tools focused on inventory reduction opportunities across the company’s supply chain. The suite of tools included a digital value stream map, safety stock optimizer, inventory corridor report, and planning cockpit.

Shankar Jegasothy, director of supply chain analytics at GSK, says the tools helped GSK gain better visibility into its end-to-end supply chain and then use predictive and prescriptive analytics to guide decisions around inventory and planning.

Kaiser Permanente streamlines operations

Healthcare consortium Kaiser Permanente uses analytics to reduce patient waiting times and the amount of time hospital leaders spend manually preparing data for operational activities.

In 2018, the consortium’s IT function launched Operations Watch List (OWL), a mobile app that provides a comprehensive, near real-time view of key hospital quality, safety, and throughput metrics (including hospital census, bed demand and availability, and patient discharges).

In its first year, OWL reduced patient wait time for admission to the emergency department by an average of 27 minutes per patient. Surveys also showed hospital managers reduced the amount of time they spent manually preparing data for operational activities by an average of 323 minutes per month.

Business analytics tools

Business analytics professionals need to be fluent in a variety of tools and programming languages. According to the Harvard Business Analytics program, the top tools for business analytics professionals are:

- SQL: SQL is the lingua franca of data analysis. Business analytics professionals use SQL queries to extract and analyze data from transaction databases and to develop visualizations.
- Statistical languages: Business analytics professionals frequently use R for statistical analysis and Python for general programming.
- Statistical software: Business analytics professionals frequently use software including SPSS, SAS, Sage, Mathematica, and Excel to manage and analyze data.
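As a brief illustration of the SQL workflow, the following uses Python’s built-in sqlite3 module with an in-memory database and invented data; the table and column names are placeholders, not from any real system.

```python
# Sketch of a typical analyst query via Python's built-in sqlite3.
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("west", 120.0), ("west", 80.0), ("east", 50.0)],
)

# Extract and analyze: total revenue per region, largest first.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders "
    "GROUP BY region ORDER BY SUM(amount) DESC"
).fetchall()
# rows == [("west", 200.0), ("east", 50.0)]
```

The aggregated rows are exactly the shape an analyst would then feed into a visualization or spreadsheet.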

Business analytics dashboard components

According to analytics platform company OmniSci, the main components of a typical business analytics dashboard include:

- Data aggregation: Before it can be analyzed, data must be gathered, organized, and filtered.
- Data mining: Data mining sorts through large datasets using databases, statistics, and machine learning to identify trends and establish relationships.
- Association and sequence identification: Predictable actions that are performed in association with other actions or sequentially must be identified.
- Text mining: Text mining is used to explore and organize large, unstructured datasets for qualitative and quantitative analysis.
- Forecasting: Forecasting analyzes historical data from a specific period to make informed estimates predictive of future events or behaviors.
- Predictive analytics: Predictive business analytics use a variety of statistical techniques to create predictive models that extract information from datasets, identify patterns, and provide a predictive score for an array of organizational outcomes.
- Optimization: Once trends have been identified and predictions made, simulation techniques can be used to test best-case scenarios.
- Data visualization: Data visualization provides visual representations of charts and graphs for easy and quick data analysis.

Business analytics salaries

Here are some of the most popular job titles related to business analytics and the average salary for each position, according to data from PayScale:

Analytics manager: $71K-$132KBusiness analyst: $48K-$84KBusiness analyst, IT: $51K-$100KBusiness intelligence analyst: $52K-$98KData analyst: $46K-$88KMarket research analyst: $42K-$77KQuantitative analyst: $61K-$131KResearch analyst, operations: $47K-$115KSenior business analyst: $65K-$117KStatistician: $56K-$120KAnalytics