Every company and government entity is tasked with striking a critical balance between data access and security. As Forrester’s Senior Analyst Richard Joyce stated, “For a typical Fortune 1000 company, just a 10 percent increase in data accessibility will result in more than $65 million additional net income.” As the need to become more data-driven accelerates, it’s imperative enterprises equally balance privacy and governance requirements.

To achieve this balance, we need to change how we perceive data security. Amid growing friction between teams — those who create and manage data access policies and those who need data to perform their duties — we must accept that security, IT, and privacy teams want to provision as much data as possible. But those teams face real constraints and compliance complexity.

Traditionally, data security, privacy, and regulatory compliance have been treated as a cost center. Instead, we need to look at data security as a means for positive change: a driver of greater data accessibility, enhanced operational efficiency, and real business value.

Many remain far short of the goal

Enterprises of all sizes struggle with the shift. NewVantage Partners' Data and AI Leadership Executive Survey 2023 found that less than a quarter of firms reported having established a data-driven organization or data culture. And in its State of Data and Analytics Governance report, Gartner suggests that by 2025, 80 percent of organizations seeking to scale digital business, including analytics initiatives, will fail because they don't modernize their data and analytics governance.

Data access drives growth. So what's the reason for low data-culture adoption? Becoming truly data-driven requires tight collaboration across many different functions, and there is often little certainty about individual roles and responsibilities. These strategic gaps must be addressed.

The reality of data security and access

When it comes to data security and access, companies are typically either:

Overly restrictive on data access. Data security is seen as an impediment to overall company growth, typically due to data, organizational, and technological complexity.

Or, overly focused on perimeter and application defenses, leveraging cyberdefenses and coarse-grained identity and access management (IAM). Data systems are left open to exploitation in the event of a breach.

Most enterprises experience the worst of both scenarios, where data security and access are simply broken — inconsistent and fragmented.

A primary challenge of solving the data democratization balancing act lies in the complex web of internal and external privacy, security, and governance policies. They change over time and need to be applied and maintained consistently and efficiently across business teams.

In the middle are the technical teams managing complex data and analytical systems. Faced with these constraints, security, privacy, and data management teams default to a tight lockdown of data to ensure compliance and security. It's not any one team's fault, but it is a major blocker to becoming data-driven.

Unified data security platform

Siloed and decentralized processes, unclear roles and responsibilities, and the absence of a holistic strategy all breed inefficiency. So what's the solution as more companies face costly data breaches and low data usability rates? An enterprise-wide, scalable strategy that leverages a unified data security platform — one with integrated capabilities to simplify and automate universal data security processes across the entire data and analytics ecosystem. With the ability to discover and classify sensitive data, data attributes can be used to automatically grant instantaneous access to authorized users. Proper data security governance helps teams get access to more data, faster.

Additional data masking and encryption layers can be added to make sensitive data available for analytics without compromising security. Even if a breach occurs, fine-grained access limits exposure, and audit capabilities quickly identify compromised data.
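As a rough illustration of the attribute-based masking described above, here is a minimal Python sketch. The column tags, role names, and masking rule are all invented for illustration and don't reflect any particular product:

```python
# Toy sketch of attribute-based access control with masking.
# All tags, roles, and rules here are hypothetical illustrations.

SENSITIVE_TAGS = {"ssn", "email"}

def mask(value: str) -> str:
    """Redact all but the last two characters."""
    return "*" * max(len(value) - 2, 0) + value[-2:]

def read_record(record: dict, tags: dict, role: str) -> dict:
    """Return the record with sensitive columns masked for non-privileged roles."""
    out = {}
    for column, value in record.items():
        if tags.get(column) in SENSITIVE_TAGS and role != "privacy_officer":
            out[column] = mask(value)
        else:
            out[column] = value
    return out

record = {"name": "Ada", "ssn": "123-45-6789"}
tags = {"ssn": "ssn"}

print(read_record(record, tags, "analyst"))          # ssn masked
print(read_record(record, tags, "privacy_officer"))  # full value visible
```

In a real platform the tag lookup would come from automated discovery and classification rather than a hand-written dictionary, but the flow is the same: policy attributes decide, per column and per role, whether the reader sees raw or masked data.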

Executing a proper data security strategy provides the last mile of the data governance and cataloging journey. All of this is key to the balancing act of data democratization, with comprehensive data governance enabling faster insights while maintaining compliance.

Enterprise-wide governed data sharing

Privacera helps Fortune companies modernize their data architecture via a holistic, enterprise-wide data security platform and automated data governance. A data security platform empowers the data democratization you need to increase data usability and nurture your data-driven culture. Analysts and business units get more data faster. IT frees up time and resources. Security and privacy teams can easily monitor and implement data security policies.

Learn more about achieving modern data security governance and democratized analytics for faster insights here.


Business intelligence definition

Business intelligence (BI) is a set of strategies and technologies enterprises use to analyze business information and transform it into actionable insights that inform strategic and tactical business decisions. BI tools access and analyze data sets and present analytical findings in reports, summaries, dashboards, graphs, charts, and maps to provide users with detailed intelligence about the state of the business.

The term business intelligence often also refers to a range of tools that provide quick, easy-to-digest access to insights about an organization’s current state, based on available data.

Benefits of BI

BI gives business decision-makers the information they need to make informed decisions. But the benefits of BI extend beyond business decision-making, according to data visualization vendor Tableau, and include the following:

Data-driven business decisions: The ability to drive business decisions with data is the central benefit of BI. A strong BI strategy can deliver accurate data and reporting capabilities faster to business users to help them make better business decisions in a more timely fashion.
Faster analysis and intuitive dashboards: BI improves reporting efficiency by condensing reports into dashboards that are easy for non-technical users to analyze, saving them time when seeking to glean insights from data.
Increased organizational efficiency: BI can help provide holistic views of business operations, giving leaders the ability to benchmark results against larger organizational goals and identify areas of opportunity.
Improved customer experience: Ready access to data can help employees charged with customer satisfaction provide better experiences.
Improved employee satisfaction: Providing business users access to data without having to contact analysts or IT can reduce friction, increase productivity, and facilitate faster results.
Trusted and governed data: Modern BI platforms can combine internal databases with external data sources into a single data warehouse, allowing departments across an organization to access the same data at one time.
Increased competitive advantage: A sound BI strategy can help businesses monitor their changing market and anticipate customer needs.

Business intelligence examples

Reporting is a central facet of BI, and the dashboard is perhaps the archetypal BI tool. Dashboards are hosted software applications that automatically pull together available data into charts and graphs that give a sense of the immediate state of the company.
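The core job of a dashboard, pulling raw rows into summary views, can be sketched in a few lines of Python. The sales figures and regions below are invented sample data:

```python
from collections import defaultdict

# Invented sample data: (region, revenue) rows as a dashboard might ingest.
sales = [("East", 120.0), ("West", 80.0), ("East", 60.0), ("West", 40.0)]

def summarize(rows):
    """Aggregate raw rows into the per-region totals a dashboard chart would plot."""
    totals = defaultdict(float)
    for region, revenue in rows:
        totals[region] += revenue
    return dict(totals)

print(summarize(sales))  # {'East': 180.0, 'West': 120.0}
```

A real BI platform layers scheduling, drill-downs, and visualization on top, but the underlying operation is this kind of grouping and aggregation over whatever data sources the dashboard connects to.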

Although business intelligence does not tell business users what to do or what will happen if they take a certain course, neither is BI solely about generating reports. Rather, BI offers a way for people to examine data to understand trends and derive insights by streamlining the effort needed to search for, merge, and query the data necessary to make sound business decisions.

For example, a company that wants to better manage its supply chain needs BI capabilities to determine where delays are happening and where variabilities exist within the shipping process. That company could also use its BI capabilities to discover which products are most commonly delayed or which modes of transportation are most often involved in delays.

The potential use cases for BI extend beyond the typical business performance metrics of improved sales and reduced costs.

Tableau and software review site G2 also offer concrete examples of how organizations might put business intelligence tools to use:

A co-op organization could use BI to keep track of member acquisition and retention.
BI tools could automatically generate sales and delivery reports from CRM data.
A sales team could use BI to create a dashboard showing where each rep's prospects are in the sales pipeline.

Business intelligence vs. business analytics

Business analytics and BI serve similar purposes and are often used interchangeably, but BI should be considered a subset of business analytics. BI focuses on descriptive analytics, data collection, data storage, knowledge management, and data analysis to evaluate past business data and better understand currently known information. Whereas BI studies historical data to guide business decision-making, business analytics looks forward: it uses data mining, data modeling, and machine learning to answer why something happened and predict what might happen in the future.

Business intelligence is descriptive, telling you what’s happening now and what happened in the past to get your organization to that state: Where are sales prospects in the pipeline today? How many members have we lost or gained this month? Business analytics, on the other hand, is predictive (what’s going to happen in the future?) and prescriptive (what should the organization be doing to create better outcomes?).
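The descriptive-versus-predictive distinction can be made concrete with a toy calculation. The monthly sales figures and the naive linear extrapolation below are purely illustrative:

```python
# Descriptive vs. predictive on an invented monthly-sales series.
sales = [100, 110, 120, 130]  # hypothetical monthly figures

# Descriptive (BI): summarize what already happened.
average = sum(sales) / len(sales)

# Predictive (business analytics): naive linear extrapolation of the trend.
step = (sales[-1] - sales[0]) / (len(sales) - 1)
forecast = sales[-1] + step

print(average)   # 115.0 — what happened
print(forecast)  # 140.0 — what might happen next
```

BI tools stop at the first number; business analytics tools produce the second, typically with far more sophisticated models than a straight-line trend.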

This gets to the heart of the question of who business intelligence is for. BI aims to deliver straightforward snapshots of the current state of affairs to business managers. While the predictions and advice derived from business analytics require data science professionals to analyze and interpret, one of the goals of BI is that it should be easy for relatively non-technical end users to understand, and even to dive into the data and create new reports.

Business intelligence systems and tools

A variety of different types of tools fall under the business intelligence umbrella. The software selection service SelectHub breaks down some of the most important categories and features:

Dashboards
Visualizations
Reporting
Data mining
ETL (extract-transform-load — tools that import data from one data store into another)
OLAP (online analytical processing)

Of these tools, dashboards and visualizations are by far the most popular; they offer the quick, easy-to-digest data summaries that are at the heart of BI's value proposition.

Some of the top BI tools include:

Domo
Dundas BI
Microsoft Power BI
MicroStrategy
Oracle Analytics Cloud
Qlik
SAS
Sisense
Tableau
Tibco

Business intelligence jobs

Any company that’s serious about BI will need to have business intelligence analysts on staff. BI analysts use data analytics, data visualization, and data modeling techniques and technologies to identify trends. The role combines hard skills such as programming, data modeling, and statistics, with soft skills like communication, analytical thinking, and problem-solving.

Even if your company relies on self-service BI tools on a day-to-day basis, BI analysts have an important role to play, as they are necessary for managing and maintaining those tools and their vendors. They also set up and standardize the reports that managers are going to be generating to make sure that results are consistent and meaningful across your organization. And to avoid garbage in/garbage out problems, business intelligence analysts need to make sure the data going into the system is correct and consistent, which often involves getting it out of other data stores and cleaning it up.

Business intelligence analyst jobs often require only a bachelor's degree, at least at the entry level, though an MBA may be helpful or even required to advance up the ranks. As of January 2023, the median business intelligence analyst salary is around $72,000, though depending on your employer it could range from $53,000 to $97,000.

More on business intelligence:

8 keys to a successful business intelligence strategy
Top 12 business intelligence tools
6 BI challenges IT teams must address
What is a business intelligence analyst? A role for driving business value with data
Career roadmap: Business intelligence analyst
9 business intelligence certifications to advance your BI career
Top 10 BI data visualization tools
Top 7 business intelligence trends

Enterprises across multiple industries and domains are increasingly turning to graph analytics, thanks to its power to uncover complex non-linear patterns and relationships in a dataset that would not be easily visible or apparent using most traditional analytics techniques. Applications of graph analytics are wide-ranging, including customer relationship management, social network analysis, and financial crime detection — to name just a few. With the advancement of computational platforms and corresponding software, enterprises have huge opportunities to leverage graph technology to create competitive advantages over their peers.

What are the benefits of graph analytics technology?

Stanford University associate professor of computer science Jure Leskovec has said that graphs are a general language for describing and analyzing entities with relations and interactions. This reflects the importance of representing data in a native form that captures its complex and nested relationships. Traditionally, data is stored in two-dimensional tables, using rows and columns with predefined relationships to represent context. However, complex relationships — such as social, regulatory, and banking customer relationship networks — are better organized, stored, and analyzed in graph data solutions, which natively represent the relations and interactions among entities. Comparing these graphics illustrates this point.

The graph model diagram on the right describes much more clearly the interactions among the entities than the two-dimensional table on the left. The graph clearly shows that all the entities can be grouped into two clusters and two key influencers are present. As the network grows bigger, it is much easier to generate complex insights from a graph data solution, which would not be accessible in a traditional tabular representation.

In addition, relational graphs can be used to represent complex domains that have a rich relational structure. By explicitly developing machine learning model(s) utilizing the relational structures uncovered by graph analytics, enhanced insights and model performance can be achieved.

What makes up a graph database?

The basic components of graph data are nodes, edges (links), and the graph itself. Entities can be represented as nodes, and the connections between entities (e.g., ownership, shared addresses, email, phone numbers) can be represented as links or edges. In the example graph diagram shown below, the solid circles are nodes, and the lines connecting them are edges. An entire collection of nodes and edges can be represented as a graph, and a domain may contain multiple such graphs. A graph can be either directed or undirected, depending on whether the edges have directionality. For example, a social media network that allows users to “follow” other users is a directed graph — just because user A follows user B, user B does not have to follow user A. Additionally, a graph can be weighted, where each link (edge) between two nodes carries a weight reflecting the strength of the connection.
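These components can be sketched in Python as a small adjacency structure. The nodes, edges, and weights below are invented for illustration:

```python
# Minimal directed, weighted graph as an adjacency dict.
# The "follow" edges are hypothetical social-network data.
graph = {
    "A": {"B": 1.0},            # A follows B (weight = connection strength)
    "B": {},                    # B does not follow A back: the graph is directed
    "C": {"A": 0.5, "B": 2.0},
}

def out_degree(g, node):
    """Number of outgoing edges from a node."""
    return len(g.get(node, {}))

def edge_weight(g, src, dst):
    """Weight of the directed edge src -> dst, or None if absent."""
    return g.get(src, {}).get(dst)

print(out_degree(graph, "C"))        # 2
print(edge_weight(graph, "A", "B"))  # 1.0
print(edge_weight(graph, "B", "A"))  # None: directionality matters
```

Dedicated graph databases add indexing, query languages, and persistence on top, but the node/edge/weight model they store is essentially this.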

How are enterprises using graph analytics today?

Graph analytics is being used in a broad range of industries for a variety of applications. Example use cases described below provide a glimpse of the graph analytics landscape.

Customer and sales relationship management: By understanding the relationships among its customers, an enterprise such as a bank can target its sales efforts more effectively to achieve a higher ROI. Banks can optimize their sales and relationship management resources within the local network (a collection of bank accounts) by targeting key influencers (e.g., the account with the highest balance, or account holders with a high percentage of ownership in other companies), consolidating marketing efforts when two or more sales relationships (local networks) share similar attributes, and dividing and conquering when a sales relationship (local network) is too big.

Social network analysis: Social media companies use graph analytics extensively to identify key influencers and the interactions among their users, gaining a competitive advantage over rivals. Using the insights about their users revealed by graph analytics, they can create executable business strategies more effectively.

Financial crime detection: Perpetrators of financial crimes, such as money laundering, try to hide the origin of ill-gotten funds using multiple techniques. Graph analytics can quickly reveal connections between known financial criminals or sanctioned entities and seemingly innocent customers — surfacing suspicious transactions that would otherwise go unnoticed.

Biological/clinical research: Graph analytics is being used in several research areas, e.g., predicting a protein’s 3D structure based on its amino acid sequence (nodes are amino acids in a protein sequence and edges are proximity between amino acids). Knowing the 3D structure of proteins can help scientists, for example, in drug discovery.

Marketing: Patterns revealed by graph analytics in a user/customer database can be used to develop more effective marketing, e.g., product recommendations — songs, movies, retail purchases, etc.
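The financial crime detection use case above boils down to path-finding in a graph: is a customer connected, directly or through intermediaries, to a known bad actor? A minimal sketch, assuming an invented transfer graph (none of the entity names are real):

```python
from collections import deque

# Hypothetical transaction graph: who sent funds to whom.
transfers = {
    "sanctioned_entity": ["shell_co"],
    "shell_co": ["customer_42"],
    "customer_42": [],
}

def connected(graph, start, target):
    """Breadth-first search: is there any transfer path from start to target?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(connected(transfers, "sanctioned_entity", "customer_42"))  # True
```

Production systems use far richer queries (path length limits, edge types, weights), but even this breadth-first traversal surfaces an indirect link that a row-by-row scan of a transactions table would miss.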

Considerations for successful implementation

Aligning business operations to a graph model is essential for a successful implementation of graph analytics in an enterprise's operations. It can be a significant effort to translate business operations into data points that represent nodes and edges in graph theory. For example, to represent banking transactions as a graph, a node can be any entity that makes or receives deposits, a guarantor, a signer, and so on. An edge can be a directed link from the entity that makes a deposit to the beneficiary of the deposit, or another type of transaction. The data representation grows significantly if there is no thoughtful process to filter the relevant entities and transactions or linkages. In addition, there may be special cases or exceptions that require human intervention.

Data quality is another key element of success. Graph is a data-driven approach to represent relationships. If the underlying data is not correct or consistent, the insights generated from graph analysis can be adversely affected.

Computational resources are another important consideration for enterprise-level implementation. The data representation of a network can be very complex as it may have arbitrary data size and a complex topological structure. Graph data often have dynamic and multimodal features that span different levels (node/edge/graph) and contexts. As an example, the features in a banking dataset may include different types of bank account holders (node level), means of transaction (edge level), amounts of transactions (edge level), and legitimacy of transactions (suspicious or not) within the local network (graph level), as well as within the same system. Complex computations and mathematical estimations require intensive computational resources to accomplish these challenging tasks.

With the advancement of computational platforms and corresponding software, enterprises have huge opportunities to leverage graph technology at scale to create competitive advantages over their peers and to gain deeper insights available within their own data.

Learn more about graph technology and other Protiviti emerging technology solutions.

Connect with the authors:

Lucas Lau

Senior Director – Machine Learning and AI Lead, Protiviti

Arun Tripathi

Director – Machine Learning and AI, Protiviti


Higher education (HE) is entering a new era of data-driven insights, which promise to elevate both learner experience and institutional performance. The HE colleges and universities capable of collecting and leveraging data in a timely manner will not only boost student outcomes but also run their back-office operations in a significantly more effective and cost-efficient way, says Alex Pearce, Chief Technologist for Education at Softcat.

Integration Platform as a Service (IPaaS) and middleware solutions are the key to making sense of institutions' complex mix of new and legacy systems, enabling them to create a powerful single view of their data. With the help of IPaaS, colleges and universities can leverage exciting new technologies and use cases already revolutionising sectors such as finance, healthcare and retail. Here are the top five data-driven trends that HE institutions will be able to tap into with the help of IPaaS.

Personalised learning experiences 

With IPaaS unlocking access to granular student data, institutions will be able to tailor their offering to the individual, guiding them towards success. They will also be able to build and leverage a unified data dashboard showing key student and faculty metrics, such as attendance levels, grades and resource usage. Artificial intelligence and machine learning can then be used to generate predictive analytics insights, nudging students towards beneficial behaviours.
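The nudging idea can be sketched as a simple rule over unified dashboard metrics. The students and the 75% attendance threshold below are invented for illustration:

```python
# Toy version of a "nudge" rule over unified student metrics.
# The 75% attendance threshold is an invented illustration.

students = [
    {"name": "A", "attendance_pct": 92, "avg_grade": 71},
    {"name": "B", "attendance_pct": 60, "avg_grade": 55},
]

def needs_nudge(student, min_attendance=75):
    """Flag students whose attendance drops below the threshold."""
    return student["attendance_pct"] < min_attendance

flagged = [s["name"] for s in students if needs_nudge(s)]
print(flagged)  # ['B']
```

Real predictive models would weigh many signals rather than one threshold, but the point stands: once IPaaS unifies the metrics, rules and models like this can run over them automatically.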

Smart buildings and campuses 

IPaaS can play a key role in collating building data and creating a real-estate dashboard that ensures buildings are used as effectively and cost-efficiently as possible. These insights can ensure heating, energy use and occupancy levels are optimised – something that's particularly important during the energy crisis, when energy prices are spiralling.

Real-estate data can also be used alongside machine learning to foresee building issues and trigger predictive maintenance, fixing problems before they impact building availability. 

Next-generation remote learning 

The pandemic vividly highlighted the value of remote learning for HE institutions. And now, with society largely reopened, online teaching and meeting platforms continue to help to overcome physical distances. Thanks to IPaaS institutions can continue to extend their reach, connecting with students, speakers and networks locally, nationally, and internationally, making UK HE easily accessible globally. 

Student welfare 

Mental health is an increasingly important facet of HE; institutions that fall short risk both harming their students and suffering reputational damage as a result. IPaaS is the data link that can enable machine learning to spot worrying patterns in student behaviour and flag them to faculty in real time.

Talent spotting  

IPaaS solutions can be used to create a holistic summary of student performance. When teamed with predictive analytics this approach can help faculty identify – in an objective manner – which students are best suited to further study. By surfacing granular data around student performance and behaviour, institutions are more likely to identify the candidates most likely to succeed.  


With competition between HE institutions for budget and talent at an all-time high it has never been more important for schools, colleges and universities to find new, personalised ways to engage with students, streamline their operations and respond rapidly to new challenges and opportunities.  

Leading institutions now realise that fragmented technology is one of the biggest barriers to achieving their strategic goals. As a result, they are increasingly looking to best-in-class IPaaS data integration and automation platforms to help them leverage data from their legacy systems so they can evolve. Institutions that fail to harness the power of IPaaS face being left behind. 

For more information please download the following whitepaper: How higher-education institutions can reap the data dividend thanks to IPaaS


Every organization pursuing digital transformation needs to optimize IT from edge to cloud to move faster and speed time to innovation. But the devil’s in the details. Each proposed IT infrastructure purchase presents decision-makers with difficult questions. What’s the right infrastructure configuration to meet our service level agreements (SLAs)? Where should we modernize — on-premises or in the cloud? And how do we demonstrate ROI in order to proceed?

There are no easy, straightforward answers. Every organization is at a different stage in the transformation journey, and each one faces unique challenges. The conventional approach to IT purchasing decisions has been overwhelmingly manual: looking through spreadsheets, applying heuristics, and trying to understand all the complex dependencies of workloads on underlying infrastructure.

Partners and sellers are similarly constrained. They must provide a unique solution for each customer with little to no visibility into a prospect’s IT environment. This has created an IT infrastructure planning and buying process that is inaccurate, time-consuming, wasteful, and inherently risky from the perspective of meeting SLAs.

Smarter solutions make for smarter IT decisions

It’s time to discard legacy processes and reinvent IT procurement with a new approach that leverages the power of data-driven insights. For IT decision makers and their partners and sellers, a modern approach involves three essential steps to optimize procurement — and accelerate digital transformation:

1. Understand your VM needs

Before investing in infrastructure modernization, it’s critical to get a handle on your current workloads. After all, you must have a clear understanding of what you already have before deciding on what you need. To reach that understanding, enterprises, partners, and sellers should be able to collect and analyze fine-grained resource utilization data per virtual machine (VM) — and then leverage those insights to precisely determine the resources each VM needs to perform its job.

Why is this so important? VM admins often select from a menu of different-sized VM templates when they provision a workload. They typically do so without access to utilization data — which can lead to slowed performance due to under-provisioning, or wasted capacity if they choose an oversized template. It's essential to right-size your infrastructure plan before proceeding.
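The right-sizing step can be sketched as a simple rule over per-VM utilization samples. The 95th-percentile target and 20% headroom below are illustrative choices, not any vendor's actual method:

```python
# Sketch: pick a VM CPU allocation from observed utilization samples.
# The 95th-percentile rule and 20% headroom are illustrative choices.

def recommend_cpus(samples_pct, allocated_cpus, headroom=1.2):
    """Size to the 95th-percentile observed usage plus headroom."""
    ordered = sorted(samples_pct)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    needed = allocated_cpus * (p95 / 100.0) * headroom
    return max(1, round(needed))

# Invented utilization history (% of an 8-vCPU allocation actually used).
samples = [10, 12, 15, 11, 14, 40, 13, 12, 16, 11]
print(recommend_cpus(samples, allocated_cpus=8))  # 2 (far fewer than the 8 allocated)
```

A real solution tracks memory, storage, and network the same way and uses much longer sampling windows, but the principle is identical: measure what each VM actually consumes, then size to that plus a safety margin.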

2. Model and price infrastructure with accuracy

Any infrastructure purchase requires a budget, or at least an understanding of how much money you intend to spend. To build that budget, an ideal IT procurement solution provides an overview of your inventory, including aggregate information on storage, compute, virtual resource allocation, and configuration details. It would also provide a simulator for on-premises IT that includes the ability to input your actual costs of storage, hosts, and memory. Bonus points for the ability to customize your estimate with depreciation term, as well as options for third-party licensing and hypervisor and environmental costs.

Taken together, these capabilities will tell you how much money you’re spending to meet your needs — and help you to avoid overpaying for infrastructure.
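A toy version of such a cost model might look like the following. Every price and the three-year depreciation term are invented placeholders:

```python
# Toy on-premises cost model. Every price below is an invented placeholder.

def annual_onprem_cost(hosts, cost_per_host, storage_tb, cost_per_tb,
                       depreciation_years=3, annual_opex=0.0):
    """Amortize hardware over the depreciation term and add yearly opex."""
    capex = hosts * cost_per_host + storage_tb * cost_per_tb
    return capex / depreciation_years + annual_opex

cost = annual_onprem_cost(hosts=4, cost_per_host=15_000,
                          storage_tb=50, cost_per_tb=300,
                          depreciation_years=3, annual_opex=10_000)
print(round(cost))  # 35000
```

A real simulator would also account for licensing, hypervisor, and environmental costs as the text notes, but even this skeleton makes the budget question concrete: amortized capital plus recurring operating spend.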

3. Optimize workloads across public and private clouds

Many IT decision makers wonder about the true cost of running particular applications in the public cloud versus keeping them on-premises. Public cloud costs often start out attractively low but can increase precipitously as usage and data volumes grow. As a result, it’s vital to have a clear understanding of cost before deciding where workloads will live. A complete cost estimate involves identifying the ideal configurations for compute, memory, storage, and network when moving apps and data to the cloud.

To do this, your organization and your partners and sellers need a procurement solution that can map their entire infrastructure against current pricing and configuration options from leading cloud providers. This enables you to make quick, easy, data-driven decisions about the costs of running applications in the cloud based on the actual resource needs of your VMs.

And, since you've already right-sized your infrastructure (step 1), you won't have to worry about moving idle resources to the cloud and paying for capacity you don't need.
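Mapping right-sized requirements against a provider price list can be sketched as a simple search. The instance names and hourly prices below are hypothetical, not real provider SKUs:

```python
# Sketch: pick the cheapest cloud instance that covers a VM's needs.
# Instance names and hourly prices are hypothetical, not real provider SKUs.

catalog = [
    {"name": "tiny",   "vcpus": 2, "ram_gb": 4,  "usd_per_hr": 0.05},
    {"name": "medium", "vcpus": 4, "ram_gb": 16, "usd_per_hr": 0.17},
    {"name": "large",  "vcpus": 8, "ram_gb": 32, "usd_per_hr": 0.34},
]

def cheapest_fit(vcpus_needed, ram_needed_gb):
    """Cheapest catalog entry meeting both CPU and memory requirements."""
    fits = [i for i in catalog
            if i["vcpus"] >= vcpus_needed and i["ram_gb"] >= ram_needed_gb]
    return min(fits, key=lambda i: i["usd_per_hr"], default=None)

print(cheapest_fit(3, 8)["name"])  # medium
print(cheapest_fit(16, 64))        # None: nothing in this catalog fits
```

A real procurement tool runs this matching against live pricing from multiple providers and for every VM at once, which is what turns the cloud-versus-on-premises question into a data-driven comparison rather than a guess.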

HPE leads the way in modern IT procurement

HPE has transformed the IT purchasing experience with a simple procurement solution delivered as a service: HPE CloudPhysics. Part of the HPE GreenLake edge-to-cloud platform, HPE CloudPhysics continuously monitors and analyzes your IT infrastructure, models that infrastructure as a virtual environment, and provides cost estimates of cloud migrations. Since it’s SaaS, there’s no hardware or software to deal with — and no future maintenance.

HPE CloudPhysics is powered by some of the most granular data capture in the industry, with over 200 metrics for VMs, hosts, data stores, and networks. With insights and visibility from HPE CloudPhysics, you and your sellers and partners can seamlessly collaborate to right-size infrastructure, optimize application workload placement, and lower costs. Installation takes just minutes, with insights generated in as little as 15 minutes.

Across industries, HPE CloudPhysics has already collected more than 200 trillion data samples from more than one million VM instances worldwide. With well over 4,500 infrastructure assessments completed, HPE CloudPhysics already has a proven record of significantly increasing the ROI of infrastructure investments.

This is the kind of game-changing solution you’re going to need to transform your planning and purchasing experience — and power your digital transformation.


About Jenna Colleran


Jenna Colleran is a Worldwide Product Marketing Manager at HPE. With over six years in the storage industry, Jenna has worked in primary storage and cloud storage, most recently in cloud data and infrastructure services. She holds a Bachelor of Arts degree from the University of Connecticut.


By Thomas Been, DataStax

Winning enterprises take data, process it, and use it to deliver in-the-moment experiences to customers. Starbucks, Netflix, The Home Depot, and countless other organizations large and small have built great success based on this understanding. But what does that success look like, and what are the challenges faced by organizations that use real-time data?

Released today, The State of the Data Race 2022 is a summary of important new research based on an in-depth survey of more than 500 technology leaders and practitioners across a variety of industries about their data strategies. The survey was developed to explore how organizations use real-time data – the data that powers in-the-moment use cases such as recommendations, personalization, always-up-to-date inventory, and logistics. The results show that real-time data pays off in some very important ways: higher revenue growth and increased developer productivity.

Let’s take a closer look at three surprising findings that surfaced in the report.

Real-time data drives revenue growth

Speed is everything when it comes to staying competitive. Fortunately, technology has evolved to the point where it can support the need for speed – now companies of all sizes have the ability to build data architectures that can leverage data for powerful, in-the-moment user experiences.

And, according to the survey results, that’s exactly what they are doing – and they’re driving real results with it.

When looking specifically at responses from the top users of real-time data – organizations that excel at leveraging data to create new products and revenue streams – there is a strong strategic focus on real-time data. More than half of these organizations (52%) say their corporate strategy is focused on building organization-wide value with real-time data.  

Of these organizations, 42% say that real-time data has a “transformative impact” on revenue growth. But perhaps even more significant is a finding that applies to all the organizations involved in the survey (not just the data leaders); it shows a strong correlation between building a real-time data strategy and revenue growth. Overall, 71% of all respondents agree (either completely or somewhat) that they can tie their revenue growth directly to real-time data.

In other words, an organization-wide focus on leveraging real-time data is strongly linked to accelerated revenue growth.

To quote Greg Sly, senior vice president of infrastructure and platform services at Verizon: “Real-time data is air. If you don’t have real-time data, you’re back in 1985.”

Developer productivity improves with access to real-time data

There’s another significant advantage that real-time data brings to the organizations that make it a priority: it helps developers, the people responsible for building instantaneous experiences, do their jobs better.

When asked how the use of real-time data has impacted developers’ jobs, a majority (66%) of real-time-data-focused organizations agree that developer productivity has improved. 

It should come as no surprise that developers have the tightest relationship with real-time data. The largest share of all organizations (27%) identified developer teams as the group that works most extensively with real-time data (this figure rises to 32% among organizations that hold real-time data as core to their strategy).

Given that developer teams work most closely with real-time data, ensuring they have fast, ready access to it becomes particularly significant. For these teams, it’s all about ease of building and time to market. The complexity and siloed technologies that plague many organizations’ data estates can be a serious detriment; developers shouldn’t have to grapple with figuring out where the data they need resides, or how to access it.

(For more on enabling developers with real-time data, read You’ve Got the Data. Why Can’t Your Developers Build with It?).

Yet it’s not just about enabling access to the data. It’s also about reorienting how organizations operate to make developers more productive.

Compared to organizations with less aggressive commitments to real-time data, those who have an organization-wide strategy for creating value with it are more likely to:

Have clear product owners (43% versus 32%)

Have business unit accountability for data (42% versus 30%)

Have line-of-business staff, developers, and data scientists working together in cross-functional teams (45% versus 32%)

This organizational pattern lines up every role behind one overarching goal on the way to creating value: shipping an application.

Until an application ships, all the good ideas in the world won’t impact the customer experience – or the bottom line. Organizations that make a strategic commitment to creating value with real-time data are making the changes that focus attention, align incentives, and drive prioritization around the fact that everyone will sink or swim together based on developers’ productivity.

Real-time data challenges are organizational

Across all the organizations polled in the survey, the top three challenges they face when trying to leverage real-time data are complexity, costs, and accessibility (39%, 32%, and 30%, respectively, identified these as their top hurdles).

These hurdles stem in part from the fact that over time, many enterprises have invested in a variety of point technology solutions, resulting in data that’s locked in a variety of silos across the organization, making it difficult and often costly for developers to access the data they need to build applications.

These particular challenges, however, don’t rank as highly for data leaders. And the reason why has a lot to do with the progress these organizations have made in building real-time data architectures. The top issue for the most accomplished real-time data organizations is the availability of the necessary skills in their business units to leverage real-time data. The largest percentage (35%) identify this as their leading challenge.

This finding reflects the trail data leaders have blazed with their real-time data architectures. They’ve built the technology foundation to successfully leverage real-time data, and now they’re working on the next step. They understand that the right place to focus on building data products – and the revenue they generate – isn’t IT; it’s the business units.

Everyone can get real

The findings in The State of the Data Race 2022 reinforce the fact that the value of apps that use data to drive smart actions in real time is tangible and measurable in dollars and cents. The technology is ready, waiting, and openly available for organizations of all sizes. At DataStax we’ve worked to remove the barriers to mobilizing real-time data by offering an open data stack that enables enterprises to quickly build the smart, highly scalable applications required to be a data-driven business.

An organization’s commitment, imagination, and drive are all that’s needed to join the ranks of companies that leverage real-time data for revenue growth and productivity.

Download a free copy of The State of the Data Race 2022 today.


About Thomas Been:

Thomas leads marketing at DataStax. He has helped businesses transform with data and technology for more than 20 years, in sales and marketing leadership roles at Sybase (acquired by SAP), TIBCO, and Druva.
