This article was co-authored by Chris Boyd, a Senior Associate at Metis Strategy.

Today’s CIOs lead innovation efforts aimed at increasing revenue, accessing new markets, and growing product lines. However, according to Foundry’s 2022 State of the CIO Survey, 76% of CIOs say it’s challenging to find the right balance between business innovation and operational excellence. If CIOs can’t master operational excellence – “keeping the lights on” – they lose credibility with their peers and run the risk that the spotty Wi-Fi in the executive conference room will overshadow new innovations.

To strike the right balance, many technology leaders have adopted a Service Quality Index (SQI) to consistently measure and communicate the quality of “keep the lights on” work so their peers can quickly confirm the basics are covered and endorse innovation initiatives without pause.

 A CIO Service Quality Index (SQI) defines the key operational capabilities a CIO is responsible for delivering, the relative weight of the capability through the eyes of the customers, and the corresponding metrics that determine quality. In practice, it works like an equity index such as the S&P 500, summarizing the holistic performance of the CIO’s operational capabilities (e.g., the underlying stocks) with a single number, providing a basis for macro trend analysis (e.g., period-over-period performance), and allowing stakeholders to double-click into specific capabilities (e.g., sectors) for root cause analysis.

The customer-driven origins of SQI

FedEx built a SQI to better understand the level of quality being delivered to customers. The company began by assessing its customer experience, which revealed 71 critical data points that strongly influenced the quality of service delivered, such as lost or damaged packages, missed pickups, overcharges, delivery delays, complaints, or unanswered customer requests. Each of the 71 data points was weighted to reflect its relative importance to the customer based on the assessment.

The beauty of FedEx’s SQI is in its simplicity. In the dashboard, the company summarized the performance of all 71 data points with a single grade on a 0-100 scale. Once the SQI was published, FedEx executives could monitor the dashboard to identify trends and understand how well the company was serving its customers. Leaders also used the SQI to prioritize their work and increase focus on initiatives that made the biggest impact on customer experience. With a common set of data points to rally around, teams across the organization spun up projects aimed at “moving the needle” on one or many of the dimensions on the dashboard. FedEx’s SQI also helped ensure priorities were clearly communicated across the organization, and today it plays a major role in further improving customer experience and enabling the company’s growth.

Building a CIO Service Quality Index

Chances are you are among the majority of CIOs struggling to balance innovation and operational excellence. Or perhaps you are looking for a mechanism to show how aging infrastructure affects service quality in order to justify a new investment. In either case, consider constructing a SQI using the step-by-step process below as a smart first step.

1. Identify and weigh the dimensions of service quality

Just like FedEx examined its customer experience, start by assessing how various personas interact with IT. Identify the key operational capabilities your organization provides that visibly impact your constituents. A multi-billion-dollar industrials client, for example, chose to focus on employee help desk, network performance, data quality, platform availability, and issue resolution, since poor performance in any of these areas creates negative customer sentiment and headwinds for the transformation agenda.

2. Weigh dimensions of service quality, identify metrics, and get buy-in

Determine the relative weight for each dimension of service quality based on its impact on the personas you identified. Work with leaders in your organization to understand which metrics you are tracking today, determine which of them will be useful for measuring quality dimensions, and identify where new metrics may be needed. For example, you may determine that 30% of your overall SQI will be driven by the Issue Resolution quality metrics. For that dimension, you can use total ticket volume, average time to resolution, and business impact of critical issues to represent 20%, 20%, and 60% of the quality dimension, respectively (a minimal calculation sketch follows these steps). Share the first draft of dimensions and metrics with your constituents, and make sure your weighting scheme appropriately reflects the views of the stakeholders who will be responsible for endorsing and using the dashboard for decision analysis.

3. Build the MVP SQI and the “prospectus”

Build your MVP on a spreadsheet, knowing there will be frequent changes in the early innings. If you want your index to have staying power, provide transparency into the mechanics. Consider creating a SQI “prospectus” that, like an equity index prospectus, allows stakeholders (investors) to understand the quality dimensions (underlying stocks in the index), how quality metrics are calculated, sourced, and weighted (index performance), who is measuring and calculating quality (fund managers), and how frequently performance will be updated (monthly, quarterly, etc.).

4. Publish, iterate, integrate, and automate

Share the MVP and prospectus and provide ample opportunities for feedback and Q&A. Explain the purpose of the SQI and how it can be used as a prioritization mechanism as your organization thinks about capacity planning or where AI/ML might be used to drive quality improvements. Be open to feedback on quality measures and calculations, and update the index using an iterative approach. Once you have achieved alignment, identify opportunities to integrate the SQI into other communication channels. Steering committees, existing operational reports, and department meetings are good starting points. One client decided to display its IT Service Quality Index, updated monthly, on large monitors that adorned the walls of the IT department to keep the team focused on quality. You can explore automation solutions for updating the index once you reach a steady state and changes become less frequent.
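
To make the weighting math concrete, here is a minimal sketch of the index calculation in Python. Only the Issue Resolution weights come from the example above; the metric scores, the second dimension, and the assumption that every metric is normalized to a 0-100 scale before rolling up are illustrative placeholders, not values prescribed by any particular SQI.

```python
# Minimal SQI sketch: each dimension has a weight in the overall index,
# and each metric has a weight within its dimension. Metric scores are
# assumed to be normalized to a 0-100 scale before they are combined.

SQI_MODEL = {
    "Issue Resolution": {            # 30% of the overall index, per the example above
        "weight": 0.30,
        "metrics": {
            "total_ticket_volume":          {"weight": 0.20, "score": 82},  # hypothetical score
            "avg_time_to_resolution":       {"weight": 0.20, "score": 74},  # hypothetical score
            "business_impact_of_critical":  {"weight": 0.60, "score": 91},  # hypothetical score
        },
    },
    "Platform Availability": {       # remaining 70% shown as one dimension for brevity
        "weight": 0.70,
        "metrics": {
            "uptime_vs_sla": {"weight": 1.00, "score": 96},  # hypothetical score
        },
    },
}


def dimension_score(dimension: dict) -> float:
    """Weighted average of a dimension's normalized metric scores."""
    return sum(m["weight"] * m["score"] for m in dimension["metrics"].values())


def sqi(model: dict) -> float:
    """Overall index: weighted average of dimension scores, summarized as a single 0-100 number."""
    return sum(d["weight"] * dimension_score(d) for d in model.values())


if __name__ == "__main__":
    for name, dim in SQI_MODEL.items():
        print(f"{name}: {dimension_score(dim):.1f}")
    print(f"Overall SQI: {sqi(SQI_MODEL):.1f}")
```

Because the overall number is just a weighted roll-up of normalized dimension scores, stakeholders can trend it period over period and drill into whichever dimension drags the index down.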

Bringing it together

Building a Service Quality Index will not make operational issues vanish. However, it will give you a tool that simplifies performance measurement and allows you to be surgical about remediation plans. Whether you are in the early or late innings of mastering operational excellence, the SQI can give you some breathing room and begin shifting the conversation from keeping the lights on to turning on new lights.


A vast majority of enterprises surveyed globally are overspending in the cloud, according to a new HashiCorp-Forrester report.

In a survey of more than 1,000 IT decision-makers across North America, Europe, the Middle East, and Asia-Pacific, 94% of respondents said their organizations had notable, avoidable cloud expenses due to a combination of factors, including underused and overprovisioned resources and a lack of skills to utilize cloud infrastructure.

Underused resources were the top reason for overspending, the report showed, cited by more than 66% of respondents, followed by overprovisioned resources (59%) and a lack of needed skills (47%).

Another 37% of respondents also listed manual containerization as a contributor to overspending in the cloud.

Nearly 60% of respondents said they are already using multicloud infrastructures, and an additional 21% said they will move to such an architecture within the next 12 months.

Multicloud infrastructure works for most enterprises

Further, the report said that 90% of respondents claimed a multicloud strategy is working for their enterprises. This contrasts with just 53% of respondents in last year’s survey claiming that such a strategy was working for them.

Reliability was the major driver of multicloud adoption this year, with 46% of respondents citing it as the top reason for adopting the computing architecture. Digital transformation came in second place this year, cited by 43% of respondents as the main driver for the move to multicloud, slipping from first place last year.

Other factors driving multicloud adoption this year included scalability, security and governance, and cost reduction.

Almost 86% of respondents claimed they are dependent on cloud operations and strategy teams, which perform critical tasks such as standardizing cloud services, creating and sharing best practices and policies, and centralizing cloud security and compliance.

Skill shortages were the top barrier to multicloud adoption, cited by 41% of respondents. Other barriers included teams working in silos, compliance, risk management, and lack of training.

Additionally, almost 99% of respondents said that infrastructure automation is important for multicloud operations as it can provide benefits such as faster, reliable self-service IT infrastructure, better security, and better utilization of cloud resources, along with faster incident response.

Eighty-nine percent of respondents said they see security as a key driver for multicloud success, with nearly 88% of respondents claiming they already relied on security automation tools. Another 83% of respondents said they already use some form of infrastructure as code and network infrastructure automation, according to the report.


At the Laboratory for Machine Tools and Production Engineering (WZL) of RWTH Aachen University, scientists, mathematicians, and software developers conduct manufacturing research, working together to gain new insights from machine, product, and manufacturing data. Manufacturers partner with the team at WZL to refine solutions before putting them into production in their own factories. 

Recently, WZL has been looking for ways to help manufacturers analyze changes in processes, monitor output and process quality, and then adjust in real time. Processing data at the point of inception, or the edge, would allow them to modify processes as required while managing large data volumes and IT infrastructure at scale.

Connected devices generate huge volumes of data

According to IDC, the amount of digital data worldwide will grow by 23% through 2025, driven in large part by the rising number of connected devices. Juniper Research found that the total number of IoT connections will reach 83 billion by 2024. This represents a projected 130% growth rate from 35 billion connections in 2020.

WZL is no stranger to this rise in data volume. As part of its manufacturing processes, fine blanking incubators generate massive amounts of data that must be recorded at the source and processed extremely quickly. Its specialized sensors for vibrations, acoustics, and other manufacturing conditions can generate more than 1 million data points per second.

Traditionally, WZL’s engineers have processed small batches of this data in the data center. But this method could take days or weeks to yield insights. They wanted a solution that would enable them to implement and use extremely low-latency streaming models to garner insights in real time without much in-house development.

Data-driven automation at the edge 

WZL implemented a platform that could ingest, store, and analyze its continuously streaming data as it was created. The system gives organizations access to a single solution for all their data (whether streaming or not), providing out-of-the-box functionality and support for high-speed data ingestion with an open-source, auto-scaling streaming storage solution.

Now, up to 1,000 characteristic values are recorded every 0.4 milliseconds – nearly 80TB of data every 24 hours. This data is immediately stored and pre-analyzed in real time at the edge on powerful compact servers, enabling further evaluation using artificial intelligence and machine learning. These analyses leverage huge amounts of streaming image, X-ray, and IoT data to detect and predict abnormalities throughout the metal stamping process.
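
As a minimal sketch of the general edge pre-analysis pattern, assuming a simple rolling-window outlier check on a simulated sensor stream rather than the actual WZL pipeline, the idea looks roughly like this in Python:

```python
import random
from collections import deque
from statistics import mean, stdev

# Sketch of edge pre-analysis on a high-rate sensor stream: keep a rolling
# window of recent readings and flag outliers immediately, instead of
# shipping every raw reading to a data center for batch processing.

WINDOW = 1000          # recent samples kept in memory at the edge
THRESHOLD_SIGMA = 4.0  # flag readings more than 4 standard deviations out


def sensor_stream(n_samples: int):
    """Simulated vibration readings; a real system would read from sensor hardware."""
    for _ in range(n_samples):
        yield random.gauss(0.0, 1.0)


def detect_anomalies(stream):
    window = deque(maxlen=WINDOW)
    for reading in stream:
        if len(window) == WINDOW:
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(reading - mu) > THRESHOLD_SIGMA * sigma:
                yield reading  # would trigger an adjustment or alert in production
        window.append(reading)


if __name__ == "__main__":
    anomalies = list(detect_anomalies(sensor_stream(50_000)))
    print(f"flagged {len(anomalies)} anomalous readings")
```

A production system would run this kind of check on the edge servers and forward only flagged events and aggregates for deeper AI/ML evaluation, rather than moving every raw reading off the factory floor.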

The WZL team found that once the system was implemented, it could be scaled without constraint. “No matter how many sensors we use, once we set up the analytics pipeline and the data streams, we don’t have to address any load-balancing issues,” said Philipp Niemietz, Head of Digital Technologies at WZL. 

With conditions like speed and temperature under constant AI supervision, the machinery is now able to adjust itself automatically to prevent interruptions. By monitoring the machines in this way, WZL has also enhanced its predictive maintenance capabilities. Learn more about how you can leverage Dell Technologies edge solutions.

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.


Enterprises driving toward data-first modernization need to determine the optimal multicloud strategy, starting with which applications and data are best suited to migrate to cloud and what should remain in the core and at the edge.

A hybrid approach is clearly established as the operating model of choice. A Flexera report found overwhelming support for the shift to hybrid infrastructure, with 89% of survey respondents opting for a multicloud strategy and 80% taking a hybrid approach that combines public and private clouds.

The shift toward hybrid IT has clear upsides, enabling organizations to choose the right solution for each task and workload, depending on criteria such as performance, security, compliance, and cost, among other factors. The challenge is that CIOs must apply a rigorous process and holistic assessment to determine the optimal data modernization strategy, given that there is no one-size-fits-all answer.

Many organizations set out on the modernization journey guided by the premise that cloud-first or cloud-only is the ultimate destination, only to find that the path is not appropriate for all data and workloads. “Directionally correct CIOs and the C-suite looked at the public cloud and liked the operating model: the pay-as-you-go, predefined services, the automation and orchestration, and the partner ecosystem all available to you,” says Rocco Lavista, worldwide vice president for HPE GreenLake sales and go-to-market. “Many tried to move their whole estate into public cloud, and what they found is that that doesn’t work for everything. It’s less about what application and data should go on public cloud and more about a continuum from the edge to core [in colocated or private data centers] to public cloud.”

Close to the Edge

There are several reasons why certain data and workloads need to remain at the edge, as opposed to transitioning to public cloud. Data gravity is perhaps the most significant arbiter of where to deploy workloads, particularly when there is a need to analyze massive amounts of data quickly — for example, with X-ray or MRI machines in a hospital setting, for quality assurance data from a manufacturing line, and even with data collected at point-of-sale systems in a retail setting.

Artificial intelligence (AI) projects are another useful example. “Where I’ve seen AI projects fail is in trying to bring the massive amounts of data from where it’s created to the training model [in some public cloud] and get timely insights, versus taking the model and bringing it closer to where the data is created,” Lavista explains. “Here, there is a synergistic need between what is happening at the edge and the processing power required in real time to facilitate your business objectives.”

Application entanglement presents another barrier keeping organizations from migrating some applications and data to cloud. Some legacy applications have been architected in a way that doesn’t allow pieces of functionality and data to be migrated to cloud easily; in other cases, making a wholesale migration is out of the question, for reasons related to cost and complexity. There are also workloads that don’t make economic sense to refactor from operating in a fixed environment to a variable cost-based architecture and others with specific regulatory or industry obligations tied to data sovereignty or privacy that prevent a holistic migration strategy in embrace of public cloud.

The HPE GreenLake Advantage

Given the importance of the edge in the data modernization strategy, HPE seeks to remove any uncertainty regarding where to deploy applications and data. The HPE GreenLake edge-to-cloud platform brings the desired cloud-based operating model and platform experience, but with consistent and secure data governance practices, starting at the edge and running all the way to public cloud. This can be applied across any industry — such as retail, banking, manufacturing, or healthcare — and regardless of where the workload resides.

HPE GreenLake with the managed service offering is inclusive of all public clouds, ensuring a consistent experience whether data and applications are deployed on AWS, Microsoft Azure, or Google Cloud Platform as part of a hybrid mix that encompasses cloud in concert with on-premises infrastructure in an internal data center or colocation facility.

“IT teams want a unified solution they can use to manage all technology needs, from infrastructure as a service (IaaS) to platform as a service (PaaS) and container as a service (CaaS), that drive automation and orchestration that are not snowflakes,” says Lavista. “HPE GreenLake provides that standard operating model from edge to core and all the way through to the public cloud.”

By aligning with HPE GreenLake solutions, IT organizations also free themselves of the day-to-day operations of running infrastructure to focus on delivering core capabilities for business users as well as DevOps teams. The HPE GreenLake team works with organizations to assess which workloads are a better fit for cloud or edge, by evaluating a variety of factors, including technical complexity, system dependencies, service-level agreement (SLA) requirements, and latency demands. For example, a quality control system on a manufacturing line might be better suited for an edge solution, due to the need to analyze data in volume and in near real time. But an AI application that could benefit from a facial recognition service might be better served by public cloud, given the broad ecosystem of available third-party services that eliminate the need to reinvent the wheel for every innovation.
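
Purely as an illustration of how factors like these could be weighed, here is a minimal sketch of a hypothetical placement heuristic. The factor names, weights, and thresholds are assumptions made for illustration; they are not part of HPE GreenLake or its assessment methodology.

```python
# Hypothetical workload-placement heuristic: rate each factor on a 1-5 scale,
# where a higher rating means the workload leans toward edge/core rather than
# public cloud, then combine the ratings with illustrative weights.

EDGE_LEANING_FACTORS = {
    "latency_sensitivity": 0.30,
    "data_gravity": 0.25,
    "system_dependencies": 0.20,
    "sla_strictness": 0.15,
    "refactoring_complexity": 0.10,
}


def placement_score(ratings: dict) -> float:
    """Weighted 1-5 score; higher suggests edge/core, lower suggests public cloud."""
    return sum(EDGE_LEANING_FACTORS[f] * ratings[f] for f in EDGE_LEANING_FACTORS)


def recommend(ratings: dict) -> str:
    score = placement_score(ratings)
    if score >= 3.5:
        return f"edge/core candidate (score {score:.2f})"
    if score <= 2.5:
        return f"public cloud candidate (score {score:.2f})"
    return f"needs deeper assessment (score {score:.2f})"


if __name__ == "__main__":
    quality_control_line = {"latency_sensitivity": 5, "data_gravity": 5,
                            "system_dependencies": 3, "sla_strictness": 4,
                            "refactoring_complexity": 2}
    facial_recognition_app = {"latency_sensitivity": 2, "data_gravity": 2,
                              "system_dependencies": 2, "sla_strictness": 2,
                              "refactoring_complexity": 1}
    print("manufacturing QC:", recommend(quality_control_line))
    print("facial recognition:", recommend(facial_recognition_app))
```

In this sketch, the manufacturing quality-control workload scores toward edge/core and the facial-recognition workload toward public cloud, mirroring the examples in the paragraph above.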

To ensure top performance, Lavista counsels companies to fully understand their core business objectives and to be pragmatic about their cloud migration goals so they avoid the trap of moving data and workloads simply because it’s the latest technology trend. “Understand your options based on where you are coming from,” he says. “If what you are looking for is to optimize the IT operating model, you can still get that without moving applications and data.”

For more information, visit https://www.hpe.com/us/en/greenlake/services.html.


Data visualization definition

Data visualization is the presentation of data in a graphical format such as a plot, graph, or map to make it easier for decision makers to see and understand trends, outliers, and patterns in data.

Maps and charts were among the earliest forms of data visualization. One of the most well-known early examples was a flow map created by French civil engineer Charles Joseph Minard in 1869 to help understand the losses Napoleon’s troops suffered in the disastrous Russian campaign of 1812. The map depicted, in two dimensions, the number of troops, distance, temperature, latitude and longitude, direction of travel, and location relative to specific dates.

Today, data visualization encompasses all manner of presenting data visually, from dashboards to reports, statistical graphs, heat maps, plots, infographics, and more.

What is the business value of data visualization?

Data visualization helps people analyze data, especially large volumes of data, quickly and efficiently.

By providing easy-to-understand visual representations of data, it helps employees make more informed decisions based on that data. Presenting data in visual form can make it easier to comprehend and enable people to obtain insights more quickly. Visualizations can also make it easier to communicate those insights and to see how independent variables relate to one another. This can help you see trends, understand the frequency of events, and track connections between operations and performance, for example.

Key data visualization benefits include:

Unlocking the value of big data by enabling people to absorb vast amounts of data at a glance
Increasing the speed of decision-making by providing access to real-time and on-demand information
Identifying errors and inaccuracies in data quickly

What are the types of data visualization?

There are myriad ways of visualizing data, but data design agency The Datalabs Agency breaks data visualization into two basic categories:

Exploration: Exploration visualizations help you understand what the data is telling you.
Explanation: Explanation visualizations tell a story to an audience using data.

It is essential to understand which of those two ends a given visualization is intended to achieve. The Data Visualisation Catalogue, a project developed by freelance designer Severino Ribecca, is a library of different information visualization types.

Some of the most common specific types of visualizations include:

2D area: These are typically geospatial visualizations. For example, cartograms use distortions of maps to convey information such as population or travel time. Choropleths use shades or patterns on a map to represent a statistical variable, such as population density by state.

Temporal: These are one-dimensional linear visualizations that have a start and finish time. Examples include a time series, which presents data like website visits by day or month, and Gantt charts, which illustrate project schedules.

Multidimensional: These common visualizations present data with two or more dimensions. Examples include pie charts, histograms, and scatter plots.

Hierarchical: These visualizations show how groups relate to one another. Tree diagrams are an example of a hierarchical visualization that shows how larger groups encompass sets of smaller groups.

Network: Network visualizations show how data sets are related to one another in a network. An example is a node-link diagram, also known as a network graph, which uses nodes and link lines to show how things are interconnected.
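
To make a couple of these types concrete, here is a minimal sketch using Python and matplotlib (assuming it is installed). The visit and sales figures are made-up values used purely for illustration.

```python
import matplotlib.pyplot as plt

# Temporal: a one-dimensional time series, e.g. website visits by day.
days = list(range(1, 11))
visits = [120, 135, 150, 160, 155, 170, 180, 210, 205, 220]  # made-up values

# Multidimensional: a scatter plot relating two variables, e.g. ad spend vs. sales.
ad_spend = [1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]  # made-up values, in $K
sales = [10, 13, 15, 18, 21, 22, 26]            # made-up values, in $K

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(days, visits, marker="o")
ax1.set_title("Temporal: visits by day")
ax1.set_xlabel("Day")
ax1.set_ylabel("Visits")

ax2.scatter(ad_spend, sales)
ax2.set_title("Multidimensional: ad spend vs. sales")
ax2.set_xlabel("Ad spend ($K)")
ax2.set_ylabel("Sales ($K)")

fig.tight_layout()
plt.show()
```

The same data could just as easily be rendered as a histogram, heat map, or network graph; the point is to choose the form that matches whether you are exploring the data or explaining it.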

What are some data visualization examples?

Tableau has collected what it considers to be 10 of the best data visualization examples. Number one on Tableau’s list is Minard’s map of Napoleon’s march to Moscow, mentioned above. Other prominent examples include:

A dot map created by English physician John Snow in 1854 to understand the cholera outbreak in London that year. The map used bar graphs on city blocks to indicate cholera deaths at each household in a London neighborhood. The map showed that the worst-affected households were all drawing water from the same well, which eventually led to the insight that wells contaminated by sewage had caused the outbreak.
An animated age and gender demographic breakdown pyramid created by Pew Research Center as part of its The Next America project, published in 2014. The project is filled with innovative data visualizations. This one shows how population demographics have shifted since the 1950s, from a pyramid with many young people at the bottom and very few older people at the top to a more rectangular shape projected for 2060.
A collection of four visualizations by Hanah Anderson and Matt Daniels of The Pudding that illustrate gender disparity in pop culture by breaking down the scripts of 2,000 movies and tallying spoken lines of dialogue for male and female characters. The visualizations include a breakdown of Disney movies, the overview of 2,000 scripts, a gradient bar with which users can search for specific movies, and a representation of age biases shown toward male and female roles.

Data visualization tools

Data visualization software encompasses many applications, tools, and scripts that give designers what they need to create visual representations of large data sets. Some of the most popular include the following:

Domo: Domo is a cloud software company that specializes in business intelligence tools and data visualization. It focuses on business-user deployed dashboards and ease of use, making it a good choice for small businesses seeking to create custom apps.

Dundas BI: Dundas BI is a BI platform for visualizing data, building and sharing dashboards and reports, and embedding analytics.

Infogram: Infogram is a drag-and-drop visualization tool for creating visualizations for marketing reports, infographics, social media posts, dashboards, and more. Its ease-of-use makes it a good option for non-designers as well.

Klipfolio: Klipfolio is designed to enable users to access and combine data from hundreds of services without writing any code. It leverages pre-built, curated instant metrics and a powerful data modeler, making it a good tool for building custom dashboards.

Looker: Now part of Google Cloud, Looker has a plug-in marketplace with a directory of different types of visualizations and pre-made analytical blocks. It also features a drag-and-drop interface.

Microsoft Power BI: Microsoft Power BI is a business intelligence platform integrated with Microsoft Office. It has an easy-to-use interface for making dashboards and reports. It’s very similar to Excel so Excel skills transfer well. It also has a mobile app.

Qlik: Qlik’s Qlik Sense features an “associative” data engine for investigating data and AI-powered recommendations for visualizations. It is continuing to build out its open architecture and multicloud capabilities.

Sisense: Sisense is an end-to-end analytics platform best known for embedded analytics. Many customers use it in an OEM form.

Tableau: One of the most popular data visualization platforms on the market, Tableau is a platform that supports accessing, preparing, analyzing, and presenting data. It’s available in a variety of options, including a desktop app, server, and hosted online versions, and a free, public version. Tableau has a steep learning curve but is excellent for creating interactive charts.

Data visualization certifications

Data visualization skills are in high demand. Individuals with the right mix of experience and skills can demand high salaries. Certifications can help.

Some of the popular certifications include the following:

Data Visualization Nanodegree (Udacity)
Professional Certificate in IBM Data Science (IBM)
Data Visualization with Python (DataCamp)
Data Analysis and Visualization with Power BI (Udacity)
Data Visualization with R (Dataquest)
Visualize Data with Python (Codecademy)
Professional Certificate in Data Analytics and Visualization with Excel and R (IBM)
Data Visualization with Tableau Specialization (UCDavis)
Data Visualization with R (DataCamp)
Excel Skills for Data Analytics and Visualization Specialization (Macquarie University)

Data visualization jobs and salaries

Here are some of the most popular job titles related to data visualization and the average salary for each position, according to data from PayScale.

Data analyst: $64K
Data scientist: $98K
Data visualization specialist: $76K
Senior data analyst: $88K
Senior data scientist: $112K
BI analyst: $65K
Analytics specialist: $71K
Marketing data analyst: $61K