1. What is business analytics?

Business analytics is the practical application of statistical analysis and technologies on business data to identify and anticipate trends and predict business outcomes. Research firm Gartner defines business analytics as “solutions used to build analysis models and simulations to create scenarios, understand realities, and predict future states.”

While quantitative analysis, operational analysis, and data visualizations are key components of business analytics, the goal is to use the insights gained to shape business decisions. The discipline is a key facet of the business analyst role.

Wake Forest University School of Business notes that key business analytics activities include:

Identifying new patterns and relationships with data mining
Using quantitative and statistical analysis to design business models
Conducting A/B and multivariable testing based on findings
Forecasting future business needs, performance, and industry trends with predictive modeling
Communicating findings to colleagues, management, and customers

2. What are the benefits of business analytics?

Business analytics can help you improve operational efficiency, better understand your customers, project future outcomes, glean insights to aid in decision-making, measure performance, drive growth, discover hidden trends, generate leads, and scale your business in the right direction, according to digital skills training company Simplilearn.

3. What is the difference between business analytics and data analytics?

Business analytics is a subset of data analytics. Data analytics is used across disciplines to find trends and solve problems using data mining, data cleansing, data transformation, data modeling, and more. Business analytics also involves data mining, statistical analysis, predictive modeling, and the like, but is focused on driving better business decisions.

4. What is the difference between business analytics and business intelligence?

Business analytics and business intelligence (BI) serve similar purposes and are often used as interchangeable terms, but BI can be considered a subset of business analytics. BI focuses on descriptive analytics, data collection, data storage, knowledge management, and data analysis to evaluate past business data and better understand currently known information. Whereas BI studies historical data to guide business decision-making, business analytics is about looking forward. It uses data mining, data modeling, and machine learning to answer “why” something happened and predict what might happen in the future.

Business analytics techniques

According to Harvard Business School Online, there are three primary types of business analytics:

Descriptive analytics: What is happening in your business right now? Descriptive analytics uses historical and current data to describe the organization’s present state by identifying trends and patterns. This is the purview of BI.
Predictive analytics: What is likely to happen in the future? Predictive analytics is the use of techniques such as statistical modeling, forecasting, and machine learning to make predictions about future outcomes (see the short sketch at the end of this section).
Prescriptive analytics: What do we need to do? Prescriptive analytics is the application of testing and other techniques to recommend specific solutions that will deliver desired business outcomes.

Simplilearn adds a fourth technique:

Diagnostic analytics: Why is it happening? Diagnostic analytics uses analytics techniques to discover the factors or reasons for past or current performance.
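
To make the distinction concrete, here is a minimal sketch in Python (using pandas and scikit-learn) of the first two techniques applied to an invented monthly sales table. The data, column names, and figures are illustrative only and are not drawn from any of the sources cited above.

```python
# Illustrative sketch only: descriptive vs. predictive analytics on an
# invented monthly sales dataset. Column names and figures are made up.
import pandas as pd
from sklearn.linear_model import LinearRegression

sales = pd.DataFrame({
    "month": range(1, 13),
    "region": ["north", "south"] * 6,
    "revenue": [110, 95, 118, 99, 125, 102, 131, 108, 140, 112, 149, 118],
})

# Descriptive analytics: what is happening right now? Summarize the current state.
print(sales.groupby("region")["revenue"].agg(["mean", "sum"]))

# Predictive analytics: what is likely to happen? Fit a trend and project month 13.
model = LinearRegression().fit(sales[["month"]], sales["revenue"])
next_month = pd.DataFrame({"month": [13]})
print("Projected revenue for month 13:", round(float(model.predict(next_month)[0]), 1))

# Prescriptive (and diagnostic) analytics would go further: testing candidate
# actions against the forecast, or drilling into why the trend looks this way.
```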

Examples of business analytics

San Jose Sharks build fan engagement

In 2019, the San Jose Sharks began integrating operational data, marketing systems, and ticket sales with front-end, fan-facing experiences and promotions, enabling the NHL team to capture and quantify the needs and preferences of its fan segments: season ticket holders, occasional visitors, and newcomers. The team uses the insights to power targeted marketing campaigns based on actual purchasing behavior and experience data. When implementing the system, Neda Tabatabaie, vice president of business analytics and technology for the San Jose Sharks, said she anticipated a 12% increase in ticket revenue, a 20% reduction in season ticket holder churn, and a 7% increase in campaign effectiveness (measured in click-throughs).

GSK finds inventory reduction opportunities

As part of a program designed to accelerate its use of enterprise data and analytics, pharmaceutical titan GlaxoSmithKline (GSK) designed a set of analytics tools focused on inventory reduction opportunities across the company’s supply chain. The suite of tools included a digital value stream map, safety stock optimizer, inventory corridor report, and planning cockpit.

Shankar Jegasothy, director of supply chain analytics at GSK, says the tools helped GSK gain better visibility into its end-to-end supply chain and then use predictive and prescriptive analytics to guide decisions around inventory and planning.

Kaiser Permanente streamlines operations

Healthcare consortium Kaiser Permanente uses analytics to reduce patient waiting times and the amount of time hospital leaders spend manually preparing data for operational activities.

In 2018, the consortium’s IT function launched Operations Watch List (OWL), a mobile app that provides a comprehensive, near real-time view of key hospital quality, safety, and throughput metrics (including hospital census, bed demand and availability, and patient discharges).

In its first year, OWL reduced patient wait time for admission to the emergency department by an average of 27 minutes per patient. Surveys also showed hospital managers reduced the amount of time they spent manually preparing data for operational activities by an average of 323 minutes per month.

Business analytics tools

Business analytics professionals need to be fluent in a variety of tools and programming languages. According to the Harvard Business Analytics program, the top tools for business analytics professionals are:

SQL: SQL is the lingua franca of data analysis. Business analytics professionals use SQL queries to extract and analyze data from transaction databases and to develop visualizations (see the sketch below).
Statistical languages: Business analytics professionals frequently use R for statistical analysis and Python for general programming.
Statistical software: Business analytics professionals frequently use software including SPSS, SAS, Sage, Mathematica, and Excel to manage and analyze data.
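
As a concrete illustration of the first item, here is a minimal sketch of running an analyst-style SQL query from Python using the standard-library sqlite3 module. The table, columns, and figures are hypothetical.

```python
# Minimal sketch: using SQL from Python to pull and summarize transaction data.
# The table and column names here are hypothetical, not from the article.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE transactions (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)
conn.executemany(
    "INSERT INTO transactions (region, amount) VALUES (?, ?)",
    [("north", 120.0), ("north", 80.5), ("south", 200.0), ("south", 45.25)],
)

# A typical analyst query: aggregate revenue by region, largest first.
rows = conn.execute(
    """
    SELECT region, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM transactions
    GROUP BY region
    ORDER BY revenue DESC
    """
).fetchall()

for region, orders, revenue in rows:
    print(f"{region}: {orders} orders, {revenue:.2f} in revenue")
conn.close()
```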

Business analytics dashboard components

According to analytics platform company OmniSci, the main components of a typical business analytics dashboard include:

Data aggregation: Before it can be analyzed, data must be gathered, organized, and filtered.
Data mining: Data mining sorts through large datasets using databases, statistics, and machine learning to identify trends and establish relationships.
Association and sequence identification: Predictable actions that are performed in association with other actions, or sequentially, must be identified.
Text mining: Text mining is used to explore and organize large, unstructured datasets for qualitative and quantitative analysis.
Forecasting: Forecasting analyzes historical data from a specific period to make informed estimates predictive of future events or behaviors (see the sketch after this list).
Predictive analytics: Predictive business analytics uses a variety of statistical techniques to create predictive models that extract information from datasets, identify patterns, and provide a predictive score for an array of organizational outcomes.
Optimization: Once trends have been identified and predictions made, simulation techniques can be used to test best-case scenarios.
Data visualization: Data visualization provides visual representations such as charts and graphs for easy and quick data analysis.
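
As a rough illustration of two of these components, the sketch below (Python with pandas) rolls daily records up into weekly aggregates and produces a naive trailing-average forecast that a dashboard tile might display. The dataset is synthetic and the method deliberately simple.

```python
# Minimal sketch: two dashboard building blocks from the list above —
# data aggregation (daily records rolled up to weekly) and a naive forecast
# (trailing moving average). Input data is synthetic and illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
daily = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=90, freq="D"),
    "orders": rng.poisson(lam=40, size=90),
})

# Data aggregation: gather and organize raw records before analysis.
weekly = daily.resample("W", on="date")["orders"].sum()

# Forecasting: estimate the next week from the trailing four-week average.
forecast_next_week = weekly.tail(4).mean()
print(weekly.tail(3))
print("Naive forecast for next week:", round(forecast_next_week, 1))
```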

Business analytics salaries

Here are some of the most popular job titles related to business analytics and the average salary for each position, according to data from PayScale:

Analytics manager: $71K-$132K
Business analyst: $48K-$84K
Business analyst, IT: $51K-$100K
Business intelligence analyst: $52K-$98K
Data analyst: $46K-$88K
Market research analyst: $42K-$77K
Quantitative analyst: $61K-$131K
Research analyst, operations: $47K-$115K
Senior business analyst: $65K-$117K
Statistician: $56K-$120K

The logical progression from the virtualization of servers and storage in vSANs was hyperconvergence. By abstracting the three elements of storage, compute, and networking, hyperconverged systems promised data centers limitless infrastructure control. That promise was in keeping with the aims of hyperscale operators that needed to grow to meet increased demand and had to modernize their infrastructure to stay agile. Hyperconverged infrastructure (HCI) offered elasticity and scalability on a per-use basis for multiple clients, each of whom could deploy multiple applications and services.

There are clear caveats in the HCI world: limitless control is all well and good, but infrastructure details such as a lack of local storage or slow networking hardware restricting I/O will always define the hard limits on what is possible. Furthermore, some strictures imposed by HCI vendors limit the choice of hypervisor or constrain hardware choices to approved kit. Worries about vendor lock-in also surround the black-box nature of HCI-in-a-box appliances.

The elephant in the room for hyperconverged infrastructures is indubitably cloud. It’s something of a cliché in the technology landscape to mention the speed at which tech develops, but cloud-native technologies like Kubernetes are showing their capabilities and future potential in the cloud, the data center, and at the edge. The concept of HCI was presented first and foremost as a data center technology. It was clearly the sole remit, at the time, of the very large organization with its own facilities. Those facilities are effectively closed loops with limits created by physical resources.

Today, cloud facilities are available from hyperscalers at attractive prices to a much broader market. The market for HCI solutions is forecast to grow significantly over the next few years, with year-on-year growth of just under 30%. Vendors are selling cheap(er) appliances and lower license tiers to try to mop up the midmarket, and hyperconvergence technologies are beginning to work with hybrid and multi-cloud topologies. The latter trend is demand-led. After all, if an IT team wants to consolidate its stack for efficiency and easy management, any consolidation must be all-encompassing and include local hardware, containers, multiple clouds, and edge installations. That ability also implies inherent elasticity and, by extension, a degree of future-proofing baked in.

The cloud-native technologies around containers are well beyond flash-in-the-pan status. The CNCF (Cloud Native Computing Foundation) Annual Survey for 2021 shows that containers and Kubernetes have gone mainstream: 96% of organizations are either using or evaluating Kubernetes, and 93% of respondents are currently using, or planning to use, containers in production. Portable, scalable, and platform-agnostic, containers are the natural next evolution in virtualization, and CI/CD workflows increasingly have microservices at their core.

So, what of hyperconvergence in these evolving computing environments? How can HCI solutions handle modern cloud-native workloads alongside full-blown virtual machines (VMs) across a distributed infrastructure? It can be done with “traditional” hyperconvergence, but the solution will be proprietary and incur steep costs.

Last year, SUSE launched Harvester, a 100% free-to-use, open-source, modern hyperconverged infrastructure solution built on a foundation of cloud-native technologies including Kubernetes, Longhorn, and KubeVirt. Harvester bridges the gap between traditional HCI software and the modern cloud-native ecosystem. It unifies VMs with cloud-native workloads and provides organizations with a single point of creation, monitoring, and control for an entire compute-storage-network stack. Because containers can run anywhere, from SoC ARM boards up to supercomputing clusters, Harvester is well suited to organizations with workloads spread over data centers, public clouds, and edge locations. Its small footprint makes it a perfect fit for edge scenarios, and when you combine it with SUSE Rancher, you can centrally manage all your VMs and container workloads across all your edge locations.
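
Because Harvester represents VMs as KubeVirt custom resources inside a Kubernetes cluster, both VMs and container workloads can be inspected through the same Kubernetes API. The sketch below uses the official Kubernetes Python client to do just that; it assumes a reachable cluster with a valid kubeconfig and KubeVirt-style VirtualMachine resources, and it is a generic illustration rather than a Harvester-specific API.

```python
# Illustrative sketch (not a Harvester-specific API): because Harvester runs VMs
# as KubeVirt custom resources on Kubernetes, VMs and container workloads can be
# inspected through the same Kubernetes API. Assumes a valid kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

# KubeVirt-style VirtualMachine custom resources (group kubevirt.io, version v1).
api = client.CustomObjectsApi()
vms = api.list_cluster_custom_object(
    group="kubevirt.io", version="v1", plural="virtualmachines"
)
for vm in vms.get("items", []):
    meta = vm["metadata"]
    print(f'VM: {meta["namespace"]}/{meta["name"]}')

# Regular container workloads live in the same cluster, on the same API surface.
pods = client.CoreV1Api().list_pod_for_all_namespaces(limit=5)
for pod in pods.items:
    print(f"Pod: {pod.metadata.namespace}/{pod.metadata.name}")
```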

VMs, containers, and HCI are critical technologies for extending IT service to new locations. Harvester represents how organizations can unify them and deploy HCI without proprietary closed solutions, using enterprise-grade open-source software that slots right into a modern cloud-native CI/CD pipeline.

To learn more about Harvester, we’ve provided the comprehensive report for you here.

Vishal Ghariwala is the Chief Technology Officer for the APJ and Greater China regions for SUSE, a global leader in true open source solutions. In this capacity, he engages with customer and partner executives across the region, and is responsible for growing SUSE’s mindshare by being the executive technical voice to the market, press, and analysts. He also has a global charter with the SUSE Office of the CTO to assess relevant industry, market and technology trends and identify opportunities aligned with the company’s strategy.

Prior to joining SUSE, Vishal was the Director for Cloud Native Applications at Red Hat where he led a team of senior technologists responsible for driving the growth and adoption of the Red Hat OpenShift, API Management, Integration and Business Automation portfolios across the Asia Pacific region.

Vishal has over 20 years of experience in the Software industry and holds a Bachelor’s Degree in Electrical and Electronic Engineering from the Nanyang Technological University in Singapore.

Vishal is here on LinkedIn: https://www.linkedin.com/in/vishalghariwala/

The world has become far more complicated. For businesses, the need to balance employee safety, changed expectations about how and where we work, and a shifting threat landscape has transformed the very nature of how we use our computers. While users have always wanted safe, reliable, high-performing PCs and notebooks, delivering this in the post-pandemic world poses an immense challenge. And with workplaces and teams distributed more widely than ever before, manageability faces a whole new set of obstacles.

Performance

Organisations need to ensure the computing platform they choose can deliver the performance they need while being as energy efficient as possible. The winner of a Grand Prix isn’t the fastest car. It’s the fastest car that stays in the race the longest. Performance is about more than the fastest CPU; it’s about ensuring you have the right processor, chipset, network and firmware all tuned to work together in harmony and at peak efficiency.

Great performance is about ensuring your computing platform ticks all those boxes.

Security

If we think about that Grand Prix-winning car, as well as having a powerful motor and great fuel efficiency so it can race faster for longer, it is also fitted with a variety of safety features to ensure the driver and those around them stay safe. Today’s threat environment moves faster than ever and can impact an organisation more quickly than ever before. Adversaries are constantly changing how they attack and are exploiting newly discovered vulnerabilities.

New software patches that thwart emerging threats and mitigate the risks of vulnerabilities need to be deployed quickly and easily. Organisations also need to be able to protect their data, which means the capability to remotely fix or wipe a device should it be lost or stolen is equally important.

The technology platform you choose needs built-in, multilayer, hardware-based security above and below the operating system to help defend against attacks, so IT teams can react quickly when a threat is detected without slowing users down, even when PCs are far from home. Security needs to be built into the technology platform by design, not bolted on as an afterthought.

Manageability

The COVID pandemic has changed the nature of work. Teams are now more distributed than ever so IT teams can’t rely on physical access to systems in order to support them. Old-school remote access systems were difficult to deploy and only gave IT teams limited ability to diagnose and fix problems.

Today’s computing platforms enable IT teams to remotely log in to users’ laptops to fix most issues, even if an operating system fails. Technology management and support teams need a platform that allows them to remotely log in to the device, wipe it if necessary, and reinstall the operating system and applications. This is a game changer for remote support.

A powerful manageability platform gives full KVM (keyboard, video, mouse) capability throughout the power cycle – including uninterrupted control of the desktop while an operating system loads. And it gives authorised support staff the ability to access and reconfigure the BIOS, so every aspect of the user’s experience can be controlled and optimised.

Stability

A winning Formula One car is more than the sum of its individual parts and a great PC is more than just hardware. An optimised platform ensures all the parts of the system work together perfectly so it doesn’t let users down or make support harder.

That requires the computing platform to be rigorously tested. And, as well as offering benefits for users in their day to day work, a stable platform delivers smoother fleet management. With the cost of supporting a PC estimated at around $5000 per year according to Gartner, building an easy-to-manage and stable fleet of computers using a well-designed and thoroughly tested computing platform can deliver great value to organisations.

Organisations looking for a platform that supports these four pillars need computers built on a foundation that delivers great performance and security, remains stable, and ensures users can keep working and be supported whenever they need the assistance of their IT team.

Whether you’re in education and need to support students on and off campus, or a large business with team members distributed across the world, the Intel vPro platform delivers the performance, security, manageability and stability organisations need to meet the demands of today.

What is a CAO?

A chief administrative officer (CAO) is a top-level executive responsible for overseeing the day-to-day operations of an organization and the company’s overall performance. CAOs are responsible for managing an organization’s finances as well as creating goals, policies, and procedures for the company to help it operate more efficiently and compliantly. They typically report directly to the CEO and act as a go-between for other senior-level management and the CEO.

CAOs often manage administrative staff and are also sometimes responsible for overseeing the accounting staff. These executives have a strong focus on policy, procedure, profits, and ensuring that all regulatory rules and regulations are followed. They work closely with departments and teams within the organization to ensure they’re operating effectively and to determine whether there is room for improvement. If a department is underperforming, a CAO can step in and identify what areas need to change or be improved to turn things around.

In addition to overseeing the daily operations of a company, CAOs also must have an eye on long-term strategic projects. That might include developing long-term budgets, developing and monitoring KPIs, training new managers, and keeping a pulse on changing regulatory and compliance rules.

Chief administrative officer responsibilities

The main responsibilities of a CAO are to ensure the company is operating efficiently daily, and to oversee relevant high-level management and other personnel. The CAO role can be found in several industries — most commonly in tech, finance, government, education, and healthcare. It’s a role that requires high-level decision-making, leadership skills, and strong communication skills. CAOs work closely with leaders across the organization and need to be able to communicate to the CEO how various departments are functioning within the company.

CAOs should have strong presentation skills and the ability to communicate complex business and financial information to other stakeholders in the company. It’s a role that requires an understanding of change management and an ability to juggle several complex projects at once. CAOs need a solid relationship built on trust with the CEO of the organization because they will work closely with them to improve business efficiency. 

The responsibilities of a CAO differ depending on industry, but general expectations for the role include:

Setting, monitoring, and managing KPIs for departments and management staff
Formulating strategic, operational, and budgetary plans
Working closely with and training new managers in administrative roles
Mentoring and coaching administrative staff within the organization
Performing manager evaluations
Working closely with the C-suite and board of directors
Staying up to date on the latest changes to government rules and regulations related to administrative tasks, accounting, and financial reporting

Chief administrative officer skills

While skills differ by industry, CAOs are expected to have the following general skillset:

Strategic planning
Team leadership
Legal compliance
Financial reporting
Regulatory compliance
Budget management
Strategic project management
Risk management/risk control
Ability to generate effective reports and give presentations
Knowledge of IRS laws, Generally Accepted Accounting Principles (GAAP), Securities and Exchange Commission (SEC) rules and regulations, and internal audit procedures within the company

Chief administrative officer vs. COO

The role of CAO is very similar to that of a chief operating officer (COO), as both are responsible for overseeing the operations of a business. The COO role, however, is more commonly found in companies that manufacture physical products, whereas the CAO role is better suited to companies focused on offering services. It’s not uncommon for a company to have both roles, depending on business needs.

Another difference between a CAO and COO is that CAOs oversee day-to-day operations and identify opportunities to improve departments, teams, and management within the organization. If a department isn’t performing well, a CAO will often take over as acting head of the department, working at the helm of the team or department to get a firsthand look at how it’s functioning and how it could be improved.  

Chief operating officers, by contrast, typically focus more on the overall operations of a business rather than the day-to-day operations of specific departments or teams. They’re responsible for overseeing projects such as choosing new technology upgrades, finding new plants for manufacturing, and overseeing physical supply chains.

At companies that have both a CAO and a COO, the two often work closely together to develop success metrics and goals for the company. Their roles are related enough that these two executives will have to strategize together when it comes to budgets or implementing regulatory and compliance rules. Both the CAO and COO have an eye on operations and efficiency, just in a different scope and area of the business.

Chief administrative officer salary

The average annual salary for a chief administrative officer is $122,748 per year, according to data from PayScale. Reported salaries for the role ranged from $67,000 to $216,000 depending on experience, certifications, and location. Entry-level CAOs with less than one year experience reported an average salary of $90,000, while those with one to four years’ experience reported an average annual salary of $93,174. Midlevel CAOs with five to nine years’ experience reported an average annual salary of $113,543, and experienced CAOs with 10 to 19 years’ experience reported an average annual salary of $133,343. Late career CAOs with over 20 years’ experience reported an average annual salary of $149,279.

Cairn Oil & Gas is a major oil and gas exploration and production company in India. It currently contributes 25% to India’s domestic crude production (about 28.4 MMT) and is aiming to account for 50% of the total output. The company plans to spend ₹3,160 crore (₹31.6 billion) over the next three years to boost its production.

The oil and gas industry currently confronts three major challenges: huge price fluctuations driven by volatile commodity prices, capital-intensive processes with long lead times, and managing production decline.

Sandeep Gupta, chief digital and information officer at Cairn Oil & Gas, is using state-of-the-art technologies to overcome these challenges and achieve business goals. “We have adopted a value-focused approach to deploying technological solutions. We partner with multiple OEMs and service integrators to deploy highly scalable projects across the value chain,” he says.

Reducing operational costs with drones, AI, and edge computing

The oil and gas industry is facing huge price fluctuation due to volatile commodity prices and geopolitical conditions. In such a scenario, it becomes crucial for the business to manage costs.

Sustained oil production depends on uninterrupted power supply. However, managing transmission lines is a high-cost, resource-intensive task. For Cairn, it meant managing 250km of power lines spread across 3,111 square kilometers. They supply power to the company’s Mangala, Bhagyam, and Aishwarya oil fields and its Rageshwari gas fields in Rajasthan.

To reduce operational costs, the company decided to use drones. The images captured by the drones are run through an AI image-recognition system. The system analyses potential damage to power lines, predicts possible failure points, and suggests preventive measures, thereby driving data-driven decision-making instead of operator-based judgment.

“Algorithms such as convolutional neural networks were trained on images captured when the overhead powerlines are running in their ideal condition. The algorithm then compares subsequent images, taken at six-month intervals, and captures any anomalies. An observation is then put into a portal for the maintenance team to take corrective and preventive action,” says Gupta.
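
The article does not detail how this comparison is implemented. As a rough illustration of the general pattern (comparing a fresh inspection image against a baseline image of the same powerline span, using a pretrained CNN as a feature extractor), here is a sketch in Python with PyTorch and torchvision; the model choice, file names, and threshold are illustrative assumptions, not Cairn’s actual system.

```python
# Illustrative sketch only — not Cairn's implementation. A pretrained CNN is used
# as a feature extractor; a new inspection image is flagged if its embedding
# drifts too far from the baseline image of the same powerline span.
import torch
import torch.nn.functional as F
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
backbone = models.resnet18(weights=weights)
backbone.fc = torch.nn.Identity()   # drop the classifier, keep 512-d features
backbone.eval()
preprocess = weights.transforms()

def embed(path: str) -> torch.Tensor:
    """Return a feature embedding for one image file."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return backbone(img).squeeze(0)

baseline = embed("span_017_baseline.jpg")   # hypothetical ideal-condition capture
current = embed("span_017_latest.jpg")      # hypothetical six-month follow-up
similarity = F.cosine_similarity(baseline, current, dim=0).item()

THRESHOLD = 0.85  # illustrative; in practice tuned on labelled inspections
if similarity < THRESHOLD:
    print(f"Possible anomaly on span 017 (similarity={similarity:.2f}); flag for review")
else:
    print(f"No significant change detected (similarity={similarity:.2f})")
```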

This is a service-based contract between Cairn and the maintenance provider, with monitoring carried out on a biannual basis for 220kV power lines and annually for 500kV power lines.

“Since the implementation of drone-based inspection, the mean time between failure has increased from 92 to 182 days. This has reduced oil loss to 2,277 barrels per year, leading to cost savings worth approximately ₹12 crores [₹120 million]. As it enables employees to carry out maintenance activities in an effective manner, a small team can work more efficiently, and the manpower required reduces,” Gupta says.

The remote location of operations, coupled with the massive volume of data generated (about 300GB per day at Cairn), makes the oil and gas industry ideal for edge computing devices.

With smart edge devices, critical parameters are stored and processed at remote locations. The devices, installed in the field, send data via the MQTT protocol where cellular network connectivity is available. They store up to 250GB of data on the Microsoft Azure cloud, perform analytics using machine-learning algorithms, and provide intelligent alarms.

Without these devices, the data generated would be transported to faraway data centres, clogging the network bandwidth. “Edge computing helps reduce our IT infrastructure cost as lower bandwidth is sufficient to handle the large volume of data. These devices deployed are tracking critical operational parameters such as pressure, temperature, emissions, and flow rate. The opportunity cost of not having edge computing would result in requiring a higher bandwidth of network, which would amount to around 2X of the current network cost,” says Gupta. “This also has an implication on the health and safety risk of our personnel and equipment.”
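
The article names MQTT as the transport but does not describe the payloads. As a minimal, hypothetical sketch, a field device might publish a telemetry reading like this using the paho-mqtt client; the broker address, topic, and values are placeholders, not Cairn’s configuration.

```python
# Illustrative sketch of an edge device publishing operational readings over MQTT.
# Broker address, topic, and values are placeholders, not Cairn's configuration.
import json
import time
import paho.mqtt.publish as publish

BROKER = "edge-gateway.example.com"   # hypothetical broker reachable over cellular
TOPIC = "field/site-a/wellhead-12/telemetry"

reading = {
    "timestamp": time.time(),
    "pressure_bar": 182.4,      # placeholder sensor values
    "temperature_c": 67.1,
    "flow_rate_m3h": 410.0,
}

# Publish one telemetry message; a real device would batch readings, buffer them
# locally when connectivity drops, and sync the backlog when the link returns.
publish.single(TOPIC, payload=json.dumps(reading), qos=1, hostname=BROKER, port=1883)
```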

Reducing lead times through a cloud-first strategy

The oil exploration process has a lead time of around three to five years and requires huge capital commitment. Out of these three to five years, a significant amount of time is taken up by petrotechnical experts (geologists, geophysicists, petroleum engineers, and reservoir engineers) in simulating models that require massive computational power.

Petrotechnical workflows entail evaluating subsurface reservoir characteristics to identify where wells should be drilled. Petrotechnical experts carry out these workflows using multiple suites of software applications that help determine the location and trajectory of the wells.

“Capital allocation and planning for future exploration has become riskier due to long lead times. To achieve our goals, increasing computing capabilities are essential. For this, we have adopted and executed a cloud-first strategy,” says Gupta. Thus, Cairn has completely migrated the workloads for petrotechnical workflows to the cloud. “This migration has removed the constraints of on-premises computational capabilities. As a result, there is almost 30% reduction in time to first oil,” he says.

Managing decline in production through predictive analytics

Cairn has considerable volume, variety, and velocity of data coming from different sources across production, exploration, and administration. “Using this data, we have deployed multiple large-scale projects, including predictive analytics, model predictive control, and reservoir management, which have been scaled across multiple sites,” says Gupta. Model predictive control (MPC) is a technology where the equipment is monitored for various operating parameters and is then operated in a particular range to get maximum efficiency, while maintaining the constraints in the system.
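
The article describes MPC only at this high level. The toy sketch below illustrates the core idea: choose control moves over a short horizon that optimize an objective while respecting operating limits, apply the first move, then re-optimize. The first-order plant model, constraints, and numbers are invented for illustration and have nothing to do with Cairn’s actual equipment.

```python
# Toy sketch of the model-predictive-control idea described above: pick control
# moves over a short horizon that track a setpoint while respecting operating
# limits. The plant model, constraints, and numbers are invented.
import numpy as np
from scipy.optimize import minimize

HORIZON = 5
SETPOINT = 75.0            # desired operating value (illustrative units)
U_MIN, U_MAX = 0.0, 10.0   # actuator limits

def simulate(u, x0=60.0):
    """Very simple first-order plant: state decays toward ambient, input drives it up."""
    x, states = x0, []
    for u_k in u:
        x = 0.9 * x + 1.5 * u_k + 2.0
        states.append(x)
    return np.array(states)

def cost(u):
    x = simulate(u)
    tracking = np.sum((x - SETPOINT) ** 2)   # stay close to the setpoint
    effort = 0.1 * np.sum(u ** 2)            # penalize aggressive control moves
    return tracking + effort

result = minimize(
    cost,
    x0=np.full(HORIZON, 5.0),
    bounds=[(U_MIN, U_MAX)] * HORIZON,
    method="L-BFGS-B",
)
print("Planned control moves:", np.round(result.x, 2))
print("First move applied before re-optimizing:", round(result.x[0], 2))
```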

At the heart of this lies Disha, a business intelligence initiative that uses dashboards driving critical actionable insights. “The philosophy for developing Disha was to make the right data available to the right people at the right time. We wanted to remove file-based data sharing and reporting as significant time goes in creating these reports. We connected data from various sources such as SAP HANA, Historian, Microsoft SharePoint, Petrel, LIMS, and Microsoft Azure cloud onto a single Microsoft PowerBI ecosystem where customized reports can be created,” says Gupta.

Disha was developed in a hybrid mode with an in-house team and an analytics provider over the course of three years. It offers more than 200 customized dashboards, including a well-monitoring dashboard, a production-optimisation dashboard, a CEO and CCO dashboard, and a rig-scheduling dashboard.

“With data now easily and quickly accessible in an interactive format across the organisation, which was earlier restricted to a select few, corrective actions for resource allocation are now based on the data,” Gupta says. “For instance, we leverage Disha to monitor the parameters and output of the electronic submersible pump, which handles oil and water. It helps us in tracking the gains achieved through MPC implementation. All this enables better decision-making and has helped us allocate resources in an optimized manner, thus managing the decline in productivity.”

Going forward, Cairn plans to partner with a few big analytics providers and build a single platform to help contextualize its data and deploy micro solutions according to business needs. “This will be a low-code platform that will enable individual teams to build solutions on their own,” Gupta says. “The initiatives are oriented towards sustaining production levels while reducing time to first oil. Some of the initiatives include artificial lift system monitoring, well monitoring, and well-test validation.”
