Technologies like the Internet of Things (IoT), artificial intelligence (AI), and advanced analytics provide tremendous opportunities to increase efficiency, safety, and sustainability. However, for businesses with operations in remote locations, the lack of public infrastructure, including cloud connectivity, often places these digital innovations out of reach.

Until recently, this has been the predicament of oil and gas companies operating oil wells, pipelines, and offshore rigs in remote, hard-to-reach locales. But the arrival of private 5G for oil and gas has changed this. Here’s how private 5G is transforming oil and gas operations in the field.

Secure bandwidth & real-time monitoring in remote locales

5G is a hardened telco technology that underpins some of the most secure networks in the world. Using this same technology, private 5G delivers an ultra-secure, restricted-access mobile network that gives businesses reliable connectivity and the bandwidth to support their data transmission needs.

Private 5G enables a transportable “network-in-a-box” solution that can be relocated to provide connectivity and bandwidth in remote locations. This self-contained network offers the low-latency connectivity needed to configure, provision, and monitor the network. Private 5G is also remarkably reliable, especially compared with traditional Wi-Fi, enabling superior communications and bandwidth-intensive, edge-to-cloud data transmission.

Increased productivity and efficiency

This highly reliable network solution is transforming oil and gas companies, which rely on heavy equipment with lots of moving parts, often running 24×7. By implementing intelligent IoT solutions that track vibrations, odors, and other conditions, oil and gas companies can monitor distributed, remote sites and equipment from a central location.

This is a game changer from an efficiency and productivity standpoint. For example, private 5G accelerates time to production for remote locations by eliminating the cost and time associated with coordinating with a telco to build out infrastructure. Additionally, private 5G helps oil and gas companies keep sites running smoothly, combining IoT solutions with AI and machine learning to enable predictive maintenance. This reduces costly equipment breakdowns and repairs, minimizes operational disruptions, and extends the life of hardware.

Furthermore, private 5G enables operators to diagnose and upgrade firmware and machinery and perform maintenance remotely. This decreases the need for travel and the number of crews in the field and reduces equipment downtime.

Private 5G enables improved safety and sustainability

Private 5G supports advanced solutions that boost workplace safety. Oil and gas companies can apply intelligent edge solutions to monitor for security breaches and safety hazards. IoT sensors can detect gas and equipment leaks, temperature fluctuations, and vibrations to avoid catastrophic events and keep employees safe.

From a sustainability standpoint, private 5G enables solutions that help prevent oil and gas leaks, reducing environmental impacts. Furthermore, oil and gas companies can implement smart solutions that minimize energy and resource usage and reduce emissions in the field.

Unlock the potential of private 5G

Private 5G is transforming oil and gas operations as well as businesses in other industries with remote, hard-to-reach operations. As an award-winning, international IT solutions provider and network integrator, GDT can help your organization design and implement an HPE private 5G solution to meet your specific needs.

HPE brings together cellular and Wi-Fi for private networking across multiple edge-to-cloud use cases. HPE’s private 5G solution is based on HPE 5G Core Stack, an open, cloud-native, container-based 5G core network solution.

To discover how private 5G can transform your business, contact the experts at GDT for a free consultation.

5G

The air travel industry has dealt with significant change and uncertainty in the wake of the COVID-19 pandemic. In 2020, JetBlue Airways decided its competitive advantage depended on IT — in particular, on transforming its data stack to consolidate data operations, operationalize customer feedback, reduce downstream effects of weather and delays, and ensure aircraft safety.

“Back in 2020, the data team at JetBlue began a multi-year transformation of the company’s data stack,” says Ashley Van Name, general manager of data engineering at JetBlue. “The goal was to enable access to more data in near real-time, ensure that data from all critical systems was integrated in one place, and to remove any compute and storage limitations that prevented crewmembers from building advanced analytical products in the past.”

Prior to this effort, JetBlue’s data operations were centered on an on-premises data warehouse that stored information for a handful of key systems. The data was updated on a daily or hourly basis depending on the data set, but that still caused data latency issues.

“This was severely limiting,” Van Name says. “It meant that crewmembers could not build self-service reporting products using real-time data. All operational reporting needed to be built on top of the operational data storage layer, which was highly protected and limited in the amount of compute that could be allocated for reporting purposes.”

Data availability and query performance were also issues. The on-premises data warehouse was a physical system with a pre-provisioned amount of storage and compute, meaning that queries were constantly competing with data storage for resources.

“Given that we couldn’t stop analysts from querying the data they needed, we weren’t able to integrate as many additional data sets as we may have wanted in the warehouse — effectively, in our case, the ‘compute’ requirement won out over storage,” Van Name says.

The system was also limited to running 32 concurrent queries at any one time, which created a queue of queries on a daily basis, contributing to longer query run-times.

The answer? The Long Island City, N.Y.-based airline decided to look to the cloud.

Near real-time data engine

JetBlue partnered with data cloud specialist Snowflake to transform its data stack, first by moving the company’s data from its legacy on-premises system to the Snowflake data cloud, which Van Name says greatly alleviated many of the company’s most immediate issues.

JetBlue’s data team then focused on integrating critical data sets that analysts had not previously been able to access in the on-premises system. The team made more than 50 feeds of near real-time data available to analysts, spanning the airline’s flight movement system, crew tracking system, reservations systems, notification managers, check-in systems, and more. Data from those feeds is available in Snowflake within a minute of being received from source systems.
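
As an illustration of what analyst access to such a feed can look like, here is a minimal sketch using the Snowflake Python connector. The connection details, table, and column names are hypothetical placeholders, not JetBlue’s actual schema.

```python
# Illustrative only: querying a near real-time feed in Snowflake with the
# official Python connector (snowflake-connector-python). The table and
# column names are hypothetical; JetBlue's actual schema is not public.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="ANALYTICS_WH",   # hypothetical virtual warehouse
    database="OPERATIONS",      # hypothetical database
    schema="FLIGHT_MOVEMENT",   # hypothetical schema
)

try:
    cur = conn.cursor()
    # Pull events that landed within the last five minutes.
    cur.execute(
        """
        SELECT flight_number, origin, destination, event_time
        FROM flight_events
        WHERE event_time >= DATEADD(minute, -5, CURRENT_TIMESTAMP())
        ORDER BY event_time DESC
        """
    )
    for row in cur.fetchall():
        print(row)
finally:
    conn.close()
```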

“We effectively grew our data offerings in Snowflake to greater than 500% of what was available in the on-premise warehouse,” Van Name says.

JetBlue’s data transformation journey is just beginning. Van Name says moving the data into the cloud is just one piece of the puzzle: The next challenge is ensuring that analysts have an easy way to interact with the data available in the platform.

“So far, we have done a lot of work to clean, organize, and standardize our data offerings, but there is still progress to be made,” she says. “We firmly believe that once data is integrated and cleaned, the data team’s focus needs to shift to data curation.”

Data curation is critical to ensuring analysts of all levels can interact with the company’s data, Van Name says, adding that building single, easy-to-use “fact” tables that can answer common questions about a data set will remove the barrier to entry that JetBlue has traditionally seen when new analysts start interacting with data.

In addition to near real-time reporting, the data is also serving as input for machine learning models.

“In addition to data curation, we have begun to accelerate our internal data science initiatives,” says Sai Pradhan Ravuru, general manager of data science and analytics at JetBlue. “Over the past year and a half, a new data science team has been stood up and has been working with the data in Snowflake to build machine learning algorithms that provide predictions about the state of our operations, and also enable us to learn more about our customers and their preferences.”

Ravuru says the data science team is currently working on a large-scale AI product to orchestrate efficiencies at JetBlue.

“The product is powered by second-degree curated data models built in close collaboration between the data engineering and data science teams to refresh the feature stores used in ML products,” Ravuru says. “Several offshoot ecosystems of ML products form the basis of a long-term strategy to fuel each team at JetBlue with predictive insights.”

Navigating change

JetBlue shifted to Snowflake nearly two years ago. Van Name says that over the past year, internal adoption of the platform has increased by almost 75%, as measured by monthly active users. There has also been a greater than 20% increase in the number of self-service reports developed by users.

Ravuru says his team has deployed two machine learning models to production, relating to dynamic pricing and customer personalization. Rapid prototyping and iteration are giving the team the ability to operationalize data models and ML products faster with each deployment.

“In addition, curated data models built agnostic of query latencies (i.e., queries per second) offer a flexible online feature store solution for the ML APIs developed by data scientists and AI and ML engineers,” Ravuru says. “Depending on the needs, the data is therefore served up in milliseconds or batches to strategically utilize the real-time streaming pipelines.”
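
Ravuru doesn’t detail the implementation, but the general pattern he describes, the same curated features served either via millisecond online lookups or in batches, can be sketched as follows. The store design, keys, and feature names here are hypothetical.

```python
# Hypothetical sketch of the online-vs-batch feature serving pattern
# described above; not JetBlue's implementation. An online store keyed
# for fast lookups sits alongside batch retrieval for training jobs.
from typing import Any

class FeatureStore:
    def __init__(self) -> None:
        self._online: dict[str, dict[str, Any]] = {}  # entity_id -> features

    def put(self, entity_id: str, features: dict[str, Any]) -> None:
        """Refresh the online store (e.g., from a streaming pipeline)."""
        self._online[entity_id] = features

    def get_online(self, entity_id: str) -> dict[str, Any]:
        """Millisecond-scale lookup used by a live ML API."""
        return self._online.get(entity_id, {})

    def get_batch(self, entity_ids: list[str]) -> list[dict[str, Any]]:
        """Bulk retrieval used by training or batch scoring jobs."""
        return [self.get_online(e) for e in entity_ids]

store = FeatureStore()
store.put("customer-123", {"trips_last_90d": 4, "prefers_window_seat": True})
print(store.get_online("customer-123"))
```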

While every company has its own unique challenges, Van Name believes adopting a data-focused mindset is a primary building block for supporting larger-scale change. It is especially important to ensure that leadership understands the current challenges and the technology options in the marketplace that can help alleviate those challenges, she says.

“Sometimes, it is challenging to have insight to all of the data problems that exist within a large organization,” Van Name says. “At JetBlue, we survey our data users on a yearly basis to get their feedback on an official forum. We use those responses to shape our strategy, and to get a better understanding of where we’re doing well and where we have opportunities for improvement. Receiving feedback is easy; putting it to action is where real change can be made.”

Van Name also notes that direct partnership with data-focused leaders throughout the organization is essential.

“Your data stack is only as good as the value that it brings to users,” she says. “As a technical data leader, you can take time to curate the best, most complete, and accurate set of information for your organization, but if no one is using it to make decisions or stay informed, it’s practically worthless. Building relationships with leaders of teams who can make use of the data will help to realize its full value.”

Analytics, Cloud Computing, Data Management

In 2016, Major League Baseball’s Texas Rangers announced it would build a brand-new, state-of-the-art stadium in Arlington, Texas. It wasn’t just a new venue for the team; it was an opportunity to reimagine business operations.

The old stadium, which opened in 1992, provided the business operations team with data, but that data came from disparate sources, many of which were not consistently updated. The new Globe Life Field not only boasts a retractable roof, but it produces data in categories that didn’t even exist in 1992. With the new stadium on the horizon, the team needed to update existing IT systems and manual business and IT processes to handle the massive volumes of new data that would soon be at their fingertips.

“In the old stadium, we just didn’t have the ability to get the data that we needed,” says Machelle Noel, manager of analytic systems at the Texas Rangers Baseball Club. “Some of our systems were old. We just didn’t have the ability that we now have in this new, state-of-the-art facility.”

The new stadium, which opened in 2020, was a chance to develop a robust and scalable data and analytics environment that could provide a foundation for growth with scalable systems, real-time access to data, and a single source of truth, all while automating time-consuming manual processes.

“We knew we were going to have tons of new data sources,” Noel says. “Now what are we going to do with those? How are we going to get them? Where are we going to store them? How are we going to link them together? Moving into this new building really catapulted us into a whole new world.”

Driving better fan experiences with data

Noel had already established a relationship with consulting firm Resultant through a smaller data visualization project. She decided to bring Resultant in to assist, starting with the firm’s strategic data assessment (SDA) framework, which evaluates a client’s data challenges in terms of people and processes, data models and structures, data architecture and platforms, visual analytics and reporting, and advanced analytics. Resultant then provided the business operations team with a set of recommendations for going forward, which the Rangers implemented with the consulting firm’s help.

Noel notes that her team is small, so the consultancy helped by providing specific expertise in certain areas, like Alteryx, which is the platform the team uses for ETL.

Resultant recommended a new, on-prem data infrastructure, complete with data lakes, to provide stakeholders with a better way to manage data reliability, accuracy, and timeliness. The process included co-developing a comprehensive roadmap, project plan, and budget with the business operations team.

“At the old stadium, you’d pull up at the park and you’d give somebody your $20 to park and they would put that $20 in their fanny pack,” says Brian Vinson, client success leader and principal consultant at Resultant. “Then you’d get to the gate and show them your paper ticket. They would let you in and then you would go to your seat, then maybe you’d go buy some concessions. You’d scan your credit card to get your concessions or your hat, or pay cash, and the team wouldn’t see that report until the next day or the next week.”

In those days, when a game ended, it was time for business operations to get to work pulling data and preparing reports, which often took hours. 

Resultant helped the Rangers automate that task; the report is now generated automatically within an hour of a game’s completion. The new environment also generates near real-time updates that can be shared with executives during a game. This allows the operations team to determine which stadium entrances are busiest at any given time so it can better distribute staff, promotional items, and concession resources. Departments can see what the top-selling shirts (and sizes) are at any given time, how many paper towels are left in any given restroom, even how many hot dogs are sold per minute.

“With digital ticketing and digital parking passes, we know who those people are, and we can follow the lifecycle of someone from when they come into the lot and which gate they came in,” Noel says. “We can see how full different sections get at what point in time.”

The team can also use the data to enhance the fan experience. A system the Rangers call ‘24/7’ logs all incidents that occur during an event — everything from spill clean-up, emptying the trash, and replacing a lightbulb to medical assistance. This system helped the operations team notice a problem with broken seats in the stadium and approach their vendor with the data.

“We were able to take the data from that system and determine that we actually had a quality control problem with a lot of our new seats,” Noel says.  “We were able to proactively replace all the seats that were potentially in that batch. That enhances the fan experience because they’re not coming into a broken seat.”

Lessons learned

Noel and Vinson agree that one of the biggest lessons learned from the process is that it’s important to share successes and educate stakeholders about the art of the possible.

“The idea that ‘if you build it, they will come,’ does not always work, because you can build stuff and people don’t know about it,” Vinson says. “In the strategic data assessment, when people were like, ‘Oh, you can show us the ice cream sales?’ Yeah. I think you have to toot your own horn that, yes, we have this information available.”

When the business operations team first presented the new end-of-game report in an executive meeting, the owners asked to be included. Now, Noel says, they want it for every game, every event, every concert.

“Now, when we do a rodeo and it doesn’t come out when they expect it, they’re like, ‘Okay, where are my numbers?’ They want that information,” she says.

Analytics, Data Management

The education sector in the UK is seeing incredible transformation with the expansion of multi-academy trusts (MATs) and the government’s requirement to have all schools in MATs by 2030. This brings unprecedented challenges, but also an enormous opportunity for positive education reform.

Core to this challenge for MATs is the management of financial operations, budgets, and funding across large numbers of schools. Their ability to grow has been impeded by legacy accounting solutions, making it an expensive and lengthy process to set up, onboard, and report on new schools as they are brought into the trust.

Sage, an Amazon Web Services (AWS) partner, is a world leader in financial technology. Sage Intacct is next-generation accounting software that enables the transformation and scaling of financial operations that MATs will need to undertake.

Trusts need to consider four key topics when transforming their complex accounting and reporting operations: scale and expansion, automation, integration, and reporting.

Trusts need to grow, scale, and expand. Having a system that can support the simple and fast addition of new schools, or other entities, is critical to successful expansion. Modern systems like Sage Intacct allow this to be done in minutes, removing expensive setup costs and the wait for consultants to deliver.

Next up is automation, probably the greatest tool in your arsenal for mitigating the time and cost of financial operations. Leveraging automation, Sage Intacct can reduce your finance team’s day-to-day admin, alleviating manual jobs and using technology such as optical character recognition (OCR) to accurately read and import financial documents.

The arrival of cloud accounting opened the gate to integrated systems and the harmonising of processes and data. External applications such as forecasting tools, accounts payable (AP) processing, and approval management all allow for huge savings in time and offer improved technology solutions. It’s also possible to integrate bank accounts and have daily transaction feeds, saving your team yet another job of importing and matching bank data.
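
To illustrate the kind of manual work such feeds remove, below is a naive sketch of matching imported bank transactions to ledger entries by amount and date. The records and tolerance are hypothetical, and real accounting platforms, Sage Intacct included, apply far richer matching rules than this.

```python
# Naive sketch of bank-feed matching: pair imported bank transactions with
# ledger entries by amount and date. Real products use far richer rules;
# the records below are hypothetical, and double-matching is ignored.
from datetime import date

bank_feed = [
    {"date": date(2022, 9, 1), "amount": -250.00, "ref": "STAPLES"},
    {"date": date(2022, 9, 2), "amount": -1200.00, "ref": "COACH HIRE"},
]
ledger = [
    {"date": date(2022, 9, 1), "amount": -250.00, "memo": "Office supplies"},
    {"date": date(2022, 9, 3), "amount": -80.00, "memo": "Postage"},
]

def match(bank, ledger_entries, day_tolerance=2):
    """Return (matched, unmatched) pairs of (bank txn, ledger entry)."""
    matched, unmatched = [], []
    for txn in bank:
        hit = next(
            (e for e in ledger_entries
             if e["amount"] == txn["amount"]
             and abs((e["date"] - txn["date"]).days) <= day_tolerance),
            None,
        )
        (matched if hit else unmatched).append((txn, hit))
    return matched, unmatched

matched, needs_review = match(bank_feed, ledger)
print(f"{len(matched)} matched, {len(needs_review)} need review")
```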

Finally, powerful, fast, and accurate reporting is the pièce de résistance of your new accounting platform. Out-of-the-box Education and Skills Funding Agency and Department for Education reports ensure you have all the right data required for government reporting. And multi-entity consolidation allows you to have complete oversight of the trust’s financials.

Sage Intacct is built to support growing trusts that need to minimise complexity and maximise impact.

Trusts that want to reduce the cost of financial operations and overcome the challenges of scale and growth should start by reviewing their accounting infrastructure with expert help. Sage works closely with its education partner, ION, to help deliver the smartest and most intelligent financial solutions to schools. ION has delivered Sage Intacct for education to multiple MATs, implementing the game-changing software, training staff on how to use it, and providing world-class support. This partnership sets a trust up for success, allowing the focus to be on education and growth rather than on managing finances.

To find out more about the benefits of cloud accounting software for multi-academy trusts, click here.

Education Industry, Financial Services Industry

The end of the Great Resignation — the latest buzzword referring to a record number of people quitting their jobs since the pandemic — seems to be nowhere in sight.

“New employee expectations, and the availability of hybrid arrangements, will continue to fuel the rise in attrition. An individual organization with a turnover rate of 20% before the pandemic could face a turnover rate as high as 24% in 2022 and the years to come,” says Piers Hudson, senior director in the Gartner HR practice.

The Global Workforce Hopes and Fears Survey, conducted by PwC, predicts that one in five workers worldwide may quit their jobs in 2022, with 71% of respondents citing salary as the major driver for changing jobs.

The challenge for IT leaders is clear: With employees quitting faster than they can be replaced, the rush to hire the right talent is on — so too is the need to retain existing IT talent.

But for Kapil Mehrotra, group CTO at National Collateral Management Services (NCMS), high turnover presented an opportunity to cut costs of the IT department, streamline its operations, and find a long-term solution to the perpetual skills scarcity problem.

Here’s how Mehrotra transformed the Great Resignation into a new approach for staffing and skilling up the commodity-based service provider’s IT department.

Losing 40% of domain expertise in one month

From an IT infrastructure standpoint, NCMS is 100% on the cloud. The company’s IT department comprised 27 employees, with one person each handling business analytics and cybersecurity, and the rest of the team split between handling infrastructure and applications. The applications had been transformed into SaaS and PaaS environments.

With a scarcity of experienced and skilled resources in the market and companies willing to poach developers to fulfill their needs, it was just a matter of time before NCMS too saw churn in its IT department.

“In March, 10 of the 27 employees from the IT department resigned when they received job offers with substantial hikes. At that time, application migration was under way, and our supply chain software was also getting a major upgrade. The sudden and substantial drop of 40% in the department’s strength made a significant impact on several such high-priority projects,” says Mehrotra.

“Those who left included an Android expert and specialists in the fields of .Net and IT infrastructure. As the company had legacy systems, it became tough to hire resources that could manage them. Nobody wanted to deal with legacy solutions. The potential candidates would convey their inability to work on such systems by showing their certifications on newer versions of the solutions,” he says.

Besides, what few skilled resources were available for hire expected exorbitant salaries. “This would have not only impacted our budget but would have also created an imbalance in the IT department. HR wanted to maintain the equilibrium that would have otherwise got disturbed had we hired someone at a very high salary compared to existing team members who had been in the company for years,” says Mehrotra.

Nurturing fresh talent in-house

So, while most technology leaders were scouting for experienced and skilled resources, Mehrotra decided to hire fresh talent straight from nearby universities. Immediately after the employees quit, he went to engineering colleges in Gurgaon and shortlisted 20 to 25 CVs. Mehrotra eventually hired four candidates, taking the depleted IT department’s head count to 21.

But Mehrotra now had two challenges at hand: He had to train the freshers and kickstart the pending high-priority projects as soon as possible.

“I told the business that we wouldn’t be able to take any new requirements from them for the next three months. This gave us the time to groom the freshers. We then got into a task-based contract with the outgoing team members. As per the contract, the team members who had exited were to complete the high-priority projects over the next months at a fixed monthly payout. If the project spilled over to the next month, there would be no additional payout,” Mehrotra says.

“Adopting this approach not only enabled completion of the projects hanging in limbo, but also provided the freshers with practical and hands-on training. The ex-employees acted as mentors for the freshers, who were asked to write code and do research. All this helped the new employees in getting a grip on the company’s infrastructure,” he says.

In addition, Mehrotra also got the freshers certified. “One got certified on .Net while another on Azure DevOps,” says Mehrotra.

New recruits help slash costs, streamline operations

The strategy of bringing first-time IT workers onboard has helped Mehrotra in slashing salary costs by 30%. “The new hires have come at a lower salary and have helped us in streamlining the operations. We are getting 21 people to do the work that was earlier done by 27 people. The old employees used to work in a leisurely manner. They used to enter office late, open their laptops at 11 a.m., and take regular breaks during working hours. The commitment levels of freshers are higher, and they stay in a company for an average of three years,” says Mehrotra.

After three months of working with the mentors, the freshers came up to speed. “We started taking requirements from business. The only difference working with freshers is that as an IT leader, I have stepped up and taken more responsibility. I make sure that I participate even in normal meetings to avoid any conflicts. Earlier what got completed in one day is currently taking seven days to complete. Therefore, we take timelines accordingly. We are currently working at 70% of our productivity and expect to return to 100% in the next three months,” says Mehrotra.

Sharing his learnings with other IT leaders, he says, “There will always be a skills scarcity in the market, but the time has come to break this chain. Hiring resources at ever-increasing salaries is not a sustainable solution. The answer lies in leveraging freshers. Just like big software companies, CIOs also must hire, train, and retain freshers. We must nurture good resources in-house to bridge the skills gap.” Mehrotra is now back to hiring and has approached recruitment consultants with a mandate to fill 11 positions, which are open to all, including candidates with as little as six months to a year’s experience.

IT Skills

Good cyber hygiene helps the security team reduce risk. So it’s not surprising that the line between IT operations and security is increasingly blurred. Let’s take a closer look.

One of the core principles in IT operations is “you can’t manage what you don’t know you have.” By extension, you also can’t secure what you don’t know you have. That’s why visibility is important to both IT operations and security. Another important aspect is dependency mapping, a facet of visibility that shows the relationships between your servers and the applications or services they host.

There are many security use cases where dependency mapping comes into play. For example, if there’s a breach, dependency mapping offers visibility into what’s affected. If a server is compromised, what is it talking to? If it must be taken offline, what applications will break?
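
To make the idea concrete, here is a minimal sketch of a dependency map as a simple mapping from servers to the applications that depend on them. All server and application names are hypothetical.

```python
# Minimal sketch of a dependency map: servers -> the applications/services
# that depend on them. All names here are hypothetical illustrations.
from collections import defaultdict

dependencies = defaultdict(set)

def add_dependency(server: str, app: str) -> None:
    """Record that `app` runs on (or depends on) `server`."""
    dependencies[server].add(app)

def blast_radius(server: str) -> set[str]:
    """Everything that may break if `server` is compromised or taken offline."""
    return dependencies.get(server, set())

add_dependency("db-01", "billing-api")
add_dependency("db-01", "customer-portal")
add_dependency("web-03", "customer-portal")

# If db-01 must be taken offline, which applications will break?
print(blast_radius("db-01"))  # {'billing-api', 'customer-portal'}
```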

To further erase the line between IT operations and security, many operations tools have a security dimension as well.

What is good cyber hygiene?

Good cyber hygiene is knowing what you have and controlling it. Do you have the licenses you need for your software? Are you out of compliance and at risk for penalties? Are you paying for licenses you’re not using? Are your endpoints configured properly? Is there software on an endpoint that shouldn’t be there? These questions are all issues of hygiene, and they can only be answered with visibility and control. 

To assess your cyber hygiene, ask yourself:

- What do you have?
- Is it managed?
- Do managed endpoints meet the criteria set for a healthy endpoint?

Think of endpoints in three categories: managed, unmanaged and unmanageable. Not all endpoints are computers or servers. That’s why good cyber hygiene requires tools that can identify and manage devices like cell phones, printers and machines on a factory floor.

There is no single tool that can identify and manage every type of endpoint. But the more visibility you have, the better your cyber hygiene. And the better your risk posture.

Work-from-home (WFH) made visibility much harder. If endpoints aren’t always on the network, how do you measure them? Many network tools weren’t built for that. But once you know what devices you have, where they are and what’s on them, you can enforce policies that ensure these devices behave as they should.

You also want the ability to patch and update software quickly. When Patch Tuesday comes around, can you get critical patches on all your devices in a reasonable time frame? Will you know in real time what’s been patched and what wasn’t? It’s about visibility.

That way, when security comes to operations and asks, “There’s a zero-day flaw in Microsoft Word. How many of your endpoints have this version?” operations can answer the question. They can say, “We know about that, and we’ve already patched it.” That’s the power of visibility and cyber hygiene.
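
With a fresh, complete inventory, that answer reduces to a simple filter. The sketch below assumes a hypothetical inventory structure and vulnerable version string purely for illustration.

```python
# Sketch: answering "how many endpoints run this vulnerable version?" from
# a fresh inventory. Endpoint records and version strings are hypothetical.
inventory = [
    {"host": "lt-0142", "software": {"Microsoft Word": "16.0.15330"}},
    {"host": "lt-0977", "software": {"Microsoft Word": "16.0.15427"}},
    {"host": "dt-0311", "software": {}},  # Word not installed
]

VULNERABLE = "16.0.15330"  # hypothetical affected build

affected = [
    ep["host"] for ep in inventory
    if ep["software"].get("Microsoft Word") == VULNERABLE
]
print(f"{len(affected)} endpoint(s) affected: {affected}")
```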

Good hygiene delivers fresh data for IT analytics

Good hygiene is critical for fresh, accurate data. But in terms of executive hierarchy, where does the push for good cyber hygiene start? Outside of IT and security, most executives probably don’t think about cyber hygiene. They think about getting answers to questions that rely on good IT hygiene.

For example, if CFOs have a financial or legal issue around license compliance, they probably assume the IT ops team can quickly provide answers. Those executives aren’t thinking about hygiene. They’re thinking about getting reliable answers quickly.

What C-level executives need are executive dashboards that can tell them whether their top 10 business services are healthy. The data the dashboards display will vary depending on the executive and business the organization is in.

CIOs may want to know how many Windows 10 licenses they’re paying for. The CFO wants to know if the customer billing service is operating. The CMO needs to know if the customer website is running properly. The CISO wants to know about patch levels. All of these diverse performance questions depend on fresh data for accurate answers.

Fresh data can bring the most critical issues to the dashboard, so management doesn’t have to constantly pepper IT with questions. All this starts with good cyber hygiene.

Analytics supports alerting and baselining

When an issue arises, such as a critical machine’s CPU use going off the charts, an automated alert takes the burden off IT to continuously search for problems. This capability is important for anyone managing an environment at scale; don’t make IT search for issues.

Baselining goes hand-in-hand with alerting because alerts must have set thresholds. Organizations often need guidance on how to set thresholds. There are several ways to do it and no right way.

One approach is automatic baselining. If an organization thinks its environment is relatively healthy, the current state is the baseline. So it sets up alerts to notify IT when something varies from that.

Analytics can play an important role here by helping organizations determine whether normal is the same as healthy. Your tools should tell you what a healthy endpoint looks like, and that becomes the baseline. Alerts tell you when something happens that changes that baseline state.
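
As a rough illustration of automatic baselining, the sketch below learns a baseline from samples collected during a presumed-healthy period and flags large deviations. The CPU metric and three-sigma threshold are illustrative assumptions, not a prescription.

```python
# Rough sketch of automatic baselining: treat the recent "normal" state as
# the baseline and alert when a new sample deviates too far from it.
# The CPU metric and 3-sigma threshold are illustrative assumptions.
import statistics

def build_baseline(samples: list[float]) -> tuple[float, float]:
    """Return (mean, stdev) of historical metric samples."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value: float, mean: float, stdev: float,
                 sigmas: float = 3.0) -> bool:
    """Flag values more than `sigmas` standard deviations from baseline."""
    return abs(value - mean) > sigmas * stdev

cpu_history = [22.0, 25.0, 24.0, 23.5, 26.0, 24.5]  # % CPU, healthy period
mean, stdev = build_baseline(cpu_history)

if is_anomalous(97.0, mean, stdev):
    print("ALERT: CPU use far outside baseline")
```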

Analytics helps operations and security master the basics

Visibility and control are the basics of cyber hygiene. Start with those. Know what’s in your environment and what’s running on those assets—not a month ago—right now. If your tools can’t provide that information, you need tools that can. You may have great hygiene on 50 percent of the machines you know about, but that won’t get the job done. Fresh data from every endpoint in the environment: that’s what delivers visibility and control.

Need help with cyber hygiene? Here’s a complete guide to get you started.

Analytics

A modern, agile IT infrastructure has become the critical enabler for success, allowing organizations to unlock the potential of new technologies such as AI, analytics, and automation. Yet modernization journeys are often bumpy; IT leaders must overcome barriers such as resistance to change, management complexity, high costs, and talent shortages.

Those successful in their modernization endeavors can expect significant business gains. In Ampol’s case, the transport fuels provider enjoyed enhanced operational efficiency, business agility, and maximized service uptimes.

A vision for transformation, hampered by legacy

Ampol had a clear goal: intelligent operations for improved service reliability, increased agility, and reduced cost. To achieve this, Ampol created a vision centered on “uplifting and modernizing existing cloud environment and practices,” according to Lindsay Hoare, Ampol’s Head of Technology.

This meant having enterprise-wide visibility and environment transparency for real-time updates, modernizing its environment management capabilities with cloud-based and cloud-ready tools, building the right capabilities and skillsets for the cloud, redesigning the current infrastructure into a cloud-first one, and leveraging automation for enhanced operations.  

While Ampol had most workloads in the cloud, it was still highly dependent on its data center. This meant added complexity in infrastructure networking and management, which in turn drove up maintenance and management costs. The need for human intervention across the environment further increased the risk of error and resultant downtime. Its ambition to enable automation across the entire enterprise felt unattainable at that point in time, as it lacked the technical expertise and capabilities to do so.

Realizing its ambitions with the right partner

Ampol knew it was not able to modernize its enterprise and bridge the ambition gap alone. It then turned to Accenture. “We needed a partner with a cloud mindset, one that could cover the technological breadth at which Ampol operates,” said Hoare. “Hence why we turned to Accenture, with whom we’ve built a strong partnership that has spanned over a decade.”

Accenture has been helping Ampol in its digital transformation journey across many aspects of its IT operations and as such has a deep understanding of Ampol’s automation ambitions.

“We brought to the table our AIOps capability that leverages automation, analytics, and AI for intelligent operations. Through our ongoing work with Ampol, we were able to accelerate cloud adoption alongside automation implementation, reducing implementation and deployment time,” said Duncan Eadie, Accenture’s Managing Director of Cloud, Infra, and Engineering for AAPAC.

Reaping business benefits through intelligent operations

Through its collaboration with Accenture, Ampol was able to realize its vision for intelligent operations, which in turn translated into business benefits.

Visualization and monitoring

Ampol can now quickly pinpoint incidents to reduce time to resolution. Recently, a device failure impacted Ampol’s retail network and service stations, but a map-based visualization of the network allowed engineers to identify the device and switch over to the secondary within the hour: an 85% reduction in downtime.

Self-healing capabilities

Intelligent operations not only detect failures but also attempt to resolve them independently and create incidents for human intervention only when basic resolution is unsuccessful. As a result, Ampol’s network incidents have been reduced by 40% while business-impacting retail incidents are down by half.

Automating mundane tasks

Automation now regularly takes care of mundane and routine tasks such as patching, updates, virtual machine builds, and software installs. This frees up employees’ time that is otherwise spent on maintenance, enabling them to innovate and add real business value through working on more strategic assignments and business growth.

Future-proofing

As Ampol focuses on the global energy transition, it is investing in new energy solutions in a highly dynamic environment. A cloud-first infrastructure removes complexity, increases the levels of abstraction, and offers greater leverage of platform services, enabling agility and responsiveness. The right architecture and security zoning facilitate critical business-led experimentation and innovation to ensure Ampol stays at the front of the pack.

As IT infrastructure becomes a critical enabler across industries, organizations are compelled to embrace modernization. While significant roadblocks exist, a clear vision and the right partner can help overcome challenges and unlock the potential of the cloud, AI and analytics, and automation, to be a true game-changer.

“This is a long journey,” says Hoare. “We’ve been at it for years now… It needs drive and tenacity. But when you get there, you’ll be in a great place.”

Learn more about getting started with a modern infrastructure here.

Cloud Management, Digital Transformation

This article was co-authored by Duke Dyksterhouse, an Associate at Metis Strategy.

After transforming their organization’s operating model, realigning teams to products rather than to projects, CIOs we consult arrive at an inevitable question: “What next?” Of the many possible answers, some of our clients elect to carry the transformation further by separating their employees into two groups: those responsible for operations and those responsible for innovation.

By operations, we mean work that fixes or hones the processes and tools already employed in an organization. You might know it by one of its aliases: sustain, keep-the-lights-on, run-the-business, or support. By innovation we mean transformational work, the construction of new processes and products, often of the sort that generate revenue, improve experiences, or pivot the enterprise.

In many organizations, the same group, team, or even individual handles both responsibilities, which is fine. But by assigning these responsibilities to different resources, some organizations can drive focus, sharpen capacity calculations, and simplify strategic planning, especially amid a product-led operating model, which can make such a division more attractive for several reasons.

First, it’s a straightforward proposition whose end state is relatively easy to envision and measure, making it a nice palate cleanser for those still wrapping their heads around the broader operating model shift. Second, because product teams are permanent, unlike temporary project teams, product-led operating models are more amenable to a division of responsibility that is more methodical and longer-standing. And finally, separating the two roles within product teams can give individuals more clarity and focus, primarily by reducing multitasking.

Splitting these responsibilities without a clear vision and careful plan, however, can spell disaster, reversing the progress begotten by a new operating model. If you’re considering separating operations and innovation responsibilities in your own organization, weigh the following trade-offs before deciding. And if you proceed with the split, let the principles below guide your moves.

Trade-offs

Advantages

Focus: Enhanced focus is perhaps the greatest benefit teams stand to gain from a division of responsibilities. That focus can streamline operations and bring much-needed structure to the time spent exploring new ideas, vital to a company’s long-term success. Employees responsible for both innovation and operations too often are forced (usually by their own managers and technology leaders) to sacrifice the former in favor of the latter. Dividing the labor by work type helps guard against this.

Streamlined capacity management and resource planning: Capacity management becomes easier when it’s split into smaller pieces, especially when the split is by type of work. Operations will always take priority over innovation whenever there’s a fire. The problem is that there’s always a problem: a server to be restored, a computer to be fixed, a security flaw to be patched. If the resources responsible for keeping things up and running are the same as those responsible for transforming the company, it stands to reason that the company’s innovative activities will stall, and its capacity calculations will prove an unreliable input to its strategy and budget. For those considering outsourcing or offshoring key functions of IT, the split can shine light on which capabilities are commoditized and which are differentiating. 

Clearer strategic planning: Splitting operations and innovation doesn’t erase each one’s dependencies on the other, but the split can make those dependencies easier to coordinate, in part due to the clarity gained through streamlined capacity management and budgeting. Road-mapping and transformations also become easier as each group can undertake the work that will most affect its assigned success metrics. When operations and innovation activities reside under the same umbrella, those metrics might be at odds, such as measures of reliability and stability versus those of experimentation.

Disadvantages

Navigating the divide: The biggest downside to separating responsibilities is that doing so introduces an explicit divide that teams and their leaders must navigate. Their failure to do so can create work silos and dilute responsibility. Innovation teams, once they’ve developed a viable product, must resist the temptation to “throw their work over the wall” to the ops team. That temptation runs counter to the spirit of today’s best product-oriented operating models, and giving in to it will return the organization to square one. Establishing norms that specify how long a new product will be owned by innovation, what performance measures must be met before it is transitioned, and the knowledge transfer process is critical for organizations that successfully navigate the divide.

Relationship Management: In product teams where there is no formal split between responsibilities, teammates will often come to some tacit agreement of who’s responsible for what. In part, this is because they are held accountable as a team. But where there is a formal split, that agreement may dissolve and thus introduce a need for deliberate coordination. If that need exists, address it. Instate a manager to oversee both parties. Or instate procedures or cadences that keep them aligned. Whatever the solution, it must make unmistakably clear who is responsible for what.

Operations Burnout: While many will love focusing on ops, there will be others who despise it and view it as a career-limiting move. Have discussions with your teams. See what moves make sense for individuals’ career aspirations. Consider the idea of rotational programs to provide the option or requirement to work in different domains to develop a “full-stack” skill set.

Key principles and considerations

Splitting responsibilities should not be taken lightly. Doing so can destroy the gains made in the shift to a product-focused operating model, with the consequences reverberating across every part of the organization. If you do decide to draw the line, keep these principles top of mind to help ensure the split preserves momentum and delivers value.

Create a “One IT” mindset: Splitting responsibilities should not equate to splitting the team, at least in spirit. A sports analogy might be appropriate here. While the players on a sports team have different responsibilities, they play as a single unit. Similarly, an ops-innovation divided team must play as a single unit, chasing the same objectives, attending the same strategic meetings, and anticipating the consequences of each other’s moves.

Determine the appropriate level for the split: You needn’t split the responsibilities of all teams identically; often they can be split at multiple levels in an operating model. Consider a model in which product teams are loosely grouped by links in the value chain. For one link, say Marketing & Sales, you may decide it’s appropriate to divide operations and innovation at the broadest level of that link, sharing the operations resources across all product teams that compose Marketing & Sales. But for another link, such as Corporate Financials, you might split responsibilities at a more granular level, perhaps by individual product teams. In that case, operations resources are not shared across the link but dedicated to a specific team. The consideration here is the same as all centralization-decentralization trade-offs: standardization versus customization.

Take the time to clearly define operations versus innovation work: Define precisely what qualifies as operations and what as innovation; ambiguity will lead to chaos and strain. A client of ours in the healthcare industry worked closely with its engineers to classify work right down to the ticket type.

Stay focused on agility and business value: The goals and tempos of the two groups will vary, but that’s no excuse to operate in isolation. Teams must be coordinated in their moves. Two effective means of engendering that coordination are: one, align teams to the same business objectives. If the teams’ work doesn’t eventually translate to customer value, then it’s moot. And two, if the teams follow different Agile methodologies, align their key elements: their release schedules, their PI planning, perhaps even their retrospectives. These ceremonies are like the beats in a song; they will keep teams in sync even if they dance to different melodies.

Have impeccable ITSM: If you split responsibilities and one side of the division struggles, the other side will absorb the load, and you’ll lose the benefits of the split while still incurring its costs. So, before you split things, hone your ITSM. Hire resources with the right skills, arm them with the right tools, and lay tight escalation paths that they can follow when they do, in fact, need help from the innovation teams.

Embrace APIs and microservices: After splitting operations and innovation, there will be a constant and ever-evolving need to align the systems and processes that govern the two groups. A robust catalog of APIs and microservices can alleviate many of these pressures by empowering teams to navigate this split for themselves, rather than having the coordination handed down to them from the top.

Dividing resources by the type of work they’re responsible for, operations versus innovation, can amplify the benefits of a product-oriented model. But it’s a move that requires precision. Articulating what qualifies as each type of work, dividing at the right level of the op model, coordinating teams to move as a unit—these are but a few of the variables that can squelch an op model’s benefits if handled nonchalantly. Also, to divide responsibilities is not categorically better, even when it’s done right. Whether such a split is silly or sage depends on the idiosyncrasies of the organization. If your gut urges you to keep teams together, listen to it. We’ve laid out the advice that we have simply to say: if you do decide to split things, split them like you mean it.

Business Operations, Digital Transformation, Innovation

In the coming years, NASA’s James Webb telescope will peer toward the edge of the observable universe, allowing astronomers to search for the very earliest stars and galaxies, formed more than 13 billion years ago.

That’s quite a contrast to today’s network operations visibility, which can sometimes feel like the lens cap has been left on the telescope. Explosive growth in new technology adoption, mounting complexity, and the expanding use of internet and cloud networks have created unprecedented blind spots in how we monitor network delivery.

These visibility gaps obscure knowledge about critical application and service performance. They can also hide security threats, making them more difficult to detect. Ultimately, they can impact customer experience, revenue growth, and brand perception.

A global survey by Dimensional Research finds that 81% of organizations have network blind spots. More than 60% of larger companies state they have 50,000 or more network devices, and 73% indicate it is growing increasingly difficult to manage their network. According to the study, removing network blind spots and increasing monitoring coverage will improve security, reliability, and performance.

Dimensional Research also reports that current monitoring and operations solutions are ill-equipped for the tasks at hand and unable to support a massive influx of new technology over the next two years, leading to slower adoption and deployment with increased business risk.

Without solutions that deliver expanded visibility into remote locations, un-managed networks, and traffic patterns, IT can become overly dependent on end-users to report service issues after these problems have impacted performance. And no organization wants that to happen.

Performance insights across the edge infrastructure and beyond 

The massive adoption of SaaS and Cloud apps has made the job of IT even harder when it comes to understanding the performance of business functions. With no visibility into the internet that delivers these apps to users, IT is forced to resort to status pages and support tickets to determine if an outage does or does not affect users.

Now is the time to rethink network operations and evolve traditional NetOps into Experience-Driven NetOps. You need to extend visibility beyond the edge of the enterprise network to internet monitoring and bring modern capabilities like end-user experience monitoring, active testing of network delivery, and network path tracing into the network operations center. Only with such capabilities can organizations ensure networks are experience-proven and network operations teams are experience-driven. As a result, those teams gain credibility and build confidence among business users while delivering hybrid working and cloud transformations.

Take the real-world example of a major oil and gas services company. Most employees were set to work from home at the outset of the pandemic, and the organization needed to scale up its WAN infrastructure from 10,000 to 60,000 users in just a few weeks. The challenge was to see into VPN gateways, ISP links, and internet router performance to manage this increase in use. By standardizing on a modern network monitoring platform, the company gained unified performance and capacity analytics that enabled the right upgrade decisions to support a sixfold increase in remote workers.

You can learn more about how to tackle the challenges of network visibility in this new eBook, Guide To Visibility Anywhere. Read now and discover how organizations can create network visibility anywhere.

Networking

What is a CAO?

A chief administrative officer (CAO) is a top-level executive responsible for overseeing the day-to-day operations of an organization and the company’s overall performance. CAOs are responsible for managing an organization’s finances as well as creating goals, policies, and procedures for the company to help it operate more efficiently and compliantly. They typically report directly to the CEO and act as a go-between for other senior-level management and the CEO.

CAOs often manage administrative staff and are also sometimes responsible for overseeing the accounting staff. These executives have a strong focus on policy, procedure, profits, and ensuring that all regulatory rules and regulations are followed. They work closely with departments and teams within the organization to ensure they’re operating effectively and to determine whether there is room for improvement. If a department is underperforming, a CAO can step in and identify what areas need to change or be improved to turn things around.

In addition to overseeing the daily operations of a company, CAOs also must have an eye on long-term strategic projects. That might include developing long-term budgets, developing and monitoring KPIs, training new managers, and keeping a pulse on changing regulatory and compliance rules.

Chief administrative officer responsibilities

The main responsibilities of a CAO are to ensure the company is operating efficiently daily, and to oversee relevant high-level management and other personnel. The CAO role can be found in several industries — most commonly in tech, finance, government, education, and healthcare. It’s a role that requires high-level decision-making, leadership skills, and strong communication skills. CAOs work closely with leaders across the organization and need to be able to communicate to the CEO how various departments are functioning within the company.

CAOs should have strong presentation skills and the ability to communicate complex business and financial information to other stakeholders in the company. It’s a role that requires an understanding of change management and an ability to juggle several complex projects at once. CAOs need a solid relationship built on trust with the CEO of the organization because they will work closely with them to improve business efficiency. 

The responsibilities of a CAO differ depending on industry, but general expectations for the role include:

- Setting, monitoring, and managing KPIs for departments and management staff
- Formulating strategic, operational, and budgetary plans
- Working closely with and training new managers in administrative roles
- Mentoring and coaching administrative staff within the organization
- Performing manager evaluations
- Working closely with the C-suite and board of directors
- Staying up to date on the latest changes to government rules and regulations related to administrative tasks, accounting, and financial reporting

Chief administrative officer skills

While skills differ by industry, CAOs are expected to have the following general skillset:

- Strategic planning
- Team leadership
- Legal compliance
- Financial reporting
- Regulatory compliance
- Budget management
- Strategic project management
- Risk management/risk control
- Ability to generate effective reports and give presentations
- Knowledge of IRS laws, Generally Accepted Accounting Principles (GAAP), Securities and Exchange Commission (SEC) rules and regulations, and internal audit procedures within the company

Chief administrative officer vs. COO

The role of CAO is very similar to that of a chief operating officer (COO), as both are responsible for overseeing the operations of a business. The COO role, however, is more commonly found in companies that manufacture physical products, whereas the CAO role is better suited to companies focused on offering services. It’s not uncommon for a company to have both roles, depending on business needs.

Another difference between a CAO and COO is that CAOs oversee day-to-day operations and identify opportunities to improve departments, teams, and management within the organization. If a department isn’t performing well, a CAO will often take over as acting head, working at the helm of the team or department to get a firsthand look at how it’s functioning and how it could be improved.

Chief operating officers, by contrast, are typically focused more on the overall operations of a business rather than the day-to-day operations of specific departments or teams. They’re responsible for overseeing projects such as choosing new technology upgrades, finding new plants for manufacturing, and overseeing physical supply chains.

At companies that have both a CAO and a COO, the two often work closely together to develop success metrics and goals for the company. Their roles are related enough that these two executives will have to strategize together when it comes to budgets or implementing regulatory and compliance rules. Both the CAO and COO have an eye on operations and efficiency, just in a different scope and area of the business.

Chief administrative officer salary

The average annual salary for a chief administrative officer is $122,748, according to data from PayScale. Reported salaries for the role ranged from $67,000 to $216,000, depending on experience, certifications, and location. Entry-level CAOs with less than one year of experience reported an average salary of $90,000, while those with one to four years’ experience reported an average annual salary of $93,174. Midlevel CAOs with five to nine years’ experience reported an average annual salary of $113,543, and experienced CAOs with 10 to 19 years’ experience reported an average annual salary of $133,343. Late-career CAOs with more than 20 years’ experience reported an average annual salary of $149,279.

IT Leadership