Nowadays, the world seems to experience once-in-a-century storms almost monthly. These cataclysmic weather events often cause extensive property damage, including major disruptions to the power grid that can cripple IT systems. More commonly, human error and power fluctuations can be just as costly and devastating to continued IT service delivery. To avoid costly outages and data loss, businesses must ensure continued operations with power protection delivered by a smart solution like Dell VxRail and the APC by Schneider Electric Smart UPS with PowerChute Network Shutdown software.

If the outage is prolonged, the Dell-APC solution enables remote shutdown to protect IT systems and ensure a non-disruptive restart.

When the power goes out, gracefully shutting down connected IT devices — like servers, storage devices, and hyper-converged infrastructure (HCI) — helps prevent further damage to those devices. It also prevents loss of business data and damage to enterprise workloads and helps ensure a smoother process for restarting and getting the business back up and running.

Why is this so important? Because the cost of downtime can be catastrophic. Estimates of IT service downtime costs range from $80,000 an hour on the lower end of the scale to $5 million an hour for larger enterprises. And that doesn’t account for damage to business reputation — whether a retailer loses its POS systems, or a larger organization loses its online customer service and sales systems.

Dell Technologies VxRail

With so much at stake, a UPS with remote management capabilities is critical to protect the HCI system and the workloads it supports. HCI systems, like Dell VxRail, have become the backbone for data centers and larger organizations. HCI has historically been used to support specific workloads like virtual desktops (VDI). However, it has emerged as a workhorse for running mission-critical workloads that require elevated levels of performance and availability. Enterprises should consider deploying an intelligent UPS like the Dell-APC PowerChute solution to protect those mission-critical workloads running on HCI.

While HCI is also well-suited for supporting multiple sites, losing power at remote sites can still cause system damage and data corruption. To prevent this type of damage, organizations must install a UPS at every HCI installation. Ideally, the UPS will keep systems operating throughout an outage. However, if an outage lasts too long, businesses must have a process in place to ensure an automated graceful shutdown, followed by a sequenced infrastructure restart. 

To gracefully shut down the HCI, the UPS must be able to communicate over a distributed network. Then it has to initiate a step-by-step restart sequence to ensure hardware and data protection. The automated restart should begin once power is restored. This automated remedy for power interruption can save time and money — and, ultimately, minimize downtime.
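The shutdown-then-restart sequencing described above can be sketched in a few lines. This is a minimal illustration of the ordering logic only; the tier names are hypothetical, and in practice software such as PowerChute Network Shutdown orchestrates this against the real infrastructure.

```python
# Sketch of sequenced shutdown and restart around a power outage.
# Tier names are hypothetical stand-ins for real infrastructure layers.

SHUTDOWN_ORDER = ["virtual-machines", "cluster-services", "storage", "hosts"]

def shutdown_sequence(tiers=SHUTDOWN_ORDER):
    """Shut tiers down gracefully, least critical first."""
    return [f"stop:{t}" for t in tiers]

def restart_sequence(tiers=SHUTDOWN_ORDER):
    """Restart in the reverse order once utility power is restored."""
    return [f"start:{t}" for t in reversed(tiers)]
```

The key property is simply that the restart is the mirror image of the shutdown, so dependencies (storage before VMs, for example) come back before the layers that need them.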

Integrated systems like Dell VxRail HCI and the APC by Schneider Electric Smart UPS with PowerChute Network Shutdown software can help businesses simplify and automate the process during catastrophic power outages and ensure business continuity by enabling graceful shutdown and the ability to simply move virtual machines to another system. This level of network protection acts as insurance against catastrophic downtime that could otherwise lead to the loss of all IT services.  

To learn more about how integrated IT solutions like Dell VxRail and the APC by Schneider Electric Smart UPS with PowerChute Network Shutdown software protect business data assets and ensure business continuity, please visit us here.



Conversational AI is changing the way we do business.

In 2018, IBM boldly declared that chatbots could now handle 80% of routine customer inquiries. That report even forecasted that bots would have a 90% success rate in their interactions by 2022.[1] As we survey the landscape of businesses using conversational AI, it appears to be playing out that way.

Not many customers are thrilled with these developments, however. According to recent research by UJET, 80% of customers who interacted with bots reported that it increased their frustration levels. Seventy-two percent even called it a “waste of time.”[2]

While it’s true that chatbots and conversational IVR systems have made significant strides in their ability to deliver quality service, they still come with serious limitations. Most notably, they tend to take on the biases of their human designers — sometimes even amplifying them. If contact center leaders want to rely heavily on this technology, they can’t ignore this issue.

What is chatbot and conversational AI bias?

At first glance, the idea of a computer holding biases may seem paradoxical. It’s a machine, you might say, so how can it have an attitude or disposition for or against something?

Remember, though, that artificial intelligence is created by humans. As such, its programming reflects its human creators — including any of their biases. In many cases, those biases may even be amplified because they become deeply encoded in the AI.

There have been a few extreme (and well-known) examples of this. Microsoft’s chatbot, Tay, was shut down after only 24 hours when it started tweeting hateful, racist comments. Facebook’s chatbot, Blender, similarly learned vulgar and offensive language from Reddit data.

As disturbing and important as those extreme examples are, they overshadow the more pervasive and persistent problem of chatbot bias. For instance, the natural language processing (NLP) engine that drives conversational AI often does quite poorly at recognizing linguistic variation.[3] This regularly results in bots not recognizing regional dialects or not considering the vernacular of all the cultural and ethnic groups that will use chatbots.

More subtle is the tendency of chatbots and other forms of conversational AI to take on female characteristics, reinforcing stereotypes about women and their role in a service economy.[4] In both cases, it’s clear that these bots are mirroring biases present in their human authors. The question is: what can be done about it — especially at the contact center level?

Confronting the problem

Many of the solutions for chatbot bias lie in the hands of developers and the processes they use to build their chatbots. Most importantly, development teams need a diverse set of viewpoints at the table to ensure those views are represented in the technology.

It’s also crucial to acknowledge the limitations of conversational AI and build solutions with those limitations in mind. For instance, chatbots tend to perform better when their sets of tasks aren’t so broad as to introduce too many variables. When a bot has a specific job, it can more narrowly focus its parameters for a certain audience without risking bias.

Developers don’t operate in a vacuum, though, and it’s critical to consider the end user’s perspective when designing and evaluating chatbots. Customer feedback is an essential component of developing and redesigning chatbots to better eliminate bias.

An effective approach for fine-tuning chatbot algorithms involves all the above — and more. To accelerate the process and dig deeper, you need to harness the power of AI not only for building chatbots but for testing them.

Digging deeper to uproot bias

These aren’t the only ways to teach bots to do better, though. One of the most effective options is to let AI do the work for you. In other words, instead of only waiting for diverse perspectives from your development team or customers, why not proactively uproot bias by throwing diverse scenarios at your bots?

An effective conversational AI testing solution should be able to perform a range of tests that help expose bias. For instance, AI allows you to add “noise” to tests you run for your conversational IVR. This noise can be literal, but it can also include bias-oriented changes such as introducing the IVR to different accents, genders, or linguistic variations to see if it responds appropriately.

On the chatbot side, AI enables you to test your bots with a wide array of alternatives and variations in phrasing and responses. Consider the possibilities, for instance, if you could immediately generate a long list of potential options for how someone might phrase a request. These might include the simple rephrasing of a question or paraphrased versions of a longer inquiry. Armed with these alternatives, you could then test your bot against the ones with the most potential for a biased reaction.
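As a minimal sketch of this idea, the toy bot below (a stand-in for a real chatbot API) answers a question correctly in standard phrasing but fails on an informal variant of the same request, and a variant-based test catches the inconsistency. The templates and keyword logic are illustrative assumptions, not any vendor's implementation.

```python
def toy_bot(utterance: str) -> str:
    """Hypothetical keyword bot; brittle on informal register by design."""
    text = utterance.lower()
    if "wanna" in text:  # informal phrasing trips the bot -- a register bias
        return "Sorry, I didn't understand."
    if "balance" in text:
        return "Your balance is available in the app."
    return "Sorry, I didn't understand."

def variants(base: str) -> list:
    """Rephrase one request several ways (a real tool would use a paraphrase model)."""
    templates = ["{q}", "could you tell me {q}", "i wanna know {q}", "{q} please"]
    return [t.format(q=base) for t in templates]

def consistent(bot, base: str) -> bool:
    """True if the bot gives the same answer regardless of phrasing."""
    return len({bot(v) for v in variants(base)}) == 1
```

Running `consistent(toy_bot, "what is my balance")` returns `False`, exposing that one legitimate phrasing gets a worse answer than the others.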

Testing can take you even further in your quest to mitigate bias. Training data is one of the most critical components for teaching your bot to respond appropriately, and you can use NLP testing to analyze the training data you’re using and determine whether it’s instilling bias in your chatbots. You can even use AI-powered test features to expand the available set of test data to bring more diverse conversational angles to the table. In effect, this allows you to diversify your bot’s perspective even if your development team isn’t yet as diverse as it could be.
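A crude version of that training-data analysis can be sketched as a token count over the utterances you plan to train on; the term lists and threshold below are illustrative assumptions, not a vetted bias lexicon.

```python
from collections import Counter

# Illustrative term lists only -- a real audit would use a curated lexicon.
FEMALE = {"she", "her", "woman", "women"}
MALE = {"he", "him", "man", "men"}

def gender_ratio(utterances):
    """Share of gendered tokens in a training set, by category."""
    counts = Counter()
    for u in utterances:
        for tok in u.lower().split():
            if tok in FEMALE:
                counts["female"] += 1
            elif tok in MALE:
                counts["male"] += 1
    total = sum(counts.values()) or 1
    return {k: counts[k] / total for k in ("female", "male")}

def flag_imbalance(utterances, threshold=0.7):
    """Flag the data set if one category dominates past the threshold."""
    return max(gender_ratio(utterances).values()) > threshold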

AI-powered testing solutions are capable of these types of tests — and more. And, when you use AI, you rapidly accelerate your capacity for testing your conversational AI systems, whether for biases or many other issues.

You don’t have to wait until you’ve assembled the perfect team of developers or accumulated a diverse set of customer data to weed out bias in your chatbots and conversational IVR. Cyara Botium’s AI-powered testing features can help you get started right away. Take a look at our Building Better Chatbots eBook to learn more.

[1] IBM. “Digital customer care in the age of AI.”

[2] Forbes. “Chatbots And Automations Increase Customer Service Frustrations For Consumers At The Holidays.”

[3] Oxford Insights. “Racial Bias in Natural Language Processing.”

[4] UNESCO. “New UNESCO report on Artificial Intelligence and Gender Equality.”


The “endless aisle” concept isn’t new, but it’s definitely the future for many supply chain operators. This retail strategy enables customers at a physical store to virtually browse and order any products that are either out of stock or not sold in-store and have them shipped to the store or their home. A fulfillment center or another nearby retail location that has the item in stock fills their order.

Increasingly, consumers expect an endless aisle experience. The pandemic has accelerated the transition to digital shopping and fundamentally changed consumers’ purchasing mindset. Today’s consumers regularly buy everything from daily groceries to new cars online or through an app, and they expect fast delivery — even within an hour, in many cases. If the retailer they go to first can’t meet that expectation, the consumer can open any number of apps and purchase the same product from another retailer, either brick-and-mortar or online, and pick it up or have it delivered when they want it.

So, the pressure is on to create the endless aisle. However, supporting this strategy will require most supply chain operators to significantly modernize their operations, including implementing solutions powered by artificial intelligence (AI) and machine learning (ML). It also requires a mindset shift: thinking about technology not only as a tool to help lower supply chain costs, but also as the key to preventing missed sales opportunities, filling more orders faster, and increasing profitability.

Top challenges to building the endless aisle

1. Legacy limitations and lack of insight

Many companies, especially in the retail space, have already focused a lot of attention on creating the front-end experience for the endless aisle, giving their customers various digital options for ordering products from both in-store and online inventories. But it’s on the back end where most businesses fall short on delivering this experience: They can’t get the right products from here to there fast enough.

Several issues can hinder an organization’s ability to achieve a true endless aisle experience:

Outdated facilities, order management systems, and supply chain processes

Inflexible systems that prevent order fulfillment from multiple warehouse or retail locations

The lack of true, real-time visibility into inventory status (i.e., what is available, where it is located now, and where it needs to be)

The inability to project where the next order will most likely originate so that inventory can be staged at the closest location to fill that order at the lowest cost

AI and ML play a leading role in helping supply chain operators overcome these limitations and build a next-generation supply chain. Following is a closer look at how these advanced technologies can enable the endless aisle by helping organizations to develop intelligent warehousing and engage confidently in more predictive decision-making.

2. Creating smarter, more flexible warehouses

Historically, supply chain operators have had dedicated warehouses and distribution centers that serve specific customers or regions. That strategy creates complexities for companies in forecasting the type and amount of inventory needed at those facilities. The result is that companies can’t flex much or at all.

No organization can create smarter warehouses or a more agile, flexible supply chain without updating their back-end technology first. Most will also need to rethink their entire order management process — including whether there’s a different way to handle it rather than with their inflexible, traditional enterprise resource planning (ERP) system, which lets them map specific products only to specific locations and offers very little visibility.

If these organizations had intelligent warehousing systems within their supply chain, they could request and supply any inventory they have to any customer or geography at any time. They could also confidently enable the endless aisle concept while at the same time reducing shipping costs and delays.

To create intelligent warehousing and deliver the endless aisle, many organizations will need to wrap new technologies like AI and ML around their legacy ERP system to improve and extend its capabilities or even completely replace certain functions. Integrating their ERP system and warehouse management system will also be a critical measure to ensure efficiency and timeliness when the business eventually starts shipping inventory from more places to serve customers in any location.

3. Enabling more predictive, proactive decision-making

Predictive modeling, using both AI and ML, lets an organization know how much inventory to stock, and where to place the goods based on historical and current patterns and behaviors. This insight is a must for any supply chain operator that wants to stay ahead of trends, prepare for future sales, and accelerate order-to-fulfillment time.

ML is also an excellent tool for minimizing costs and lost revenue due to obsolescence, excess inventory, and stockouts. And AI tells the organization where future demand is likely to originate and suggests where future inventory should be placed as it arrives. AI also helps supply chain operators avoid costs from excess shipping charges, long transit times, and stockouts and obsolescence.
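As a toy illustration of this kind of prediction, the sketch below forecasts next-period demand per site from recent history with a simple moving average and picks the site where arriving inventory should be staged. Real systems use far richer ML models; the site names and numbers are hypothetical.

```python
def forecast(history, window=3):
    """Moving-average forecast of next-period demand from recent history."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def best_stocking_site(demand_history):
    """Stage inventory at the site with the highest forecast demand."""
    forecasts = {site: forecast(h) for site, h in demand_history.items()}
    return max(forecasts, key=forecasts.get)
```

For example, with history `{"east": [10, 12, 14], "west": [20, 18, 16]}` the forecasts are 12 and 18 units, so new stock would be staged at the western site even though its demand is trending down.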

These advanced technologies are also essential to providing real-time data insights that inform supply chain “digital twins” — logical views of the physical supply chain used for simulation modeling — that allow the business to understand, well in advance, what options it has to fulfill customer requirements when supply chain disruptions inevitably occur.

Many companies that have made significant progress on their journey toward building a next-generation supply chain are also using AI and ML to enhance their forecasting so they can address their “SKU problem.” They are better able to determine what inventory they need to have on hand instead of keeping two of everything on the shelf “just in case.” More organizations are also embracing AI and ML as force multipliers for their supply chain workforce; intelligent automation is helping them overcome current labor shortages while allowing their existing workers to be more productive.

There is no one-size-fits-all approach to modernizing the supply chain, creating intelligent warehousing, and laying the groundwork for the endless aisle. Each company’s journey will vary in scope and duration. Some organizations will choose to augment their existing infrastructure with more intelligent solutions, while others will go so far as to set up entirely new and separate supply chain operations. But the need for change is urgent, and those businesses that act now, regardless of any further disruption or uncertainty that may be on the horizon, are the ones that will emerge as tomorrow’s supply chain leaders.

Learn more about Protiviti’s Emerging Technology Solutions and Supply Chain Services.

Connect with the authors:

John Weber

Director – Supply Chain, Protiviti

Geoff Weathersby

Director – IoT and Emerging Technology, Protiviti


Gazing up at a clear night sky lately and identifying different star constellations (these days with the support of a mobile app, of course!), I was reminded that everything is related to and interconnected with everything else. Our star, together with the planets and asteroids, forms the solar system we live in, which belongs to a galaxy, which in turn is part of the universe as we know it today.

Although some still perceive it as straightforward, the business world is dynamic, interconnected, ambiguous, and unpredictable. Such an interconnected constellation underlies a broad range of endeavours, from strategy development, buying decisions, digital innovation, and transformation realization to system modernization itself.

To see the dynamics between different components, “systems thinking” can help. Once your organization thinks in systems, it can better understand root challenges, implications from one component to another, and even innovate more with effective disruption to gain new revenues, reduce costs, or mitigate risks more effectively.

The definition of systems thinking

What is systems thinking? First, let me outline what a system is before illustrating how your organization can propel its transformation toward a digital-first and sustainable-first enterprise with systems thinking. A system is a set or structure of things, activities, ideas, and information that interrelate and interact with each other. Systems consequently alter other systems, because every part itself forms a (sub)system consisting of further parts. Even businesses and humans themselves are systems! With that laid as a foundation, let’s move beyond academia and make this more pragmatic:

Look at the transformation as a system and simplify

Digital transformation and sustainable transformation, or any other considerable change in an organization in response to evolving market and customer needs, constitute a system: there is a continuous effort of diverse stakeholders with initiatives, activities, investments, and ideas that leverage digital concepts and technologies to achieve desired outcomes, such as increased operational efficiency or faster innovation with new digital products.

In this structure, different stakeholders and teams drive distinct agendas as their contribution to the gears of the overall transformation engine, spanning experience, intelligence, platform, and other agendas. Commonly, these agendas target distinct objectives such as productivity, agility, or efficiency, and they are interrelated.

Acknowledging this, simplification is imperative to make these interrelations and activities visible and to articulate and communicate the complex system that is the organization’s current journey. A model like the HPE Digital Journey Map offers a simplified representation of the digital transformation system, intended to promote understanding of the real system and to seek answers to specific questions about it.


Embed system thinking in your ambition & strategy

In an era in which computing and connectivity are ubiquitous, servitization increasingly becomes relevant as a guiding principle for keeping the transformation journey on track. The capabilities of ever more smart, connected, and service-enriched products are evolving significantly (the forecast for the end of 2022 is around 29 billion connected devices), and their traditional industry boundaries blur and shift.

A famous example given by the renowned economist Michael Porter depicts a tractor company that evolves from smart, connected tractors into farm equipment offerings and eventually into farm management systems. Spotted the keyword? The evolution occurs seemingly naturally, from discrete products and their intelligent enhancements into so-called product systems; in the tractor example, closely related products and adjacent services are integrated. Eventually, multiple product systems can be combined and triangulated with further external data, e.g., soil or weather data, into powerful systems of systems: entire farm management systems.


Hence, ingraining systems thinking into your organization’s ambition, and consequently into its transformation strategy, will put your organization in a leading position to redefine market boundaries and drive disproportionately positive value for your customers, ecosystem, and certainly your own business. Embedding systems theory at the core of your game plan influences environmental factors, competitive advantages through differentiation, and co-creation and co-production components. Beyond new offerings, this also applies to purchase decisions: rather than being made in a vacuum, buying decisions take place related to and dependent on other (business) needs.

Recognize different systems in modernizing effectively

From understanding the phenomenon of transformation to the strategic perspective of an organization’s ambition and plan of action, let’s cascade further into the actual application of digital technologies and IT modernization. For a CIO or CTO principally responsible for the platform agenda, a core driver is modernizing the IT landscape, including platforms and applications, for increased agility and optimized costs in responding to the business. In particular, the organization’s use of its applications will expose different paces and requirements for the various modernization options, including re-platforming, re-hosting, re-engineering, and other modernization outcomes.


The varying rates of change and adoption, and the implications for governance, operations, and data within application landscapes, can be distinguished between systems of record, systems of differentiation, and systems of innovation (a notion coined as the PACE-Layered Application Strategy by Gartner) according to their primary purpose. These layers reflect the characteristics of different software modules in terms of their use and data lifecycle, from new business models (innovation) to best of breed (differentiation) to core transaction processing (record), recognizing the interrelations with their users, information flows, and funding. Retaining further depth for a different article, the essential point is that this approach, by incorporating the concept of systems, can make data-first modernization easier to navigate.

Leveraging deep technological and methodological expertise as well as the HPE Digital Journey Map, Digital Advisors from HPE can help you explore the system of transformations in the digital era, with new value propositions, leading use cases, and successful modernization patterns to propel your efforts and activities. Reach out to an advisor like me to start our conversation today.


About Ian Jagger

Jagger is a content creator and narrator focused on digital transformation, linking technology capabilities expertise with business goals. He holds an MBA in marketing and is a Chartered Marketer. Today, he focuses on digital transformation narrative globally for HPE’s Advisory and Transformation Practice. His experience spans strategic development and planning for Start-ups through to content creation, thought leadership, AR/PR, campaign program building, and implementation for Enterprise. Successful solution launches include HPE Digital Next Advisory, HPE Right Mix Advisor, and HPE Micro Datacenter.


Decision support systems definition

A decision support system (DSS) is an interactive information system that analyzes large volumes of data to inform business decisions. A DSS supports the management, operations, and planning levels of an organization in making better decisions by assessing the significance of uncertainties and the tradeoffs involved in making one decision over another.

A DSS leverages a combination of raw data, documents, personal knowledge, and/or business models to help users make decisions. The data sources used by a DSS could include relational data sources, cubes, data warehouses, electronic health records (EHRs), revenue projections, sales projections, and more.

The concept of DSS grew out of research conducted at the Carnegie Institute of Technology in the 1950s and 1960s, but really took root in the enterprise in the 1980s in the form of executive information systems (EIS), group decision support systems (GDSS), and organizational decision support systems (ODSS). With organizations increasingly focused on data-driven decision making, decision science (or decision intelligence) is on the rise, and decision scientists may be the key to unlocking the potential of decision science systems. Bringing together applied data science, social science, and managerial science, decision science focuses on selecting between options to reduce the effort required to make higher-quality decisions.

Decision support system examples

Decision support systems are used in a broad array of industries. Example uses include:

GPS route planning. A DSS can be used to plan the fastest and best routes between two points by analyzing the available options. These systems often include the capability to monitor traffic in real-time to route around congestion.

Crop planning. Farmers use DSS to help them determine the best time to plant, fertilize, and reap their crops. Bayer Crop Science has applied analytics and decision-support to every element of its business, including the creation of “virtual factories” to perform “what-if” analyses at its corn manufacturing sites.

Clinical DSS. These systems help clinicians diagnose their patients. Penn Medicine has created a clinical DSS that helps it get ICU patients off ventilators faster.

ERP dashboards. These systems help managers monitor performance indicators. Digital marketing and services firm Clearlink uses a DSS system to help its managers pinpoint which agents need extra help.
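The GPS route-planning example is, at its core, the classic shortest-path problem. A minimal Dijkstra sketch over a toy road graph (edge weights are hypothetical travel minutes, not real map data):

```python
import heapq

def shortest_time(graph, start, goal):
    """Dijkstra's algorithm: minimum travel time from start to goal."""
    dist = {start: 0}
    pq = [(0, start)]  # priority queue of (time so far, node)
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")  # goal unreachable
```

With `roads = {"A": [("B", 5), ("C", 2)], "C": [("B", 1)], "B": []}`, the direct A-to-B road takes 5 minutes, but the detour through C takes 3, so `shortest_time(roads, "A", "B")` returns 3. Real-time traffic monitoring amounts to updating the edge weights and re-running the search.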

Decision support systems vs. business intelligence

DSS and business intelligence (BI) are often conflated. Some experts consider BI a successor to DSS. Decision support systems are generally recognized as one element of business intelligence systems, along with data warehousing and data mining.

Whereas BI is a broad category of applications, services, and technologies for gathering, storing, analyzing, and accessing data for decision-making, DSS applications tend to be more purpose-built for supporting specific decisions. For example, a business DSS might help a company project its revenue over a set period by analyzing past product sales data and current variables. Healthcare providers use clinical decision support systems to make the clinical workflow more efficient: computerized alerts and reminders to care providers, clinical guidelines, condition-specific order sets, and so on.

DSS vs. decision intelligence

Research firm Gartner declared decision intelligence a top strategic technology trend for 2022. Decision intelligence seeks to update and reinvent decision support systems with a sophisticated mix of tools including artificial intelligence (AI) and machine learning (ML) to help automate decision-making. According to Gartner, the goal is to design, model, align, execute, monitor, and tune decision models and processes.

Types of decision support system

In the book Decision Support Systems: Concepts and Resources for Managers, Daniel J. Power, professor of management information systems at the University of Northern Iowa, breaks down decision support systems into five categories based on their primary sources of information.

Data-driven DSS. These systems include file drawer and management reporting systems, executive information systems, and geographic information systems (GIS). They emphasize access to and manipulation of large databases of structured data, often a time-series of internal company data and sometimes external data.

Model-driven DSS. These DSS include systems that use accounting and financial models, representational models, and optimization models. They emphasize access to and manipulation of a model. They generally leverage simple statistical and analytical tools, but Power notes that some OLAP systems that allow complex analysis of data may be classified as hybrid DSS systems. Model-driven DSS use data and parameters provided by decision-makers, but Power notes they are usually not data-intensive.

Knowledge-driven DSS. These systems suggest or recommend actions to managers. Sometimes called advisory systems, consultation systems, or suggestion systems, they provide specialized problem-solving expertise based on a particular domain. They are typically used for tasks including classification, configuration, diagnosis, interpretation, planning, and prediction that would otherwise depend on a human expert. These systems are often paired with data mining to sift through databases to produce data content relationships.

Document-driven DSS. These systems integrate storage and processing technologies for document retrieval and analysis. A search engine is an example.

Communication-driven and group DSS. Communication-driven DSS focuses on communication, collaboration, and coordination to help people working on a shared task, while group DSS (GDSS) focuses on supporting groups of decision makers to analyze problem situations and perform group decision-making tasks.

Components of a decision support system

According to Management Study HQ, decision support systems consist of three key components: the database, software system, and user interface.

DSS database. The database draws on a variety of sources, including data internal to the organization, data generated by applications, and external data purchased from third parties or mined from the Internet. The size of the DSS database will vary based on need, from a small, standalone system to a large data warehouse.

DSS software system. The software system is built on a model (including decision context and user criteria). The number and types of models depend on the purpose of the DSS. Commonly used models include:
Statistical models. These models are used to establish relationships between events and factors related to that event. For example, they could be used to analyze sales in relation to location or weather.
Sensitivity analysis models. These models are used for “what-if” analysis.
Optimization analysis models. These models are used to find the optimum value for a target variable in relation to other variables.
Forecasting models. These include regression models, time series analysis, and other models used to analyze business conditions and make plans.
Backward analysis sensitivity models. Sometimes called goal-seeking analysis, these models set a target value for a particular variable and then determine the values other variables need to hit to meet that target value.
DSS user interface. Dashboards and other user interfaces that allow users to interact with and view results.
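As an illustration of the model types above, here is a minimal Python sketch of a goal-seeking (backward sensitivity) analysis; the profit model and every figure in it are hypothetical, and a real DSS would use far richer models:

```python
def profit(units, price=25.0, unit_cost=14.0, fixed_cost=80_000.0):
    """Hypothetical profit model: revenue minus variable and fixed costs."""
    return units * (price - unit_cost) - fixed_cost

def goal_seek(target_profit, lo=0.0, hi=1e7, tol=1e-6):
    """Bisect on unit sales until the modeled profit hits the target value."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if profit(mid) < target_profit:
            lo = mid          # too few units: search the upper half
        else:
            hi = mid          # enough units: search the lower half
    return (lo + hi) / 2

# How many units must be sold to reach a $30,000 profit?
units_needed = goal_seek(target_profit=30_000.0)
print(round(units_needed))  # -> 10000
```

Swapping the direction of the question (fixing the inputs and varying the target) turns the same model into a "what-if" sensitivity analysis.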

Decision support system software

According to Capterra, popular decision support system software includes:

Checkbox. This no-code service automation software for enterprises uses a drag-and-drop interface for building applications with customizable rules, decision-tree logic, calculations, and weighted scores.
Yonyx. Yonyx is a platform for creating DSS applications. It supports creating and visualizing decision tree–driven customer interaction flows, with a particular focus on decision trees for call centers, customer self-service, CRM integration, and enterprise data.
Parmenides Eidos. Geared for midsize and large companies, Parmenides Eidos provides visual reasoning and knowledge representation to support scenario-based strategizing, problem solving, and decision-making.
XLSTAT. XLSTAT is an Excel data analysis add-on geared for corporate users and researchers. It boasts more than 250 statistical features, including data visualization, statistical modeling, data mining, statistical tests, forecasting methods, machine learning, conjoint analysis, and more.
1000minds. 1000minds is an online suite of tools and processes for decision-making, prioritization, and conjoint analysis. It is derived from research at the University of Otago in the 1990s into methods for prioritizing patients for surgery.
Information Builders WebFOCUS. This data and analytics platform is geared for enterprise and midmarket companies that need to integrate and embed data across applications. It offers cloud, multicloud, on-prem, and hybrid options.
QlikView. QlikView is Qlik’s classic analytics solution, built on the company’s Associative Engine. It’s designed to help users with their day-to-day tasks using a configurable dashboard.
SAP BusinessObjects. BusinessObjects consists of reporting and analysis applications to help users understand trends and root causes.
TIBCO Spotfire. This data visualization and analytics software helps users create dashboards and power predictive and real-time analytics applications.
Briq. Briq is a predictive analytics and automation platform built specifically for general contractors and subcontractors in construction. It leverages data from accounting, project management, CRM, and other systems to power AI for predictive and prescriptive analytics.

ERP definition

Enterprise resource planning (ERP) is a system of integrated software applications that manages day-to-day business processes and operations across finance, human resources, procurement, distribution, supply chain, and other functions. ERP systems are critical applications for most organizations because they integrate all the processes necessary to run their business into a single system that also facilitates resource planning. ERP systems typically operate on an integrated software platform using common data definitions operating on a single database.

ERPs were originally designed for manufacturing companies but have since expanded to serve nearly every industry, each of which can have its own ERP peculiarities and offerings. For example, government ERP uses contract lifecycle management (CLM) rather than traditional purchasing and follows government accounting rules rather than GAAP.

Benefits of ERP

ERP systems improve enterprise operations in a number of ways. By integrating financial information in a single system, ERP systems unify an organization’s financial reporting. They also integrate order management, making order taking, manufacturing, inventory, accounting, and distribution a much simpler, less error-prone process. Most ERPs also include customer relationship management (CRM) tools to track customer interactions, thereby providing deeper insights about customer behavior and needs. They can also standardize and automate manufacturing and supporting processes, and unify procurement across an organization’s business units. ERP systems can also provide a standardized HR platform for time reporting, expense tracking, training, and skills matching, and greatly enhance an organization’s ability to file the necessary compliance reporting across finance, HR, and the supply chain.

Key features of ERP systems

The scale, scope, and functionality of ERP systems vary widely, but most ERP systems offer the following characteristics:

Enterprise-wide integration. Business processes are integrated end to end across departments and business units. For example, a new order automatically initiates a credit check, queries product availability, and updates the distribution schedule. Once the order is shipped, the invoice is sent.
Real-time (or near real-time) operations. Because the processes in the example above occur within a few seconds of order receipt, problems are identified quickly, giving the seller more time to correct the situation.
A common database. A common database enables data to be defined once for the enterprise with every department using the same definition. Some ERP systems split the physical database to improve performance.
Consistent look and feel. ERP systems provide a consistent user interface, thereby reducing training costs. When other software is acquired by an ERP vendor, common look and feel is sometimes abandoned in favor of speed to market. As new releases enter the market, most ERP vendors restore the consistent user interface.
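The order-to-invoice example above can be sketched as a pipeline of steps operating on a single shared record; all function and field names below are hypothetical, not taken from any particular ERP:

```python
# Hypothetical sketch of enterprise-wide integration: one new order triggers
# a credit check, an availability query, and a distribution-schedule update.

def credit_check(order):
    order["credit_ok"] = order["amount"] <= order["credit_limit"]
    return order

def check_availability(order):
    stock = {"SKU-1": 120}                       # shared inventory data
    order["in_stock"] = stock.get(order["sku"], 0) >= order["qty"]
    return order

def schedule_distribution(order):
    order["status"] = ("scheduled"
                       if order["credit_ok"] and order["in_stock"]
                       else "on_hold")
    return order

def process_order(order):
    # Every step reads and writes the same record in the common database,
    # so each department sees one consistent definition of the order.
    for step in (credit_check, check_availability, schedule_distribution):
        order = step(order)
    return order

order = {"sku": "SKU-1", "qty": 10, "amount": 900.0, "credit_limit": 5000.0}
print(process_order(order)["status"])  # -> scheduled
```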

Types of ERP solutions

ERP systems are categorized in tiers based on the size and complexity of enterprises served:

Tier I ERPs support large global enterprises, handling all internationalization issues, including currency, language, alphabet, postal code, and accounting rules. Tier I vendors include Oracle, SAP, Microsoft, and Infor.
Tier I Government ERPs support large, mostly federal, government agencies. Oracle, SAP, and CompuServe PRISM are considered Tier I, with Infor and CGI Momentum close behind.
Tier II ERPs support large enterprises that may operate in multiple countries but lack global reach. Tier II customers can be standalone entities or business units of large global enterprises. Depending on how vendors are categorized, there are 25 to 45 vendors in this tier.
Tier II Government ERPs focus on state and local governments with some federal installations. Tyler Technologies and UNIT4 fall in this category.
Tier III ERPs support midtier enterprises, handling a handful of languages and currencies but only a single alphabet. Depending on how ERPs are categorized, there are 75 to 100 Tier III ERP solutions.
Tier IV ERPs are designed for small enterprises and often focus on accounting.

ERP vendors

The top ERP vendors today include:


Selecting an ERP solution

Choosing an ERP system is among the most challenging decisions IT leaders face. In addition to the above tier criteria, there is a wide range of features and capabilities to consider. Whatever the industry, it is important to pick an ERP vendor with experience in it, as educating a vendor about the nuances of a new industry is very time consuming.

To help you get a sense of the kinds of decisions that go into choosing an ERP system, check out “The best ERP systems: 10 enterprise resource planning tools compared,” with evaluations and user reviews of Acumatica Cloud ERP, Deltek ERP, Epicor ERP, Infor ERP, Microsoft Dynamics ERP, NetSuite ERP, Oracle E-Business Suite, Oracle JD Edwards EnterpriseOne ERP, Oracle PeopleSoft Financial Management, and SAP ERP Solutions.

ERP implementation

Most successful ERP implementations are led by an executive sponsor who champions the business case, gets approval to proceed, monitors progress, chairs the steering committee, removes roadblocks, and captures the benefits. The CIO works closely with the executive sponsor to ensure adequate attention is paid to integration with existing systems, data migration, and infrastructure upgrades. The CIO also advises the executive sponsor on challenges and helps the executive sponsor select a firm specializing in ERP implementations.

The executive sponsor should also be advised by an organizational change management executive, as ERP implementations result in new business processes, roles, user interfaces, and job responsibilities. Reporting to the program’s executive team should be a business project manager and an IT project manager. If the enterprise has engaged an ERP integration firm, its project managers should be part of the core program management team.

Most ERP practitioners structure their ERP implementation as follows:

Gain approval: The executive sponsor oversees the creation of any documentation required for approval. This document, usually called a business case, typically includes a description of the program’s objectives and scope, implementation costs and schedule, development and operational risks, and projected benefits. The executive sponsor then presents the business case to the appropriate executives for formal approval.
Plan the program: The timeline is now refined into a work plan, which should include finalizing team members; selecting any external partners (implementation specialists, organizational change management specialists, technical specialists); finalizing contracts; planning infrastructure upgrades; and documenting tasks, dependencies, resources, and timing with as much specificity as possible.
Configure software: This largest, most difficult phase includes analyzing gaps in current business processes and supporting applications, configuring parameters in the ERP software to reflect new business processes, completing any necessary customization, migrating data using standardized data definitions, performing system tests, and providing all functional and technical documentation.
Deploy the system: Prior to the final cutover, multiple activities have to be completed, including training staff on the system, planning support to answer questions and resolve problems after the ERP is operational, testing the system, and making the “go live” decision in conjunction with the executive sponsor.
Stabilize the system: Following deployment, most organizations experience a dip in business performance as staff learn new roles, tools, business processes, and metrics. In addition, poorly cleansed data and infrastructure bottlenecks will cause disruption. All impose a workload bubble on the ERP deployment and support team.

Hidden costs of ERP

Four factors are commonly underestimated during project planning:

Business process change. Once teams see the results of their improvements, most feel empowered and seek additional improvements. Success breeds success, often consuming more time than originally budgeted.
Organizational change management. Change creates uncertainty at all organization levels. With many executives unfamiliar with the nuances of organizational change management, the effort is easily underestimated.
Data migration. Enterprises often have overlapping databases and weak editing rules. The tighter editing required with an ERP system increases data migration time. This required time is easy to underestimate, particularly if all data sources cannot be identified.
Custom code. Customization increases implementation cost significantly and should be avoided. It also voids the warranty: problems reported to the vendor must be reproduced on unmodified software. It also makes upgrades difficult. Finally, most enterprises underestimate the cost of customizing their systems.

Why ERP projects fail

ERP projects fail for many of the same reasons that other projects fail, including ineffective executive sponsors, poorly defined program goals, weak project management, inadequate resources, and poor data cleanup. But there are several causes of failure that are closely tied to ERPs:

Inappropriate package selection. Many enterprises believe a Tier I ERP is by definition “best” for every enterprise. In reality, only very large, global enterprises will ever use more than a small percentage of its functionality. Enterprises that are not complex enough to justify Tier I may find implementation delayed by feature overload. Conversely, large global enterprises may find that Tier II or Tier III ERPs lack sufficient features for complex, global operations.
Internal resistance. While any new program can generate resistance, this is more common with ERPs. Remote business units frequently view the standardization imposed by an ERP as an effort by headquarters to increase control over the field. Even with an active change management campaign, it is not uncommon to find people in the field slowing implementation as much as possible. Even groups who support the ERP can become disenchanted if the implementation team provides poor support. Disenchanted supporters can become vicious critics when they feel they have been taken for granted and not offered appropriate support.

Cloud ERP

Over the past few years, ERP vendors have created new systems designed specifically for the cloud, while longtime ERP vendors have created cloud versions of their software. There are a number of reasons to move to cloud ERP, which falls into two major types:

ERP as a service. With these ERPs, all customers operate on the same code base and have no access to the source code. Users can configure but not customize the code.
ERP in an IaaS cloud. Enterprises that rely on custom code in their ERP cannot use ERP as a service. If they wish to operate in the cloud, the only option is to move to an IaaS provider, which shifts their servers to a different location.

For most enterprises, ERP as a service offers three advantages: The initial cost is lower, upgrades to new releases are easier, and reluctant executives cannot pressure the organization to write custom code for them. Still, migrating to a cloud ERP can be tricky and requires a somewhat different approach than implementing an on-premises solution. See “13 secrets of a successful cloud ERP migration.”


Australian privately-owned footwear company Munro Footwear Group (MFG) was facing a “ticking time bomb” in late 2019, as its core ERP would no longer be supported by the vendor by the end of 2020.

MFG has 2,000 employees and around 290 stores, and its brands include Midas, Colorado, and Diana Ferrari. Following multiple acquisitions, the group was running multiple systems, forcing the IT team to deal with “two sources of truth”.

In 2020, MFG decided to merge its two ERP systems and simplify down to one. Retiring the old ERP and migrating it onto the new one took 10 months.

How MFG connected multiple systems

Ahead of the ERP project, in late 2019 MFG chose Boomi AtomSphere Platform to connect all its different systems. The low-code, cloud-native integration platform as a service (iPaaS) is designed to help organisations unify information by connecting applications, data, and people wherever they’re located.

This came in handy in mid-2020, when the coronavirus pandemic forced the closure of MFG’s physical stores; the iPaaS was then used to integrate the stores’ systems with MFG’s ecommerce.

The implementation process was rapid, partly due to the collection of training resources in Boomi’s library. “We were off and running in terms of developing live solutions within four to six weeks,” CTO Keng Ng tells CIO Australia.

One of the challenges with this kind of project is that some systems are being connected to the new central system, while others are being disconnected and discarded. Before, every system was almost hardwired or welded onto other systems, according to Ng, which meant that making changes to one system without impacting another was often difficult. After the integration, this part of the problem was solved.

Ng refers to AtomSphere as a Swiss army knife when explaining how much easier it has become for MFG to run all the technology in the background.

During COVID-19, when the company faced massive supply chain disruptions and ships were detoured to different ports or didn’t arrive at all, instead of cancelling orders and re-importing them with the new addresses, MFG stopped the carrier’s messages from reaching its system, changed the address in the iPaaS, and then let the orders flow through.

“All the other systems around didn’t know anything was happening. So, we were able to manage unexpected business issues by making changes to the data as it happened,” he says.

Instead of doing any major changes to connect systems, all MFG had to do when connecting a new system was to pull a library in and connect it. “It has a huge library of different technologies and connectors so that it can connect something from the 1990s to something in the cloud that is modern and contemporary,” Ng says.

“The [only] limitation is probably — if there’s any — on the system side, where some vendor might go: ‘I don’t want you to connect to us.’ But that’s more of a policy discussion and commercial discussion with those vendors,” Ng says.
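The connector-library idea Ng describes can be sketched, in heavily simplified form, as per-system adapters that translate to and from a shared canonical format, so a 1990s flat format and a modern cloud system can exchange records; everything below, including the formats, is hypothetical:

```python
# Hypothetical sketch of the iPaaS connector pattern: each system registers
# a small adapter, and the integration layer routes records between any two.

class Connector:
    def __init__(self, name, to_canonical, from_canonical):
        self.name = name
        self.to_canonical = to_canonical      # system format -> shared format
        self.from_canonical = from_canonical  # shared format -> system format

registry = {}

def register(connector):
    registry[connector.name] = connector

def transfer(record, source, target):
    """Translate a record from one system's format into another's."""
    canonical = registry[source].to_canonical(record)
    return registry[target].from_canonical(canonical)

# A legacy pipe-delimited format and a modern dict-based format.
register(Connector("legacy_pos",
                   to_canonical=lambda r: dict(zip(("sku", "qty"), r.split("|"))),
                   from_canonical=lambda c: "|".join([c["sku"], c["qty"]])))
register(Connector("cloud_erp",
                   to_canonical=lambda r: r,
                   from_canonical=lambda c: c))

print(transfer("BOOT-42|3", "legacy_pos", "cloud_erp"))  # -> {'sku': 'BOOT-42', 'qty': '3'}
```

Because each system only ever talks to the canonical format, connecting a new system means adding one adapter rather than rewiring every existing connection.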

The challenges of merging data from two ERPs

This was followed by the integration of the two ERPs, which was further complicated by a more recent acquisition.

This kind of major system “rationalisation and simplification” project, as Ng likes to refer to it, was always going to be challenging — but then the COVID-19 pandemic erupted during the implementation and migration.

“We’re talking about retiring a major ERP, which is usually ugly and difficult, and there were a lot of business processes and business understanding that we had to change because we had two companies coming together,” he says.

All the differences in terminology, accounting rules, and business structure were what made the process longer, according to Ng.

The other challenge was that a lot of knowledge about the old business had walked out the door during the previous acquisition, so MFG had to pick through the system to understand what was and wasn’t available, Ng says.

The biggest lesson Ng and his team learned was that, with a big project, you have to expect unknown unknowns to cause trouble. “The historical ERP system was filled with hidden features and knowledge, and despite best analysis, there were plenty of codes to crack along the way,” he says.

200 integrations have been completed

One major project MFG needed Boomi for was integrating the systems that make up its brick-and-mortar and ecommerce operations. MFG was able to bridge its in-house retail application with its ERP and ordering systems in less than a month.

“Those three are really big systems. The retail app is in 180 stores. The ordering system runs all our warehouses, and also we connect to our web fulfilment system. So, it’s an online system that when you buy something online, we connect to it via Boomi so that we can then route the orders to stores that could fulfil it,” Ng says.

Since MFG started using Boomi, it’s made more than 200 integrations, including a vital integration relating to the group’s shoes.

“We release upwards of 400,000 items every season, and our warehouse, our suppliers, our third-party carriers, our stores, our online websites all need to know information like: how much is it worth? How much should we sell it for? What colour is it? What material? What style? What’s the brand?” he says.

All it takes is entering the data once in the system, and it is shared across all stores and the ecommerce system.
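The enter-once, share-everywhere flow described above resembles a publish-subscribe fan-out, sketched minimally below with hypothetical systems and item fields:

```python
# Hypothetical sketch of product-data fan-out: downstream systems subscribe,
# and every newly released item is delivered to each of them exactly once.

subscribers = {}

def subscribe(system, handler):
    subscribers[system] = handler

def release_item(item):
    for handler in subscribers.values():
        handler(item)          # fan the single data entry out to every system

stores, ecommerce = [], []
subscribe("stores", stores.append)
subscribe("ecommerce", ecommerce.append)

release_item({"style": "loafer", "brand": "Midas", "colour": "tan", "price": 129.95})
print(len(stores), len(ecommerce))  # -> 1 1
```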

There are more integrations planned, with MFG making an omni-channel push while adopting a strategy that, according to Ng, will see the company adopting the best system to do the job, whether it’s loyalty, whether it’s CRM, whether it’s ecommerce, whether it’s POS.

“And when we go with that strategy, chances are we’re not going to get one big system that does everything; we’re going to have disparate systems, which are best-of-class in their space, but they’re probably from different vendors,” he says.

MFG expects the iPaaS will be able to handle all the upcoming software.


Heading down the path of systems thinking for the hybrid cloud is the equivalent of taking the road less traveled in the storage industry. It is much more common to hear vendor noise about direct cloud integration features, such as a mechanism to move data on a storage array to public cloud services or run separate instances of the core vendor software inside public cloud environments. This is because of a narrow way of thinking that is centered on a storage array mentality. While there is value in those capabilities, practitioners need to consider a broader vision.

When my Infinidat colleagues and I talk to CIOs and other senior leaders at large enterprise organizations, we speak much more about the bigger picture of all the different aspects of the enterprise environment. The CIO needs that environment to be as simple as possible, especially if the desired state is a low investment in traditional data centers, which is the direction the IT pendulum continues to swing.

Applying systems thinking to the hybrid cloud is changing the way CIOs and IT teams are approaching their cloud journey. Systems thinking takes into consideration the end-to-end environment and the operational realities associated with that environment. There are several components with different values across the environment, which ultimately supports an overall cloud transformation. Storage is a critical part of the overall corporate cloud strategy.

Savvy IT leaders have come to realize the benefits of both the public cloud and private cloud, culminating in hybrid cloud implementations. Escalating costs on the public cloud will likely reinforce hybrid approaches to storage and cause the pendulum to swing back toward private cloud in the future; but beyond serving as a transitional path, the main reasons for using a private cloud today are control and cybersecurity.

Being able to create a system that can accommodate both of those elements at the right scale for a large enterprise environment is not an easy task. And it goes far beyond the kind of individual array type services that are baked into point solutions within a typical storage environment.

What exactly is hybrid cloud?

Hybrid cloud is simply a world where you have workloads running in at least one public cloud component, plus a data center-based component. The latter could be a traditionally owned data center or a co-location facility, but it is infrastructure where the customer, not a vendor, is responsible for control of the physical environment.

To support that deployment scenario, you need workload mobility. You need the ability to quickly provision and manage the underlying infrastructure. You need visibility into the entire stack. Those are the biggest rocks among many factors that determine hybrid cloud success.

For typical enterprises, using larger building blocks on the infrastructure side makes the journey to hybrid cloud easier. There are fewer potential points of failure, fewer “moving pieces,” and increased simplification of the existing hybrid or existing physical infrastructure, whether it is deployed in a data center or in a co-location type of environment. This deployment model also can dramatically reduce overall storage estate CAPEX and OPEX.

But what happens when the building blocks for storage are small – under a petabyte or so each? There is inherently more orchestration overhead, and now vendors are increasingly dependent on an extra “glue” layer to put all these smaller pieces together.

Working with bigger pieces (petabytes) from the beginning can eliminate a significant amount of that complexity in a hybrid cloud. It’s a question of how much investment a CIO wants to put into different pieces of “glue” between different systems vs. getting larger building blocks conducive to a systems thinking approach.

The right places in the stack

A number of storage array vendors highlight an ability to snap data to public clouds, and there is value in this capability, but it’s less valuable than you might think when you’re thinking at a systems level. That is because large enterprises will most likely want backup software with routine, specific schedules across their entire infrastructure and coordination with their application stacks. IT managers are not going to want an array to move data when the application doesn’t know about it.

A common problem is that many storage array vendors focus on doing it within their piece of the stack. Yet the right answer most likely lies somewhere higher in the stack, at the backup software layer. It’s about upleveling the overall thought process to systems thinking: what SLAs you want to achieve across your on-prem and public cloud environments. The right backup software can integrate with the underlying infrastructure pieces to provide that.

Hybrid cloud needs to be thought of holistically, not as a “spec checkbox” type activity. And you need to think about where the right places are in this stack to provide the functionality.

Paying twice for the same storage

Solutions that involve deploying another vendor’s software on top of storage you already have to pay for from the hyperscaler mean paying twice for the same storage, which makes little sense in the long term.

Sure, it may be an okay transitional solution. Or if you’re really baked into the vendor’s APIs or way of doing things, then maybe that’s a good accommodation. But the end state is almost never going to be a situation where the CIO is signing off on checks to two different vendors for the same bits of data. It simply doesn’t make sense.

Thinking at the systems level

Tactical issues get resolved when you apply systems thinking to enterprise storage. Keep in mind:

Consider where data resiliency needs to be orchestrated, and whether that belongs within individual arrays or is better positioned as part of an overall backup strategy
Beware of simply running the same storage software in the public cloud, because it’s a transitional solution at best
Cost management is critical

On the last point, take a good look at the true economic profile your organization is getting on-premises. Vendors such as Infinidat offer cloud-like business models and OPEX benefits, lowering costs compared to traditional storage infrastructure.

Almost all storage decisions are fundamentally economic decisions, whether it’s a direct price per GB cost, the overall operational costs, or cost avoidance/opportunity costs. It all comes back to costs at some level, but an important part of that is questioning the assumptions of the existing architectures.

If you’re coming from a world where you have 50 mid-range arrays, and you have a potential of reducing the quantity of moving pieces in that infrastructure, the consolidation and simplification alone could translate into significant cost benefits: OPEX, CAPEX, and operational manpower. And that’s before you even start talking about moving data outside of more traditional data center environments.
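To make the consolidation argument concrete, here is a back-of-the-envelope sketch; every figure is hypothetical and illustrative only, not vendor pricing:

```python
# Illustrative arithmetic only: compare the yearly cost of running 50
# mid-range arrays against 3 consolidated systems. All inputs are
# hypothetical assumptions, not real support fees or salaries.

def yearly_cost(units, support_fee_per_unit, admin_hours_per_unit,
                hourly_rate=75.0):
    """Support subscription fees plus the staff time to administer each unit."""
    return units * (support_fee_per_unit + admin_hours_per_unit * hourly_rate)

before = yearly_cost(units=50, support_fee_per_unit=12_000, admin_hours_per_unit=120)
after = yearly_cost(units=3, support_fee_per_unit=60_000, admin_hours_per_unit=200)

print(f"before ${before:,.0f}  after ${after:,.0f}  saved ${before - after:,.0f}")
# -> before $1,050,000  after $225,000  saved $825,000
```

Even with a much higher per-unit fee for the larger systems, the reduction in unit count dominates, which is the core of the consolidation case.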

Leveraging technologies, such as Infinidat’s enterprise storage solutions, makes it more straightforward to simplify and consolidate on the on-prem side of the hybrid cloud environment, potentially allowing for incremental investment in the public cloud side, if that’s the direction for your particular enterprise.

How much are you spending to maintain the incumbent solutions in standard maintenance and support subscription fees? Those fees, by the way, add up quite significantly. Then there is the staff time and productivity required to support 50 arrays when you could be supporting three systems, or one. You should look holistically at the real costs, not just what you’re paying the vendors. What are the opportunity costs of maintaining a more complex traditional infrastructure?

On the public cloud side, consider cloud cost management tools: more than a billion dollars of VC money has gone into that space, yet many companies are not taking full advantage of it, particularly enterprises early in their cloud transformation. Cost management, the automation around it, and the degree of work you can put into it for real, meaningful financial results are not always the highest priority when you’re just getting started. The challenge with not baking it in from the beginning is that it’s harder to graft in once processes become entrenched.

For more information, visit Infinidat.


Amid Nigeria’s fintech boom, born out of its open banking framework, the Central Bank of Nigeria (CBN) has published a much-awaited draft regulation to govern open banking procedures. At its core is the need to secure customer data through a robust set of requirements.

The regulations streamline how entities that handle customer banking information must secure their systems and share details within protected application programming interfaces. They also seek to standardize policies for all open banking participants, and come at a time when the country is enjoying a boom of fintech and banking services that have attracted international funding in the startup space.

According to the Africa Funding Startup 2021 report, Nigerian fintech has brought in more than half of the US$4.6 billion raised by African startups, which underpins the growing need for more financial products and for the greater data sharing across banking and payments systems that open banking provides.

For Emmanuel Morka, CIO at Access Bank Ghana, open banking is the future and enterprises should seize on the opportunity.

“Traditional banking is fading away,” he says. “Open banking is the only way you can set systems like agency banking, mobile banking and use dollars.”

He notes that fintech has been at the forefront of open banking in the region and believes it will spread across the continent. But wherever there’s money, there’s insecurity, and the free exchange of application programming interfaces (APIs) across banking platforms has opened up risks as well as opportunities. Unsecured systems and API channels can be a point of vulnerability.

Securing customer data

“One of my headaches as a CIO is no one is fully protected,” Morka said, adding that open banking has to ensure that customer data and assets aren’t compromised, so all endpoints in his organization must be fortified. The Operational Guidelines for Open Banking in Nigeria published by the CBN stress that customer data security is critical for the safety of the open banking model. The preliminary draft will guide the industry discussion before the final guidelines are put in place by the end of the year.

The foremost way to secure data, according to Morka, is to expose only fit-for-purpose data for consumption. This means that CIOs need to limit data accessibility to what is requested and can be used.

“I see open banking as an exposure of some data over a secured standardized channel to third parties for consumer banking,” he said. “I am the bridge between business and technology.”

He also says that it’s not only the core banking products that need protection but also tools on CRM and other software that centers on customer data.

The framework provided by the CBN also considers constant monitoring of the systems of third-party API users in the open banking system.

TeamApt, a Nigeria-based fintech startup, has helped over 300,000 businesses use its digital banking platform and is anchored in open banking.

The company sees legislation such as the Nigeria Data Protection Regulation (NDPR) as a big consideration for companies dealing with personal data.

“Due to the sheer size of personally identifiable information being shared, in the hands of bad actors, this data can be used to pilfer bank accounts, erode credit ratings, and conduct identity theft on a large scale,” said Tosin Eniolorunda, founder and CEO of TeamApt.

Organizations like banks also suffer, expending resources to recover stolen data and losing customer trust in the process, he says.

“These regulations ensure that customers have some sort of control over how their data is collected, processed and shared,” he says.

The Central Bank’s regulation also factors in NDPR requirements to shape how financial institutions manage customer data, and the guidelines specify that customer consent is needed before data can be used in open banking to offer financial products and services.

Six steps to achieve a secure open data platform

There are several steps IT leaders can take to ensure that customer data handling is in line with privacy laws, and that security is in place across all systems to shield these data points from leakage.

1. Technology leaders must have their systems and processes adhere to privacy laws and the final guidelines to be published by the CBN. “It’s important that executive teams work closely with lawyers who have the necessary data experience to advise on the requirements and implications of applicable regulations and guidelines like those released by the CBN on open banking,” says Eniolorunda.

2. Morka suggests that only the customer information relevant to a transaction should be used, something he calls fit-for-purpose data. Not all data points need to be exposed during transactions; CIOs need to determine what data is sufficient for a transaction to take place securely.

3. Eniolorunda encourages the use of technology in know-your-customer (KYC) processes, and Morka adds that artificial intelligence (AI) should be applied to make KYC easier for financial institutions while keeping it accurate and efficient.

4. There needs to be constant evaluation of banking systems and APIs used in transactions, according to Morka. In terms of supply chains, Eniolorunda adds that companies must ensure that third-party vendors they use have the highest possible security standards, and the security programs of these vendors must be routinely audited and validated.

5. Customer education is key. Morka agrees that some technologies like smartphones and internet access have not reached most rural regions in African countries. This hinders the appropriate use of banking technology and slows down its adoption. For those who have embraced digital banking, constant education on how to keep their accounts secure is essential.

6. Collaboration between stakeholders will make the banking ecosystem more robust and guide its growth. The CBN, through its Open Banking Guidelines, seeks to ensure that its oversight enables more collaboration toward superior digital banking products for customers.


DTN is more than just a weather forecaster: It also offers decision-support services to companies in agriculture, energy, commodities, and the finance industry. Its weather-related services can be as simple as helping utilities predict short-term demand for energy, or as complex as advising maritime transporters on routing ocean-going cargo ships around developing storms.

Over the years, DTN has bought up several niche data service providers, each with its own IT systems — an environment that challenged DTN IT’s ability to innovate.

“We had five different forecast engines running in the company through various acquisitions,” says Lars Ewe, who inherited the thorny IT environment when he joined as CTO in February 2020. “Very little innovation was happening because most of the energy was going towards having those five systems run in parallel.”

The forecasting systems DTN had acquired were developed by different companies, on different technology stacks, with different storage, alerting systems, and visualization layers.

“They had one thing in common,” jokes Ewe: “They all were trying to predict the weather!”

Working with his new colleagues, he quickly identified rebuilding those five systems around a single forecast engine as a top priority.

The merger playbook

Enterprises often make strategic errors when combining IT systems following an acquisition, Ewe says. “The number one mistake I see is, ‘Since we acquired you, clearly we win,’” he says. “Just because A bought B, you don’t want to assume that A has better technology than B.”

Another common mistake is to go solely by the numbers, picking one company’s IT system over the other’s because it has the highest revenue or profitability, he says: “The issue there is that you’re oversimplifying the process.”

Given the investment in time and money necessary to merge two companies’ IT systems, “it’s worthwhile spending an extra few weeks up-front to make a more thorough analysis of which solution or which pieces of which solutions should come together,” Ewe says. Jumping straight in and making a wrong decision can cost more in the long term.

Ewe consulted with product and sales management, and with customers, to identify the needs DTN’s single engine would have to satisfy, as well as the use cases it would serve, before evaluating the existing assets against those needs. He had other requirements as well, including that the system should run in the cloud. To ensure the success of the decision-making process, Ewe brought together the staff who were running each of the forecasting systems into one team.

“You’re starting with five teams, and everyone thinks that their baby is the best baby, and the other babies are all ugly. It’s natural,” he says. Also natural, he adds, is fear among IT workers that their employment is tied to the continued existence of the system they maintain.

To combat that, Ewe emphasized the potential for growth from the start. “We have so much opportunity here that there’s more than one solution where we can apply their talent,” he says, noting that there would be plenty of work building analytics and insight tools around whichever forecasting engine was chosen.

A succession of team-building exercises helped develop a trusted environment where staff saw themselves as part of the larger whole, in which they were willing to discuss the disadvantages of the system they worked on, as well as its advantages. This enabled the team to select one engine to carry forward and to identify capabilities that the other engines offered that DTN should consider reimplementing in its selected platform, Ewe says.

For example, Ewe didn’t want to lose the data those other engines worked with. So he had it all cleaned up and consolidated into a common store. “Historical data is very important for weather prediction because it provides a feedback loop into the models,” he says.

DTN staff did much of the implementation work. “I am a firm believer in in-house resources. They’re just more motivated; they have more incentives to make things successful,” he says. “When you think about what skill sets do you need, it’s a broad spectrum: data engineering, data storage, scientific experience, data science, front-end web development, devops, operational experience, and cloud experience.”

DTN did rely on external help in building the high-performance computing infrastructure in the cloud, partnering with Amazon Web Services: “They realized that there was a real market for high-performance computing in the cloud, and they wanted to find a partner that actually had clear requirements, a clear mission and clear knowledge of high-performance computing,” he says.

The results exceeded Ewe’s expectations, doubling the throughput of the forecasting system to the point where DTN can now run global models hourly. “In fact, we don’t even schedule them. Usually these systems are batch-driven, they’re scheduled, and we’re now event-driven: When underlying data changes in a meaningful way, we kick off a new model compute. That is sensational.”
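The shift Ewe describes, from a fixed batch schedule to recomputing only when the underlying data changes "in a meaningful way," can be sketched with a simple change-threshold trigger. This is an illustrative toy, not DTN's actual system; the threshold and callback are assumptions:

```python
# Illustrative event-driven trigger: run the model only when a new reading
# differs from the last one that triggered a run by at least `threshold`,
# instead of recomputing on a fixed schedule.
def make_trigger(threshold: float, run_model):
    last_seen = None

    def on_data(value: float):
        nonlocal last_seen
        if last_seen is None or abs(value - last_seen) >= threshold:
            last_seen = value
            run_model(value)   # kick off a new model compute

    return on_data

runs = []
on_data = make_trigger(threshold=0.5, run_model=runs.append)
for reading in [10.0, 10.1, 10.2, 10.8, 10.9]:
    on_data(reading)
# Only the first reading and the 0.8-unit jump trigger a recompute;
# the small fluctuations in between are ignored.
```

The appeal of this pattern is that compute scales with how fast the world changes rather than with the clock, which is exactly what makes hourly or better refresh rates affordable.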

Tuning for the customer

Ewe had to encourage other cultural changes in the team, beyond uniting it around one forecasting engine. “I had to help everyone understand that this engine we were building was just the underpinning of larger solutions that we were trying to build on top of that,” he says.

With access to easily scalable supercomputing resources, there’s a temptation to crank up the accuracy of the forecast model, but, as Ewe says, “You have to ask yourself, ‘Is what I’m now optimizing even having an impact on the consumption side?’”

In other words, is the output of the forecasting model good enough for customers’ use cases? That’s a tricky question, but easy to answer with the right data, he says: “You often can simulate it: If I were off by a half a degree, what impact would that even have on the ship routing algorithm?”
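Ewe's half-a-degree question can be framed as a tiny sensitivity check: perturb the forecast by the model's error and see whether the downstream decision changes. The routing rule below is invented purely for illustration; it is not a real ship-routing algorithm:

```python
# Toy sensitivity simulation: does a given forecast error actually change
# the downstream decision? The freezing-threshold "routing" rule is a
# hypothetical stand-in for a real routing algorithm.
def route_choice(forecast_temp_c: float) -> str:
    return "northern_route" if forecast_temp_c > 0 else "southern_route"

def decision_changes(forecast: float, error: float) -> bool:
    """True if perturbing the forecast by `error` flips the route."""
    return route_choice(forecast + error) != route_choice(forecast)

# A half-degree error far from the decision threshold changes nothing,
# but the same error near the threshold flips the route.
far = decision_changes(12.0, 0.5)    # False: still well above freezing
near = decision_changes(0.3, -0.5)   # True: crosses the freezing threshold
```

Run over historical forecasts, a check like this tells you whether extra model accuracy would ever have changed a customer-facing decision, before you pay for the compute to get it.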

Forecasting merger success

Some post-merger IT challenges could be avoided — or at least more easily planned and budgeted for — if IT weighed more heavily in the negotiation process leading up to an acquisition. But getting a seat at the merger negotiating table is a challenge for IT leaders: Such discussions are often conducted with the utmost secrecy.

At DTN, says Ewe, “We have a sophisticated due-diligence checklist for technology. There’s a lot in there, but it gives us more visibility up front of what it is that we’re trying to merge or integrate.”

Among the areas the checklist invites the negotiating team to consider, he says, are the talent, “because you are buying people just as much as you’re buying technology,” and the interdependencies of the IT systems, to get a sense of what is required for the merger to work.

“If you’re not part of the process, then you are at least represented through a mechanism, a process,” he says.

After going through the process a few times, CIOs should have the data to demonstrate how important a good IT match is in a successful merger, Ewe says, “and hopefully earn a seat at the table.”
