Data is the lifeblood of modern business: It accelerates revenue, fuels innovation, and enhances the customer experiences that drive small and mid-size businesses forward, faster. Yet while data-driven modernization is a top priority, achieving it requires confronting a host of data storage challenges that slow you down: management complexity and silos, specialized tools, constant firefighting, complex procurement, and flat or declining IT budgets.

To truly modernize today, you need more than a faster and more powerful box. You need to break away from legacy infrastructure and elevate your storage experience with a platform that simplifies and automates everything, ensures apps are always on and always fast, and unlocks agility by delivering the cloud experience everywhere.

Sound intimidating? When your next mid-range storage refresh rolls around, here are five key strategies for successful modernization:

1. Go faster with a cloud experience everywhere

Public cloud has set the standard for agility with a cloud operational model that enables line of business (LOB) owners and developers to build and deploy new applications, services, and projects faster than ever before. Now, extending that idea further, small- and mid-sized organizations are racing to bring the benefits of the cloud operational experience to wherever their apps and data live, from edge to on-prem to colo.

Simplifying operations and moving faster with a cloud experience everywhere is critical to organizations: According to ESG, 91% of IT leaders identify mature cloud operations on-premises as the single most important step to eliminating complexity. The same survey reported that more than 4 in 5 IT leaders are under pressure to deliver more cloud experiences to end users, which is a challenge when up to 70% of enterprise apps and data remain on-prem due to data gravity, latency, application dependency, and compliance requirements.

Simplifying and automating on-prem storage with the self-service agility of a cloud operational experience changes the mid-range storage game entirely. Leveraging AIOps, a cloud-managed, on-prem mid-range storage platform makes underlying data infrastructure invisible — eliminating silos, complexity, and the burden of day-to-day storage administration while shifting operations from an infrastructure-focused to an app-focused model. Unlike traditional storage management, a cloud operational experience delivers the self-service storage provisioning agility that your LOB owners and developers need to accelerate app deployment while also freeing IT resources to work on strategic, higher-value initiatives.

2. Put storage on autopilot with an AI-managed service

A true cloud operational experience should be powered by the data-driven insights and intelligence provided by advanced AI for infrastructure. Why is that important? Because AI-powered autonomous operations ensure apps are always on and always fast — which means your organization can say goodbye to endless firefighting.

With an AI-managed service, you can predict and prevent disruptions before they occur across the stack, pinpoint issues between storage and VMs, and identify underutilized virtual resources. Your admins can rely on AI-driven recommendations to take the guesswork out of managing data infrastructure, and you eliminate time-consuming escalations through predictive support automation and direct access to experts.

3. Get optimum price performance for general-purpose workloads

As small and mid-size organizations continue to face flat or declining IT budgets, hybrid storage solutions have become increasingly efficient at delivering the performance companies need for their mix of primary, secondary backup, and disaster recovery workloads while helping to contain the cost of rapidly growing data volumes. Hybrid storage solutions balance performance and cost by storing active and frequently accessed data on flash storage while storing inactive or less important data on more affordable media.

The most advanced hybrid arrays combine flash performance with disk economics. They feature ultra-efficient architectures designed from the ground up to deliver fast, consistent performance and industry-leading data efficiency. Look for a hybrid storage solution that accelerates your apps with sub-millisecond latency, writes to cost-optimized disk at flash speeds via write serialization, and offers dynamic flash caching to speed your reads even as workloads change in real time.

Modern hybrid flash arrays can also help you increase storage efficiency while lowering costs and footprint. They do this with advanced, always-on data reduction — encompassing deduplication, compression, zero-pattern elimination, thin provisioning, and thin clones — which can deliver up to 5x the space savings without performance penalty.
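As a toy illustration of how inline data reduction achieves that kind of space saving (a simplified sketch, not any array's actual implementation, which runs inline in firmware), consider block-level deduplication combined with zero-pattern elimination:

```python
# Toy sketch of block-level deduplication and zero-pattern elimination.
# Repeated blocks are stored once and referenced; zero-filled blocks are
# not stored at all -- which is why real arrays can report multi-x savings.
import hashlib

def reduce_blocks(data, block_size=8):
    store = {}   # hash -> unique block actually stored
    refs = []    # logical layout as references into the store
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        if block == b"\x00" * len(block):
            refs.append("ZERO")          # zero-pattern elimination
            continue
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # dedupe identical blocks
        refs.append(digest)
    return store, refs

# Three identical blocks plus one zero-filled block...
store, refs = reduce_blocks(b"ABCDEFGH" * 3 + b"\x00" * 8)
assert len(store) == 1          # ...but only one unique block is stored
assert refs.count("ZERO") == 1  # and the zero block costs nothing
```

Compression and thin provisioning work on the same principle: the array records only the information that is actually unique and actually written.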

4. Depend on a resilient, proven platform

Mid-size organizations increasingly rely on applications to handle everything from back-end operations to the delivery of products and services. That’s why proven availability, data protection, and application uptime are more important than ever before.

Advanced, AI-driven arrays eliminate the anxiety and disruption of unexpected downtime by guaranteeing resilient six-nines data availability based on real, achieved values (as opposed to theoretical projections) and measured across an entire installed base. Paired with advanced AI for infrastructure that predicts and prevents problems, your arrays should get smarter, better, and more reliable every day.

Don’t accept trade-offs between data resilience and performance. Your mid-range storage array should deliver Triple+ Parity RAID as standard, with zero performance impact. Triple+ Parity RAID can sustain three simultaneous drive failures without data loss and provides additional protection through intra-drive parity.

Ensure your recovery SLAs with fast, simple, and integrated app-aware backup and recovery — on premises and in the cloud. Natively replicate from all-flash arrays to hybrid arrays and leverage a SaaS-based backup and recovery service to simplify hybrid cloud data protection with instant restores, rapid recovery on premises, and cost-effective long-term retention in the cloud.

5. Consume as a service, on demand

Finally, you should be able to choose how to consume your mid-range storage array with either Capex or pay-per-use options. A flexible as-a-service consumption model enables you to avoid over- and under-provisioning concerns, Capex budget constraints, and complex procurement cycles.

How so? First, by easily getting the storage resources needed with workload-optimized storage tiers delivered in days. Second, by scaling on-demand and as necessary, with buffer capacity for unexpected workloads or usage demands. And third, by moving from heavy up-front costs to transparent, predictable monthly payments based on actual metered consumption — with complete visibility into your storage utilization at any time.
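The metering model behind that third point can be sketched in a few lines. This is an illustrative example only; the rate, baseline, and buffer figures are hypothetical and do not reflect any vendor's actual billing terms:

```python
# Illustrative pay-per-use metering sketch. All numbers are hypothetical,
# not any vendor's real pricing; the point is the billing mechanism.
RATE_PER_TIB = 50.0    # hypothetical monthly rate per TiB
RESERVED_TIB = 100.0   # committed baseline capacity
BUFFER_TIB = 25.0      # buffer capacity for unexpected demand

def monthly_charge(used_tib):
    # Billed on the greater of the reserved baseline or actual metered use,
    # capped at reserved + buffer (beyond which capacity would be expanded).
    billable = max(RESERVED_TIB, min(used_tib, RESERVED_TIB + BUFFER_TIB))
    return billable * RATE_PER_TIB

assert monthly_charge(80.0) == 5000.0   # under the baseline: pay the baseline
assert monthly_charge(110.0) == 5500.0  # metered growth within the buffer
```

The appeal is predictability: the charge tracks actual consumption within known floor and ceiling values, rather than a large up-front Capex outlay.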

It’s time to rethink mid-range storage

Taken together, these five strategies point to a mid-range primary storage solution that makes it easy to focus on data-driven modernization and accelerating innovation — without having to deal with legacy storage headaches, disruptions, or high costs.

At HPE, we’re delivering exactly that: simple, reliable, and cost-efficient hybrid storage, adaptively designed for both general purpose and secondary workloads. Built from the DNA of the HPE Nimble Storage Adaptive Flash array, HPE Alletra 5000 brings the cloud experience to your on-premises storage and simplifies operations across the lifecycle, from deployment to provisioning to upgrades.

HPE Alletra 5000 speeds time to value by delivering 99% operational time savings via intent-based provisioning, which enables line of business and database admins to self-provision storage for faster app deployments. It also eliminates app disruptions and firefighting thanks to HPE InfoSight, the industry’s most advanced AIOps for infrastructure. You get absolute resiliency with guaranteed six-nines data availability, Triple+ Parity RAID, and simple hybrid cloud data protection — and optimal flash price performance for your general-purpose workloads with a unique, ultra-efficient architecture. Even better, you can consume HPE Alletra 5000 as a service via HPE GreenLake, enabling you to shift from owning and maintaining data infrastructure to simply accessing and utilizing it.

If your next storage refresh doesn’t offer capabilities like these, it’s time to rethink your mid-range storage and aim for true data-driven modernization.


About Simon Watkins

Simon Watkins is a Worldwide Senior Product Marketing Manager with HPE Storage. The 20-year veteran of the storage industry started his career in data protection and spent the last few years working in primary storage, most recently helping drive the transformation of the HPE Storage business from a hardware vendor to a cloud data services provider. He holds an Executive MBA from the London Business School as well as a Bachelor of Arts degree from Cambridge University in the UK.


Employee experience has become a key factor in defining your company’s overall success. Positive or negative, employee experience can significantly impact your company’s productivity, efficiency, and its ability to recruit and retain talent. It can even impact your brand’s reputation long after an employee has exited the company.

The COVID-19 pandemic has drastically changed the future of work by normalizing remote work, placing a new emphasis on workplace flexibility, and introducing hybrid workforce environments. It has also seen drastic changes around employee expectations and engagement, and significant challenges to long-held workplace assumptions. Because of this, business leaders are making employee experience a top priority like never before.

In the past, employee experience was built around location, typically an office building, which served as a central point for all employees. Research firm Gartner argues that today a good employee experience is all about human-centered design, which “prioritizes the human as the core pillar of work design over location, requiring a new set of principles, norms, and thinking.” Without a human-centric approach, which includes integrating flexibility, intentionality, and empathy into work policies and practices, organizations will struggle to attract and retain talent in today’s talent marketplace, Gartner contends.

Employee experience definition

Employee experience encompasses everything your employees experience, from the moment they are recruited to their journeys onward as alumni of the company. While the steps and facets of the employee life cycle vary by company and industry, there are common milestones that define the employee experience across the board. These milestones include the recruitment process, onboarding, training, development, evaluation and promotion, exiting, and the alumni experience.

Why is employee experience important?

Employee experience has a significant influence on business success, especially around turnover and productivity. According to Gartner, when employees report a positive employee experience, they are 60% more likely to stay with the company, 69% more likely to be high performers, and 52% more likely to report “high discretionary effort,” which is work they do above and beyond their daily responsibilities. Embracing a human-centric approach to employee experience can also reduce work fatigue by 44%, increase “intent to stay” by 45%, and improve performance by 28%, according to Gartner.

Remote work has become more normalized since the COVID-19 pandemic, challenging long-held assumptions about when and where work is performed. Organizations can no longer rely on a top-down office culture alone to shape the employee experience. Instead, they must design workflows and business processes around human physical, cognitive, and emotional needs.

Employee experience strategy

A human-centric approach to the employee experience addresses the growing expectations of employees to have flexibility and empathy at work. That means acknowledging demands for hybrid work, accepting that the future of work has fundamentally changed, and embracing autonomy, visibility, and inclusion in the workplace.

While 14% of employees prefer to work from a corporate office exclusively, and 10% prefer to be fully remote, 76% want some type of flexibility between the two, according to Gartner data. Employees are also shown to be more productive when given the opportunity for flexibility and are more likely than their on-site peers to go above and beyond their job description, according to the research firm.

In addition to flexibility, employees want systems, tools, and software that make their jobs easier, without causing delays or impinging on productivity. It’s important to have a streamlined effort around technology in the company, ensuring everyone has access to the data or systems they need. All systems, networks, software, and hardware should also be as efficient as possible. Everyone needs to have the appropriate tools to effectively do their jobs, without running into headaches when using them.

But the most important facets of developing an employee experience strategy are ensuring that you know what employees want, have the means to measure challenges and progress, and put your employees at the center of every step of their employment journey.

Employee experience best practices

Organizations with “vision maturity,” the highest level of employee experience, according to Gartner, typically exhibit the following characteristics:

They take a holistic view of employees, seeing them as a “whole person,” including their personal and social experiences inside and outside of work.
They realize the overall contribution of employees outside of their job descriptions and time with the company.
They identify “moments that matter” in the employee experience and build objectives and goals that support all types of employees and personalities in the organization.
They have “clear cross-functional ownership and goal alignment” of the employee experience outside of just the HR department that’s aligned with the overall organizational goals and culture.
They implement an employee experience strategy that supports two-way communication and expectations with employees, allowing them to share their opinions and ideas openly.
They develop an architecture that enables IT, HR, and other leaders to plan and organize initiatives relevant to specific business roles, tasks, and other objectives.

Common employee experience mistakes

On the opposite end, organizations that rank on the lowest levels of employee experience have a limited focus, typically implementing one-off initiatives or relying too much on employee experience tools. According to Gartner, companies that have a lower-ranked employee experience typically struggle with:

A lack of understanding of the impact of employee experience and of the building blocks that go into employee experience
Having restrictive views on the overall employee journey, focusing only on “major career moments” rather than the more granular day-to-day responsibilities and work of employees
Being too dependent on technology to improve the employee experience, and often having unrealistic expectations of the tools and software implemented
Fragmented and overlapping systems and processes, which introduce friction that impacts employee satisfaction and productivity

Measuring employee experience

Employee experience platforms and tools help companies manage the employee experience while also getting feedback on what they’re doing right and what needs to change. These tools can also help enable employees to have a voice in the organization, giving them a platform to express how they feel about various initiatives or business processes.

You don’t want your employee experience data to be hidden away in a “black box,” says Tori Paulman, a senior director analyst at Gartner. It’s important that the information is accessible to all stakeholders and that it helps piece together a clear picture of what the employee experience is like within the organization.

Another way to measure the employee experience is through employee resource groups (ERGs). When it comes to ensuring technical resources are providing a positive experience, Paulman suggests that CIOs leverage ERGs to get broad feedback on “how the applications are being perceived and how effective they are for various groupings of employees that you might have in the workplace.”

Ultimately you need the tools that will supply the data to help identify all the pain points in the organization, initiatives that are working positively, and areas for improvement. There’s no one-size-fits-all to the employee experience, so it’s important to identify various departmental or even employee-specific needs within the organization. Employee experience platforms can help capture these.

IT’s role in the employee experience

Employee experience has historically fallen on the desks of HR staff, but as it grows increasingly digital, the CIO and IT department now have a bigger role than ever in the process, according to Paulman.

Gartner states that by 2025, more than 50% of IT organizations will prioritize and measure the success of digital initiatives based on the digital employee experience — a significant jump from just 5% of companies that said the same in 2021. Similarly, by 2024, 60% of large global organizations will deploy no fewer than five human capital management and digital workplace technologies to address employee experience needs.

“The CIO and the leaders that report to them have to lean in and take ownership over employee experience. We see an imperative for the CIO to step into the circle and say, ‘I’m going to own the day-to-day employee experience and I’m going to support HR leaders and facilities leaders,’ because a huge part of [employee experience] is the connections and the collaboration of humans and the place in which it’s done. And it’s my position that the CIO has the greatest impact on that on a day-to-day basis,” says Paulman.

Technology is fundamental to the employee experience: Everything from the recruiting software you use, to daily collaboration tools, to the software used to offboard employees can impact the digital employee experience. It’s even important to consider an employee’s lifelong experience with technology.

Paulman gives the example of an architect who started their career with pencils and paper and now works with fully digital programs and tools. Some employees may have a learning curve with technology used in the organization or may have used entirely different tools at their last company. It’s important to ensure that all considerations around technology are considered and made a central part of employee experience initiatives.

Aligning digital efforts so that they can support the overall employee experience strategy within the organization will allow digital leaders to effectively prioritize projects and resources. Whereas not aligning those efforts will only result in “siloed applications and unhappy employees,” according to Gartner. 

For more on what CIOs can do, see “How IT can improve the employee experience.”


Data is now one of the most valuable enterprise commodities. According to the State of the CIO 2022 report, 35% of IT leaders say that data and business analytics will drive the most IT investment at their organization this year, and 58% say their involvement with data analysis will increase over the next year.

While data comes in many forms, perhaps the largest pool of untapped data consists of text. Patents, product specifications, academic publications, market research, news, not to mention social feeds, all have text as a primary component, and the volume of text is constantly growing. According to Foundry’s Data and Analytics Study 2022, 36% of IT leaders consider managing this unstructured data to be one of their biggest challenges. That’s why research firm Lux Research says natural language processing (NLP) technologies, and specifically topic modeling, are becoming key tools for unlocking the value of data.

NLP is the branch of artificial intelligence (AI) that deals with training a computer to understand, process, and generate language. Search engines, machine translation services, and voice assistants are all powered by NLP. Topic modeling, for example, is an NLP technique that breaks down an idea into subcategories of commonly occurring concepts defined by groupings of words. According to Lux Research, topic modeling enables organizations to associate documents with specific topics and then extract data such as the growth trend of a topic over time. Topic modeling can also be used to establish a “fingerprint” for a given document and then discover other documents with similar fingerprints.

As interest in AI rises in business, organizations are beginning to turn to NLP to unlock the value of unstructured data in text documents and the like. Research firm MarketsandMarkets forecasts the NLP market will grow from $15.7 billion in 2022 to $49.4 billion by 2027, a compound annual growth rate (CAGR) of 25.7% over the period.

Here are five examples of how organizations are using natural language processing to generate business results.

Eli Lilly operates at global scale with NLP

Pharmaceutical multinational Eli Lilly is using natural language processing to help its more than 30,000 employees around the world share accurate and timely information internally and externally. The firm has developed Lilly Translate, a home-grown IT solution that uses NLP and deep learning to generate content translation via a validated API layer.


Having managed and rescued dozens of projects, and helped others do so, I’ve noted that there is always one critical success factor (CSF) that has either been effectively addressed or missed/messed up: clarity around the roles and responsibilities for each project participant and key stakeholder. No matter how detailed and complete a project plan may be for any project, confusion or omission of participant roles and responsibilities will cause major problems.

Enter the RACI matrix. The simplest and most effective approach I’ve seen and used to define and document project roles and responsibilities is the RACI model. Integrating the RACI model into an organization’s project life cycle (PLC) creates a powerful synergy that enhances and improves project outcomes.

What is a RACI matrix?

The RACI matrix is a responsibility assignment chart that maps out every task, milestone, or key decision involved in completing a project and assigns which roles are Responsible for each action item, which personnel are Accountable, and, where appropriate, who needs to be Consulted or Informed. The acronym RACI stands for the four roles that stakeholders might play in any project.

In almost 100 percent of these rescue efforts, I have found that there is no shared understanding of participant roles and responsibilities, nor is there explicit documentation to support it. Establishing such a consensus by employing the RACI model almost always gets a stuck project moving again, and enables the key stakeholders to readily deal with the other issues that require resolution.

[ Learn why IT projects still fail at an alarming rate, beware the 10 project management myths to avoid, and find out how to pick the right project management methodology for your team. | Get the latest project management advice by signing up for our CIO newsletters. ]

RACI matrix rules and roles

The RACI model brings structure and clarity to describing the roles that stakeholders play within a project. The RACI matrix clarifies responsibilities and ensures that everything the project needs done is assigned to someone.

The four roles that stakeholders might play in any project include the following:

Responsible: People or stakeholders who do the work. They must complete the task or objective or make the decision. Several people can be jointly Responsible.
Accountable: Person or stakeholder who is the “owner” of the work. He or she must sign off or approve when the task, objective, or decision is complete. This person must make sure that responsibilities are assigned in the matrix for all related activities. Success requires that there is only one person Accountable, which means that “the buck stops there.”
Consulted: People or stakeholders who need to give input before the work can be done and signed off on. These people are “in the loop” and active participants.
Informed: People or stakeholders who need to be kept “in the picture.” They need updates on progress or decisions, but they do not need to be formally consulted, nor do they contribute directly to the task or decision.

How to create a RACI matrix

The simple process for creating a RACI model includes the following six steps:

Identify all the tasks involved in delivering the project and list them on the left-hand side of the chart in completion order. For IT projects, this is most effectively addressed by incorporating the PLC steps and deliverables. (This is illustrated in the example below.)
Identify all the project stakeholders and list them along the top of the chart.
Complete the cells of the model, identifying who has responsibility and accountability, and who will be consulted and informed for each task.
Ensure every task has at least one stakeholder Responsible for it.
No task should have more than one stakeholder Accountable. Resolve any conflicts where there is more than one for a particular task.
Share, discuss, and agree on the RACI model with your stakeholders at the start of the project. This includes resolving any conflicts or ambiguities.
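The "at least one Responsible, exactly one Accountable" rules lend themselves to a simple automated check. Here is a minimal sketch of a RACI matrix as a data structure with that validation; the task and stakeholder names are illustrative only:

```python
# A RACI matrix sketched as a nested dict: task -> {stakeholder: role letters}.
# A stakeholder can hold combined roles (e.g., "AR" = Accountable + Responsible).
raci = {
    "Map out the project": {
        "Project manager": "AR", "Executive sponsor": "C",
        "Business analyst": "C", "Technical architect": "I", "Developers": "I",
    },
    "Map the business process": {
        "Business analyst": "R", "Executive sponsor": "A",
        "Technical architect": "C", "Project manager": "I", "Developers": "I",
    },
}

def validate(matrix):
    """Check each task for at least one R and exactly one A."""
    problems = []
    for task, assignments in matrix.items():
        letters = "".join(assignments.values())
        if letters.count("R") < 1:
            problems.append(f"{task}: no stakeholder is Responsible")
        if letters.count("A") != 1:
            problems.append(f"{task}: needs exactly one Accountable stakeholder")
    return problems

assert validate(raci) == []  # both rules hold for this example matrix
```

A check like this makes conflict resolution in the final step concrete: any task flagged by `validate` is a conflict or ambiguity to settle with stakeholders before the project starts.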

RACI matrix example

For purposes of simplification, let’s say your project can be broken down into four discrete tasks, undertaken by a team of application developers, along with a sponsoring project executive, project manager, business analyst, and technical architect.

Step 1 of the process involves mapping out the project as a whole. For this, the project manager is both accountable and responsible for the work at hand. To determine the scope and deliverables of the project, the project manager consults with the project’s executive sponsor and with the business analyst about the process to be overhauled as part of the project. The technical architect and the application developers are subsequently informed of the project plan.

In Step 2, the business analyst must then delve more deeply into the process to help map out each facet of the business process to be overhauled. The business analyst is thus responsible for the task, with the project executive being accountable for signing off on this work. To better understand the technical underpinnings of the current process, the business analyst will consult with the technical architect. The project manager and application developers will then be informed of the conclusions derived from this portion of the project.

Here is an illustration of a simplified RACI model for this example project, taking these first two steps into account:

The subsequent third and fourth tasks involve shaping the new process, again with the business analyst responsible for this work, and the other roles on the team following their same responsibilities when the old process was being analyzed in Step 2. Step 4 sees the technical architect taking over, devising a new architecture that will support the new process, signed off by the executive sponsor, and held accountable by the project manager, who devised the scope and deliverables in Step 1.

RACI matrix template

Templates are available for free on the web for those looking to get started with the RACI model. These are typically geared toward Microsoft Excel or Google Sheets, but versions are also available for more specialized software. Here are several popular possibilities:

Vertex 42 Excel RACI templates
Smartsheet Excel RACI templates
ClickUp RACI templates
Excel Downloads RACI templates

RACI matrix best practices

Simply creating a RACI matrix is not enough. You must ensure that the matrix maps to a successful strategy. Here, conflicts and ambiguities in the plan must be hammered out.

Resolving conflicts and ambiguities in a RACI matrix involves looking across each row and up and down each column for the following:

Analysis for each stakeholder:

Are there too many R’s: Does one stakeholder have too much of the project assigned to them?
No empty cells: Does the stakeholder need to be involved in so many of the activities? Can Responsible be changed to Consulted, or Consulted changed to Informed? I.e., are there too many “cooks in this kitchen” to keep things moving? (And if so, what does that say about the culture within which this project is being managed?)
Buy-in: Does each stakeholder totally agree with the role that they are specified to play in this version of the model? When such agreement is achieved, that should be included in the project’s charter and documentation.

Analysis for each PLC step or deliverable:

No R’s: Who is doing the work in this step and getting things done? Whose role is it to take the initiative?

Too many R’s: Is this another sign of too many “cooks in this kitchen” to keep things moving?

No A’s: Who is Accountable? There must be one ‘A’ for every step of the PLC. One stakeholder must be Accountable for the thing happening — “the buck stops” with this person.

More than one A: Is there confusion on decision rights? Stakeholders with accountability have the final say on how the work should be done and how conflicts are resolved. Multiple A’s invite slow and contentious decision-making.

Every box filled in: Do all the stakeholders really need to be involved? Are there justifiable benefits in involving all the stakeholders, or is this just covering all the bases?

A lot of C’s: Do all the stakeholders need to be routinely Consulted, or can they be kept Informed and raise exceptional circumstances if they feel they need to be Consulted? Too many C’s in the loop really slows down the project.

Are all true stakeholders included in this model: Sometimes this is more of a challenge to ensure, as it’s an error of omission. This is often best addressed by a steering committee or management team.

RACI matrix in project management

It is the above analyses, which are readily enabled by the use of a RACI matrix, that deliver the real benefit of the model. It is the integration of the model with a specific PLC that ensures that the project is structured for success. Without either component, problems with the structure of the project management process may remain hidden until (or even while…) they cause the project to bog down. Making the time and effort to create a customized PLC/RACI for each significant project is an opportunity to design your project management process for project success.

IT Governance Frameworks, IT Leadership, Project Management Tools

There are certain truths to be had:

92% of organizations have a multi-cloud strategy¹
82% of large enterprises have adopted hybrid cloud¹
37% of enterprises spend more than $12 million on cloud computing annually¹
70% of digital transformation initiatives miss their objectives, often with profound consequences²
CIOs continue to seek “hybrid cloud nirvana”
Migration is a constant

While perhaps not a truth, experts predict that beyond 2022 there will be less focus on initial adoption of multi- or hybrid cloud, and more focus on matching workloads to the right environment. Why? To cater for continued flux, growth, scalability, security, and cost control. All of that before we even think about managing data.


Figure 1: Triggers to migrate and modernize can be found across the organization and can come from either side: Strategy, Operational, or a mix of needs, driven by the desire to innovate and optimize.

I’d like to widen the migration pool here. A migration in the IT context can be defined as a change in the location of application systems, or the movement of data from one environment to another. Broadly, there are four categories of migration, identified as use cases in Figure 2:


Migrating workloads can involve several steps that require a thorough planning effort, spanning different functions across the organization such as application developers, IT operations, business operations, and cyber security. But wait: a workload migration project can also involve multiple sub-component level migration workstreams for each infrastructure or software component that constitutes the workload. 

The key questions to ask include the following:

How do I move applications and data without impacting customers and users?
What is the best deployment model for my applications and data?
What should I consider in “People, Process, and Technology” to ensure a successful migration?

Q1: How do I move applications and data without impacting customers and users?

As soon as customers and users enter the picture, it is essential to align the reasons for migration with business goals. Reflecting back on my initial list of truths, a core reason hybrid or multicloud migrations struggle is a lack of preparation at the business level regarding what it means to have cloud as an IT provisioning platform.

The business case should be built on the competitive advantages (or the organization’s mandates), operational efficiency, and both direct and indirect cost savings. It’s easy enough to say “cloud” or hybrid cloud, but, without a unifying strategy, migrating to or towards cloud can just add complexity and operational impediments.

Q2: What is the best deployment model for my applications and data?

Consistency across platforms is hardly a priority for the major cloud service providers. Likewise, the cost promise of cloud is a broad-brush concept that isn’t necessarily met uniformly, or even at all. Identifying suitable workloads for migration starts with the importance of each application. Next, identify the ideal platform for each application, and why. Critical applications are not only notoriously the most difficult to move; they also support business operations, revenue, and profit, meaning many executives wince at the thought of migration.

A consistent infrastructure helps reduce migration challenges, which in turn enables organizations to move workloads at a much faster rate, should it be needed. The balance between applications and the appropriate platform for each, matched with portability modernization, means constant and ongoing workload rebalancing. Consistent infrastructure is better suited to consistent applications with a “build once, run anywhere” application architecture, whereas migration capabilities deliver two-way movement, as needs and business circumstances determine, subject to vendor capabilities or penalties!

In terms of migration approach, the secret sauce involves automating as much as possible and using a data-based approach in the planning and selection process. This can drastically reduce migration timelines by integrating and streamlining many parts of the process. Your approach should also include tooling for estimating migration costs. A “Right Mix” approach consists of setting a timeline with targets for migration, using software ranging from infrastructure-discovery tools, which can locate and map business processes and actions into vendor workload placement software, to automated questionnaires. Such questionnaires are meant to collate business process data and provide a way to accurately plan a migration. They can be completely agnostic regarding cloud, co-lo, or platform provider, or weighted to follow a predefined business strategy. The idea is to find the best execution venue for workloads after collecting all the relevant data automatically and make decisions based on parameters set by the business.
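The data-based placement idea can be sketched as a weighted scoring exercise. This is a hedged illustration only: the venue names, criteria, weights, and scores below are hypothetical assumptions set by the business, not any vendor’s actual tooling.

```python
# Illustrative workload-placement scoring: each workload is rated against
# candidate execution venues on weighted criteria, and the best venue wins.

WEIGHTS = {"latency": 0.3, "compliance": 0.3, "cost": 0.2, "elasticity": 0.2}

def best_venue(workload_scores):
    """workload_scores: {venue: {criterion: score 0-10}} -> (venue, weighted score)."""
    ranked = {
        venue: sum(WEIGHTS[c] * s for c, s in scores.items())
        for venue, scores in workload_scores.items()
    }
    venue = max(ranked, key=ranked.get)
    return venue, round(ranked[venue], 2)

# Hypothetical scores for a latency-sensitive, compliance-heavy ERP workload.
erp_scores = {
    "on_prem": {"latency": 9, "compliance": 9, "cost": 5, "elasticity": 3},
    "public_cloud": {"latency": 6, "compliance": 6, "cost": 7, "elasticity": 9},
    "colo": {"latency": 8, "compliance": 8, "cost": 5, "elasticity": 5},
}
print(best_venue(erp_scores))  # → ('on_prem', 7.0)
```

In practice the scores would come from discovery tooling and the automated questionnaires described above, rather than being entered by hand.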


Q3: What should I consider in “People, Process, and Technology” to ensure a successful migration?

The compatibility of source and destination platforms and the selection of migration tools can impact the speed and cost of migration efforts. Enterprises have typically moved the easy stuff first, like email or CRM, for which there are very mature SaaS platforms. The remaining challenge involves untangling and trying to modernize the rest of their infrastructure or workloads.

Making an effective case for a migration project depends heavily on citing the right justifications, and those justifications are anchored in people, process, and technology aspects. The considerations should include a broad base of priorities, such as:

Agility, scalability, engineering development, and innovation, as well as the geographic footprint of the business or its customers
Cost optimization and the cost benefits of workload migration in business optimization terms, such as business system availability, shorter time-to-market or project times, IT and technical flexibility, and heightened security
Meeting compliance requirements in highly regulated industries
Optimizing data center space, including attitudes and strategies around sustainability
Technical debt, perhaps twinned with a lack of server provisioning agility, plus increased reliability and availability


Diagram: illustrates migration criteria and impact comparisons from different stakeholder groups, mapping criteria to a level of confidence

Utilizing analysis tooling that assesses functional and operational impact is a great way to go. The results can then be plotted onto an “ease vs impact” graph to determine workload migration in appropriate waves.
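The ease-vs-impact wave planning can be sketched in a few lines. The thresholds, wave labels, and sample workloads here are assumptions for illustration:

```python
# Bucket workloads into migration waves by ease (higher = easier to move)
# and impact (higher = more business benefit), both scored 0-10.

def migration_wave(ease, impact):
    if ease >= 5 and impact >= 5:
        return "Wave 1: quick wins"          # easy and valuable: move first
    if ease >= 5:
        return "Wave 2: easy, lower impact"
    if impact >= 5:
        return "Wave 3: hard but valuable"   # needs deeper planning
    return "Wave 4: re-evaluate or retire"

# Hypothetical (ease, impact) scores from functional/operational assessment.
workloads = {"email": (9, 6), "crm": (8, 7), "core_erp": (3, 9), "legacy_app": (2, 2)}
for name, (ease, impact) in workloads.items():
    print(name, "->", migration_wave(ease, impact))
```

This mirrors the common pattern noted later in the article: the easy stuff (email, CRM) lands in the first waves, while untangling the rest takes deliberate planning.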

Where does that leave us?

With the right plan, customized to your circumstances and using a tried-and-tested, standardized process, along with a steady, experienced hand at the tiller, your workload migration is more likely to be painless, while modernizing your IT operating model, capabilities, and business propensities. Choosing the right partner is as important as any of the criteria I’ve already laid out. Question them. Question their experience working with critical workloads. And question their methodologies.

Learn more about how to successfully migrate your workload here.



About Ian Jagger

Jagger is a content creator and narrator focused on digital transformation, linking technology capabilities expertise with business goals. He holds an MBA in marketing and is a Chartered Marketer. Today, he focuses on digital transformation narrative globally for HPE’s Advisory and Transformation Practice. His experience spans strategic development and planning for Start-ups through to content creation, thought leadership, AR/PR, campaign program building, and implementation for Enterprise. Successful solution launches include HPE Digital Next Advisory, HPE Right Mix Advisor, and HPE Micro Datacenter.

Digital Transformation, IT Leadership

We live in a highly connected world. Technology has broken down many barriers to trade.  Every aspect of retail has been disrupted, from the way shoppers research purchases to the methods they use to pay. However, despite the powerful forces of globalization, significant local differences exist. 

In some countries, the use of mobile phones is now an essential part of the physical shopping experience, while in other territories there’s a more obvious distinction between online and offline shopping. 

Payment innovations like buy-now-pay-later (BNPL) are popular in parts of the world but have yet to gain traction everywhere. And some countries are far more comfortable buying items like groceries online than others.

So, while it’s possible to sketch global trends, an understanding of local markets is vital if merchants are to create services, payment options, and communication strategies that will really resonate. The old saying ‘retail is detail’ is as relevant now as ever.


Every year, we publish reports about shopping trends around the world. This year’s reports, the Global Digital Shopping Index series, looked at six different markets: the UAE, Brazil, the USA, the UK, Australia, and Mexico. Here are some of the key local differences we’ve uncovered this year: 

Ringing the changes
Many of us got used to shopping online during the pandemic – and now that people are returning to stores, they’re using their phones to help them shop.

The overall use of mobile phones to enhance the in-store shopping experience is up 19% since 2020, but as the chart below illustrates, the use of phones varies considerably in different markets.


% of in-store shoppers who used mobile devices to assist with their shopping experiences *

Flexing up
One of the big developments in payments over the last few years has been the rise of flexible BNPL platforms. BNPL is used by the majority of consumers in Brazil but only a third of total shoppers in the UK.


Overall % of shoppers who use BNPL, by country *

Comparing the data on Brazil to the numbers in the UAE reveals some stark differences: over half of older consumers in Brazil use BNPL, whereas in the UAE it’s only around 1 in 20.

But while Brazil shows across-the-board adoption of BNPL, the biggest adopters are young Australians. They’re more than three times more likely to use flexible payment platforms than the oldest generation of consumers.


Share of consumers in selected markets who’ve used BNPL in the last year, by generation *

Food for thought
When the pandemic hit, much of modern life switched online – including activities like buying groceries. Today, just over 40% of consumers say they’re likely to order their groceries online. But that figure masks some big regional differences, with Brazilian shoppers half as likely to do so as consumers in the UAE.


Consumers who are “very” or “extremely likely” to buy groceries using a “digital-first” approach *

While on average people are less likely to buy groceries online than non-perishables like clothes or electronics, that’s not the case everywhere. Indeed, consumers in the UAE are far more likely to buy their groceries online than anything else.


Most likely categories to be bought using a “digital-first” approach, by country *

One size does not fit all
Digitalization, flexible payments, and the use of mobile phones as part of the shopping experience are all factors no retailer can ignore. But, as these stats show, merchants need to dig beneath the headline trends if they really want to succeed in individual markets.

A little local knowledge really could go a long way.

For the full picture, explore the Global Digital Shopping Index series now.

*  All data comes from the Global Digital Shopping Index and supporting research

IT Leadership

Humans have always gathered data to better understand the physical world around us. Today, companies are increasingly seeking to meld the digital world of data with the physical world through digital twins. Digital twins serve as a bridge between the two domains, providing a real-time virtual representation of physical objects and processes.

These virtual clones of physical operations can help organizations simulate scenarios that would be too time-consuming or expensive to test with physical assets. They can help organizations monitor operations, perform predictive maintenance, and provide insight for capital purchase decisions, creating long-range business plans, identifying new inventions, and improving processes.

In a forecast released in June 2022, research firm MarketsandMarkets said the global digital twin market is expected to grow from $6.9 billion in 2022 to $73.5 billion by 2027, a compound annual growth rate (CAGR) of 60.6% over the period.

Here are five examples of how organizations are using digital twins effectively today.

NTT Indycar puts fans behind the wheel

The NTT Indycar Series, comprising five races including the Indianapolis 500, is using a combination of digital twin, data analytics, and artificial intelligence (AI) capabilities to give fans access to in-depth, real-time insights about races, including head-to-head overtaking, pit predictions, and other elements.

Partner NTT creates a digital twin for every car in the series. Historical data provides a foundation, and each car is equipped with more than 140 sensors that collect millions of points of data during each race to feed the digital twin. The data includes everything from speed to oil pressure to tire wear and G forces. NTT uses AI and predictive analytics on the digital-twin data to deliver fans insights that previously would only have been available to race team engineers, including race strategies and predictions, intercepts and battles for position, pit-stop performance impact, and effects of fuel levels and tire wear.

Analytics, Artificial Intelligence, Digital Transformation, Machine Learning, Predictive Analytics

A lot is being written about value stream management (VSM), and for good reason: it offers organizations the opportunity to benefit from increased alignment, accelerated innovation, reduced risk, and improved competitive advantage. In spite of the clear benefits, many leaders still struggle to feel confident taking initial steps and knowing exactly where to start. In this regard, it is invaluable to hear from experts who have been doing this work and realizing success.

While each organization’s VSM initiative will be unique, there are nevertheless common principles and lessons that all can benefit from. In our work with global enterprises, we’ve had the opportunity to hear directly from the executives who are leading the VSM initiatives in their organizations, and who are leading the industry in terms of maximizing the benefits of VSM. We recently had the opportunity to chat with a leading VSM expert working in a large insurance firm. Several years ago, this executive helped launch the organization’s VSM initiative. Here are some of the key lessons they’ve learned in terms of breaking ground with VSM:

Start with clear definitions

In today’s business world there’s no shortage of acronyms and buzzwords. For the insurer, VSM has come to involve people from throughout the organization, including an international mix of stakeholders, executives, and participants. It is therefore vital to start with a clear definition of VSM, so everyone’s grounded in a common understanding of what it is and why it’s important.

Clearly define value. The good news is that any organization that’s been in operation has value streams. It is essential to gain a clear, well understood, and agreed upon definition of the value being delivered. Ultimately, if teams aren’t clear on value, it is inherently a hit-or-miss proposition as to whether that value can be delivered and improved over time. The VSM leader took a purposeful approach in this case. Start with value, then look to back up from there in terms of decisions that enable that value, and how it is delivered.

Leverage data to gain transparency

Teams need to be empowered with data. Without a real-time view of what’s happening, it takes longer to pivot, and longer to deliver value. When you have fundamentals in place, decisions become faster, and you’re more likely to avoid having to make the same decisions repeatedly.

Quality data provides context, and helps teams navigate complexity. Teams have to navigate thousands of micro decisions. If we don’t have quality data, those decisions become bigger, more complex efforts; we then have to stop and gain input from others, etc. With quality data, we can validate decisions and move forward.

Are you constantly asking, “What can be done to inform an action?” See where there are disconnects and blockages in value streams. Focus on removing things that get in the way of making decisions, from tactical to roadmaps.

Data-based dashboards versus PowerPoint slides: Ensure your dashboards are based on real data, not interpretation. Automated dashboards require far less manual effort than aggregating data and building slides, and if you have thousands of people doing manual rollups, it becomes a massive cost and drain on efficiency.
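The contrast between automated rollups and hand-built slides can be made concrete with a small sketch. The metric (cycle time), team names, and field names here are hypothetical; real VSM tooling would pull these records from work-tracking systems automatically.

```python
# Illustrative automated rollup: compute value-stream cycle time per team
# from raw work-item records instead of hand-aggregating numbers into slides.
from datetime import date
from statistics import mean

work_items = [
    {"team": "claims", "started": date(2022, 5, 2), "delivered": date(2022, 5, 12)},
    {"team": "claims", "started": date(2022, 5, 4), "delivered": date(2022, 5, 10)},
    {"team": "billing", "started": date(2022, 5, 1), "delivered": date(2022, 5, 21)},
]

def cycle_time_dashboard(items):
    """Average days from start to delivery, per team."""
    by_team = {}
    for item in items:
        days = (item["delivered"] - item["started"]).days
        by_team.setdefault(item["team"], []).append(days)
    return {team: mean(days) for team, days in by_team.items()}

print(cycle_time_dashboard(work_items))
```

Because the dashboard is recomputed from source records, it reflects reality in real time, whereas a slide is a one-off interpretation that goes stale the moment it is built.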

Be inclusive

For this insurance giant, VSM is very much a team sport; to succeed, many people need to be involved and engaged. Any time someone approaches, it is very useful to share knowledge around what the teams are trying to accomplish with VSM. Invariably, when hearing about the focus on delivering customer value, others get engaged and want to be a part of it.

Ultimately, people from legal, finance and executive leadership, and a range of other areas are part of the delivery of value. On a practical level, bringing in these other teams can be instrumental in avoiding potential roadblocks and speeding up initiatives. It is important to listen to different perspectives and be open to changing your mind.

Don’t be neutral

Value streams already exist – the question is how effectively they are being managed. Staying neutral, or not investing in VSM, means competitors will be getting in front of you. Often, if you put off dealing with VSM, problems don’t necessarily emerge right away. They may start emerging in six to eight months. Therefore, it is essential to take a proactive approach, and get started right away. You have to invest in an umbrella before it starts raining, just like you have to invest in systems, processes, and people. Build trust in those investments before you realize you are facing a catastrophe.


If focus is everywhere, you will be weak everywhere. We all only have a finite amount of time and resources. If your organization is not clearly focused on value, you will be diluting power that exists in an organization. Focusing on everything can be very detrimental. You’ll fall into a trap of focusing on efficiency, while losing sight of what really matters: whether you are gaining traction in value delivery.

Balance continuous improvement and innovation

Continuous improvement is inherently about improving existing processes, while important drivers are very different for innovation. You must be able to accommodate unstructured creativity, experimentation, etc. to foster innovation, without disrupting other workflows. For example, if you have scrum methodology in agile, you are effectively measuring velocity. Ultimately scrum is about stability, and the ability to predict and forecast productivity. You can’t expect to measure on this type of predictability, while at the same time requiring innovation. It creates conflict. If there’s misalignment and you try to scale, you will only scale misalignment. Before you try to scale value, you need to be certain you are delivering value.


VSM makes success far more likely for your organization. By establishing clear definitions, leveraging real-time data, and taking an inclusive approach, teams can ensure the benefits of VSM are realized, as proven by this real-life example.

For more information, watch this webinar, Resilience Through Rainstorms – How Unum Weathers Any Storm with VSM.

Collaboration Software

The rise of business transformation initiatives has IT leaders rethinking the way they evaluate, select, and negotiate technology and IT services deals today. Pivoting away from a serial approach to evaluation and selection, forward-thinking IT leaders are instead employing an integrated sourcing strategy tailored to facilitate business and IT transformations.

Several dynamics are fueling this trend. On the buy side, business executives are looking to transform their ERP, SCM, CRM, HR, and ecommerce platforms to address inadequacies exposed by the COVID pandemic, global supply chain issues, changes in workforce dynamics, and industry-specific opportunities and challenges. At the same time, CIOs are working to reshape the business of IT, driving their organizations to the cloud and to new delivery and operating models.  

On the sell side, vendors have revamped their go-to-market strategies, service offerings, and partnerships. SAP, for example, has launched its RISE offering, including reimagined partnerships with AWS, Google, and Microsoft as well as its consulting partners such as Accenture and IBM to bring a vertically integrated solution to market. Meanwhile, AWS, GCP, and Microsoft are partnering with consultants to present holistic cloud migration and application modernization strategies beyond SAP.

Organizations undertaking business transformations involving SAP in particular are faced with a range of intertwined issues around vendor engagement, evaluation, selection, and negotiation, the interdependencies of which must be understood in order to drive sound decision-making and beneficial outcomes.

Following are eight strategic concerns and imperatives for establishing and executing an SAP transformation primed for success.

1. Choosing SAP S/4HANA RISE vs. perpetual license model

As organizations map their journey from SAP ECC to S/4HANA, they must determine whether SAP RISE or an SAP S/4HANA perpetual license model is the best fit. Key to this decision is understanding the implications of moving from a capital-intensive purchase model to an operating expense model.  

Organizations must also assess whether SAP RISE will deliver on their operational requirements. Many organizations may be reluctant to turn control over to SAP due to prior experiences with SAP HEC, or they may struggle to understand the true scope and services included as part of RISE. Organizations considering SAP RISE must also come to terms with putting a significant amount of their AWS, GCP, or Microsoft Azure spend behind their SAP relationship, versus maintaining a direct relationship with their hyperscaler of choice. Lastly, organizations must also carefully assess the SAP RISE cost and commercial model against the SAP S/4HANA perpetual license model and commercial terms.
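The capex-versus-opex assessment can be framed as a simple multi-year cost comparison. Every figure below is a made-up placeholder to show the structure of the analysis, not SAP pricing; a real assessment would also weigh scope and service differences between RISE and a self-managed environment.

```python
# Illustrative multi-year comparison: subscription (opex) vs. perpetual
# license plus annual maintenance and infrastructure (capex + opex).
# All numbers are hypothetical placeholders.

def subscription_total(annual_fee, years):
    """Total cost of an all-in subscription over the period."""
    return annual_fee * years

def perpetual_total(license_fee, maintenance_rate, annual_infra, years):
    """Up-front license plus yearly maintenance and infrastructure costs."""
    return license_fee + years * (license_fee * maintenance_rate + annual_infra)

years = 5
rise = subscription_total(annual_fee=2_000_000, years=years)
perpetual = perpetual_total(license_fee=4_000_000, maintenance_rate=0.22,
                            annual_infra=600_000, years=years)
print(f"Subscription: ${rise:,.0f}")   # → Subscription: $10,000,000
print(f"Perpetual:    ${perpetual:,.0f}")
```

Varying the time horizon matters: a subscription can look cheaper over a short window and more expensive over a long one, which is exactly why the comparison must be modeled rather than assumed.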

2. Establishing an SAP partner strategy

In addition to selecting the SAP platform, organizations must also determine their consulting and implementation partner strategy. Here, several decisions are key, including whether to undertake a Phase 0 initiative to determine objectives and scope, as well as whether this phase should be awarded as a sole source event or as part of a competitive bid process.

Subsequent to Phase 0, organizations must determine early whether they will seek partners for the design, build, and deploy phases of the program. Organizations will be challenged with building a plan that drives the timeline necessary to achieve business objectives while employing a sourcing strategy that maximizes insights that can be obtained from the market via a competitive bid process. This is critical in today’s market given current levels of attrition, inflation, and demand for high-end consulting resources. A final consideration is devising a strategy with respect to the evaluation, selection, and award of any non-SAP cloud migration and application modernization support.  

3. Re-examining hyperscaler partnerships

Organizations must also re-evaluate their cloud strategy. In addition to determining their long-term infrastructure support strategy for SAP, organizations are also likely to be moving SAP workloads to the cloud, retiring certain applications, or determining which applications should remain on premises or in a hosted environment. This would entail an evaluation of AWS, GCP, and Microsoft, as well as the possible undertaking of a multicloud strategy, a preferred/challenger vendor model, and an approach to addressing the short- and long-term requirements of these relationships.

For example, many organizations that are in the process of making commitments to Microsoft Azure, or exceeding the commitments previously made, need to determine whether they’re going to open up the entire Microsoft relationship to renegotiation. In addition, organizations need to evaluate the role of the hyperscaler in supporting the migration effort as well as the associated investments they are willing to make to support not only the SAP migration but the non-SAP migration. Millions of dollars are on the table to be captured or potentially wasted if orchestration of the hyperscaler evaluation, selection, and negotiation is not well coordinated with the corresponding workstreams.

4. Defining a future managed service strategy

Organizations must also determine not only their managed services strategy but also whether the vendors that are part of this strategy will participate in supporting their future-state SAP environment. Typically, this would include deciding whether the SAP systems implementation provider will provide application maintenance and support (AMS) for the future-state environment versus using an incumbent AMS provider to provide support for the existing environment and future-state environment.

Organizations will also need to understand what complementary infrastructure management services are required to support an SAP RISE environment or an SAP on-premises environment, as they certainly differ. It is essential that an organization’s future managed service partner strategy be considered in concert with determining whether to commit to SAP RISE or a perpetual license.

5. Reassessing existing managed service relationships

In many cases, an organization’s future managed service strategy will require revisiting relationships with existing managed service providers. Such a realignment could necessitate the removal or addition of service towers, removal or addition of workloads, modification of governance and operating models, or renegotiation of service levels, pricing, commercial terms, and conditions. As organizations look to the future, it’s critical that they understand the impact the future strategy will have on existing relationships while maintaining operational continuity during the transformation.

6. Aligning vision and strategy

A key dependency for developing an integrated sourcing strategy is a foundational view of the vision and overall strategy of the business and IT transformation initiatives. Unfortunately, in many cases, one aspect of the transformation vision and strategy may be more advanced than another. For example, organizations highly focused on a business transformation may have engaged a consulting provider to conduct a Phase 0 that would enable an S/4HANA implementation. The natural focus of this initiative would include developing the scope and the business case associated with the implementation, but often this means the run side of the vision is given short shrift. That leaves the organization in catch-up mode during the sourcing process, trying to close the gap between the vision for the business transformation and the vision for the IT transformation, which is certainly not ideal.

Organizations that establish a view of their business and IT transformation in a holistic fashion are best positioned to develop a sourcing strategy to support that vision.  In addition, organizations that establish their governing principles and objectives are best positioned to empower their team with a framework for decision making.

7. Building an effective team  

Execution of an integrated sourcing strategy is highly dependent on the development of an effective team to support the transformation. The team must comprise a highly capable, collaborative set of individuals representing executive leadership, lines of business, IT, procurement, finance, and legal. This may seem facile advice, but the reality is there are material organizational challenges associated with aligning capable individuals across these different domains.

For example, most procurement organizations are aligned by category of spend, such as software and services, and do not have an individual capable of executing across all workstreams except at the highest levels of the organization. In addition, major disconnects can exist between line-of-business executives, a designated transformation executive, and their IT counterparts with respect to strategy and approach. Even within IT, disconnects between the application team and the infrastructure team can derail a process. These realities must be recognized at the outset of the program as leadership strives to develop a team with large-scale transformational experience that will have the complete support of executive leadership from day one.

Moreover, this team must be empowered to drive the project and the associated vendor evaluation selection and negotiation processes. Their level of credibility must be impervious to the top-down divide-and-conquer tactics and strategies that will be employed by consulting and technology providers and the scrutiny of executive leadership.

8. Establishing an integrated plan 

Too often, there is a disconnect between the expectations of executive leadership, the project team, and the procurement team relative to the overarching timeline and approach to the program. It is essential that the sourcing strategy include a plan that integrates the vendor evaluation, selection, and negotiation process into the project plan. This plan must consider the key milestones and decision points for the entire program, including timing for business case finalization and presentation, timeframes for selection, and program commencement. This plan must also include a well-thought-out approach not only to the timing but the sequencing of the above workstreams in a manner that will enable good decision-making.

For example, presentation and analysis of SAP’s RISE proposal and perpetual license proposal must be coordinated to coincide with the presentation and evaluation of the hyperscaler proposals and total cost of ownership comparisons. Another example is determining whether the implementation provider will have an opportunity to provide application maintenance and support (AMS). If so, your organization should leverage the full opportunity associated with both bodies of work to maximize results. Without this plan, internal misalignment can occur, and your organization will be subject to vendors presenting their capabilities and commercial proposals at a time and place of their choosing.

The bottom line is that many organizations are caught flat-footed by implementing a serial approach to selecting technology platforms, consulting and implementation providers, and managed service providers. The reality is the service offerings of technology providers and service providers alike address all three aspects of the technology lifecycle and they must be evaluated in a well-orchestrated and parallel path manner. Organizations that create and execute on an integrated sourcing strategy will be well positioned to make good decisions that set their program up for success, while maximizing their leverage to drive the best-in-class commercial agreements with all providers.


Great teams incorporate a variety of skill sets. For example, a football team consisting of 11 quarterbacks would get crushed in a game against talented linemen, running backs and receivers. It’s no different when building a team for an enterprise AI project; you can’t just throw a bunch of data scientists into a room and expect them to come up with a revenue-generating or efficiency-improving project without support from other members of the enterprise.

Interestingly, many companies do just that, creating a disconnect between data science teams and IT/DevOps when it comes to AI development. This gap is a significant reason why AI pilot projects fail.

“AI projects are a team sport and should include a multidisciplinary team spanning business analysts, data engineering, data science, application development, and IT operations and security,” according to a September 2021 Moor Insights & Strategy report titled “Hybrid Cloud is the Right Infrastructure for Scaling Enterprise AI.”

The biggest divide between data scientists and IT often centers around the tools necessary to develop AI models.

“Many IT organizations try to build a killer, one-stop solution that fits all needs,” says Michael Balint, principal product architect at NVIDIA. “For example, many prefer to develop with deep learning frameworks such as PyTorch on a dedicated system, while others schedule their work using Slurm or Kubeflow. IT is often left scratching its head about how to consolidate everything into one solution.”

Yet, this can be a disaster when it comes to AI projects, Balint warns. “This is such a nascent area that if you’re in IT and you try to pull the trigger on one solution, you might be missing out on functionality that a data scientist or data engineer might need to get their job done. Data scientists would really love to just build models and do real core data science. They get frustrated when they don’t have the tools to do that, and the blame gets put on IT.”

MLOps to the rescue

The better approach is to have IT work with the data science groups on bridging the gap through processes and tools such as MLOps. These can provide enterprises with governance, security and collaboration through features such as tracking and repeatability. MLOps platforms can orchestrate the collection of artifacts, compute infrastructure and processes that are needed to deploy and maintain AI-based models. Many MLOps systems can also evaluate the accuracy of models in order to retrain and redeploy as needed.
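To make the ideas above concrete, here is a minimal, purely illustrative Python sketch of two core MLOps capabilities the article mentions: tracking model versions with their evaluation metrics, and flagging a model for retraining when production accuracy drifts. The `ModelRegistry` class and its methods are hypothetical, not the API of any specific MLOps platform.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class ModelVersion:
    """A single tracked model version with its training params and metric."""
    version: int
    params: Dict[str, float]
    accuracy: float

@dataclass
class ModelRegistry:
    """Hypothetical registry that tracks versions and detects accuracy drift."""
    versions: List[ModelVersion] = field(default_factory=list)

    def register(self, params: Dict[str, float], accuracy: float) -> ModelVersion:
        # Record a new version with an auto-incremented version number.
        mv = ModelVersion(version=len(self.versions) + 1,
                          params=params, accuracy=accuracy)
        self.versions.append(mv)
        return mv

    def latest(self) -> ModelVersion:
        return self.versions[-1]

    def needs_retraining(self, live_accuracy: float,
                         tolerance: float = 0.05) -> bool:
        # Flag retraining when live accuracy falls below the accuracy
        # recorded at deployment time, minus a tolerance margin.
        return live_accuracy < self.latest().accuracy - tolerance

registry = ModelRegistry()
registry.register(params={"learning_rate": 0.01}, accuracy=0.92)
print(registry.needs_retraining(live_accuracy=0.90))  # within tolerance: False
print(registry.needs_retraining(live_accuracy=0.85))  # drifted: True
```

Real platforms (MLflow, Kubeflow, and others) add the pieces this sketch omits: artifact storage, experiment lineage, access control, and automated redeployment pipelines.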

“Organizations can increase the percentage of models that are successfully deployed in production by implementing MLOps tooling, which aids in managing data science users, data, model versions, and experiments,” says Moor Insights. “The tooling should also allow IT teams to manage the develop-to-deploy cycle with the same DevOps rigor as traditional enterprise apps.”

This approach can help companies bridge the divide between the data and IT sides.

“A few years ago there was emphasis on deep learning engineers and data scientists as the heroes of the industry,” says Balint. “I think the unsung heroes are the DevOps and MLOps engineers that sit in the IT group, because you need to build the right solutions and stacks for everybody else to do their job. If you don’t have that, you can’t move very quickly.”

Go here to get more information about AI model development using DGX™-Ready Software on NVIDIA DGX Systems, powered by DGX A100 Tensor Core GPUs and AMD EPYC™ CPUs.
