Purchase a cheap card swipe cloner off the Dark Web. Distract a hotel housekeeper for a moment and clone their master key.

Use your mark’s email address to access a login page. Choose to reset the password and have the code sent to the mark’s phone. Check their voicemail using the default last four digits of the number as the PIN.

Watch someone accessing their bank info or email account on their laptop in an airport lounge. They log off to get a drink but leave the laptop open. Quickly reset their password, sending the code to their phone which they conveniently left by their computer. Read the code off the phone screen without even unlocking the phone.

Or perhaps the easiest of all: wait for your victim to step away from their unlocked workstation and quickly copy down their plaintext passwords from their password manager app.

There are multiple takeaways from the examples above. First, attack surfaces continue to expand dramatically. The number and variety of endpoints are limited only by the imagination of the cybercriminal. 

Second, none of these attacks requires much technical sophistication. Even the Dark Web might be optional. Simply google for a variety of tools to accomplish the malicious goal.

But perhaps most importantly: no amount of expensive cybersecurity gear will keep someone from typing in their password in view of prying eyes, losing sight of their RFID badge for a moment, or unlocking their phone in the presence of a threat actor. In recent years, researchers have reported that 73% of mobile device users have (deliberately or accidentally) observed someone else’s PIN being entered.

Multifactor authentication and employee training help, but given time and opportunity, even less-experienced attackers can break into poorly secured accounts.

We call this basic type of social engineering attack shoulder surfing.

The simplest examples indeed involve looking over someone’s shoulder. The problem with shoulder surfing attacks is that there is no way to prevent all of them. Some of them are bound to succeed. 

As with the more widely known phishing attacks, all it takes is one vulnerable individual to break into an account—or into an entire organization.

Shoulder surfing mitigation: start with good cyber hygiene

Prevention will never stop all attacks, but an ounce of cyber hygiene still goes a long way. MFA is a must-have. Employee training should also include shoulder surfing awareness. 

You already have some form of social engineering mitigation (or if you don’t, then you should!). Shoulder surfing is technically a form of social engineering, but it differs from the more familiar approaches insofar as the target is often completely unaware they’re being pwned. 

Social engineering prevention techniques focus on awareness of social interactions and identifying suspicious behaviors. While this is an important piece of the puzzle, some attacks will still go unnoticed, no matter how diligent the victim is. 

Perhaps most important: adopt a zero-trust philosophy across your organization and cybersecurity roadmap. There is no longer any such thing as perimeter security. Do not grant trust without real-time evaluation of whatever network, device, or user account is accessing a resource. Trust, after all, is the most valuable asset an attacker can exploit.

The best solution: real-time detection of suspicious endpoint behavior

Regardless of the attack vector, or even the attacker’s level of stealth, shoulder surfing attacks are the beginning of an attack chain. All attack chains have one thing in common: the attacker wants to do something with their access that a compromised user wouldn’t normally do themselves.

In other words, fighting shoulder surfing and the attacks that it spawns depends upon behavioral analysis. What are the normal user behaviors when someone logs in or otherwise accesses an endpoint? Compare those to the actual behaviors for each attempt. Are they out of the norm?

Such behavioral analysis is a cybersecurity mainstay. When hunting or responding to abnormal behavior in your environment, there are some specific priorities to keep in mind:

Catching the perpetrators in real time is essential. Once the attacker has uploaded malware to the target system and begun the process of lateral movement, the scope of the attack (and the cost of containment and recovery) has expanded. Effective behavioral analysis in real time provides the opportunity to detect and respond to suspicious actions in seconds, not hours.

The sorts of behaviors to look for are varied. It might be unfamiliar network traffic, newly installed software, or the plugging in of a new device. Suspicious behavior might also include unusual use of already installed apps or services, including uncommon usage patterns of common administrative tools like PowerShell.

Something that is supposed to exist might be missing. Real-time awareness of health and configuration issues in critical security and incident response tooling is essential. Keep your environment ready to operate effectively at any moment by monitoring for disruptions to critical endpoint agents and endpoint detection and response (EDR) products.
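The core of the baseline-comparison idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's detection logic; the event names, users, and the "seen fewer than N times" threshold are all hypothetical assumptions for the example.

```python
# Minimal sketch of per-user behavioral baselining: an action is flagged
# as suspicious when that user has rarely (or never) performed it before.
# Event names and the threshold are illustrative assumptions only.
from collections import Counter

class BehaviorBaseline:
    def __init__(self, min_seen=3):
        self.counts = Counter()   # (user, action) -> times observed in normal period
        self.min_seen = min_seen  # actions seen fewer times than this are "unusual"

    def observe(self, user, action):
        """Record an event from the normal (training) period for this user."""
        self.counts[(user, action)] += 1

    def is_suspicious(self, user, action):
        """True if this user rarely or never performs this action."""
        return self.counts[(user, action)] < self.min_seen

baseline = BehaviorBaseline()
for _ in range(10):
    baseline.observe("alice", "open:outlook")   # routine daily behavior
baseline.observe("alice", "run:powershell")     # seen once, still rare

print(baseline.is_suspicious("alice", "open:outlook"))   # routine -> False
print(baseline.is_suspicious("alice", "run:powershell")) # rare -> True
print(baseline.is_suspicious("alice", "usb:new_device")) # never seen -> True
```

Production systems replace the raw count with richer statistical or machine-learned models, but the comparison of observed behavior against an established norm is the same.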

Tools like the Tanium platform are adept at addressing all these priorities.

Be proactive

Despite huge investments in cybersecurity protection across the industry, breaches still occur, and they demand a multilayered approach to visibility, security policy enforcement, detection, and incident response. Security admins can configure the appropriate endpoint security policies ahead of time, enabling the platform to evaluate behaviors against those policies in real time.

Tanium can quickly assess your environment, report on endpoint configurations and anomalies, apply configuration policies, and automate updates to ensure that everything is in a ready state for rapid response when necessary.

While shoulder surfing and other social engineering attacks may bypass much security tooling, the goal is to identify such anomalous use of access rapidly and evict the attacker before they accomplish their goals.

The Intellyx take

Endpoint protection has always been a cat-and-mouse game. The attackers are numerous, persistent, and imaginative.

Given the inexorable pace of technology innovation, with all the devices, applications, and protocols hitting the market every day, there are always new opportunities for hackers to find some new way to achieve their nefarious ends.

Individuals and their organizations must therefore take an active, multilayered approach to protecting themselves. Don’t trust any endpoint. Expect to be breached, nevertheless. And implement a platform like Tanium’s to keep one step ahead of the attackers.


As mobile work experiences redefine how business gets done, managing an increasing number of devices across a modern workforce has become a growing challenge. Imagine the retail associate using a tablet to check inventory and pricing for customers, the UPS driver recording deliveries and updating the system, and the construction foreman referring to a device for building specifications on-site.

A 2022 Future of Work study found that “94% of organizations shifted to some sort of hybrid work structure due to the pandemic which then forced the creation of new, more efficient and potentially long-lasting workflows and processes (62%).” These are just some of the transformational business activities making work intrinsically mobile across every industry, creating opportunities, challenges, and imperatives for IT leaders to reevaluate and improve their mobile device management processes.

Mobile devices: High-cost risk and the need for governance

While significant attention has been paid to the rise of mobile work, less has been paid to the ability to govern a mobile workforce that can become unwieldy. A high-profile example is the fintech industry – built on modern technologies with high reliance on smartphones to access financial records. In September 2022, the U.S. Securities and Exchange Commission (SEC) imposed over $1 billion in fines on 16 fintech firms for violating recordkeeping requirements related to federal securities laws. Concurrently, the Commodity Futures Trading Commission (CFTC) also enforced $710 million in penalties for “failing to diligently supervise matters related to their businesses.” At issue was how employees were using personal devices and unauthorized messaging apps for business matters and the inability to keep proper records to meet industry compliance.

Fintech companies and all types of businesses are reconsidering mobile device strategies to achieve higher levels of regulatory compliance and new Zero Trust precedents for security.

Most are modifying mobile management strategies

According to a recent report, 81% of companies plan to modify their mobile device ownership strategies to meet evolving business requirements for greater security and return on investment (ROI). While the adoption of bring your own device (BYOD) strategies grew to meet hybrid work demands during the pandemic, more than half of respondents cited security (53%) and data breaches (50%) among their biggest concerns with BYOD approaches.

The report states, “The security concerns are worth reiterating because, at organizations with a BYOD policy in place, two-thirds (65%) of the devices used to access company information are likely to be personally owned. This demonstrates how intertwined BYOD is with employee workflow. Even in the financial services sector, known for strict vetting and compliance procedures, over half (58%) of the mobile devices utilized in this capacity are personally owned. While it is possible that further restrictions control access to confidential information, even seemingly harmless data can be exploited by cybercriminals more easily in this manner, thus highlighting a challenging predicament for organizations to navigate.”

Gaining visibility and control over your mobile fleet

When introducing mobile governance, it helps to address both mobile devices and cloud applications together, as the two are tightly intertwined. First take stock of your mobile devices, the ownership of each, and all applications in use. An accurate inventory is the primary step in gaining visibility and control for both recordkeeping compliance and security purposes. 

Glean intelligence from an accurate inventory: IT expense management platforms can identify all assets in the corporate fleet as well as all cloud applications (sanctioned and unsanctioned) in the IT environment. This will serve as a launchpad for policy decision-making and Shadow IT discovery processes that can reveal both monitored and unmonitored communication channels needing tighter control and necessary recordkeeping. Usage audits and application security intelligence can also be helpful in knowing not just what you have but also how information is flowing and the risk of current usage.

Simplify compliance using technology: Can’t see into your devices? Consider Mobile Device Management software, or Unified Endpoint Management tools to insert more control over mobile devices and their applications. These technologies make it easier to manage policies, security, and other aspects of both corporate-owned and employee-used mobile devices of all types. Businesses use this software to authorize and issue devices, track their use, monitor communications, enforce security policies, secure lost or stolen devices, and ensure compliance. In the case of BYOD, they also help partition personal applications from corporate ones.

Question your operating system: Whether you’re moving from a BYOD approach to a corporate-owned approach or tightening your existing policy, question whether standardizing your mobile device operational platforms will help ease the burdens of compliance. In response to the recent SEC news, for example, some financial firms are moving all mobile phones to one platform and one provider.  

Consistency is key: Compliance often slips through the cracks at key junctures in the mobile device lifecycle. This is particularly the case as employees enter and exit the firm or when newly purchased devices are set up or activated for service. As such, the key to consistent compliance is a disciplined approach across the full device lifecycle. 

The confidence of mobile compliance

It’s easy to feel overwhelmed by the vast responsibilities of mobile compliance, but take comfort in the fact that most CIOs describe themselves as being in a “governance phase” in 2023. That’s no surprise, given that remote work and accelerated digital transformation have gone largely unconstrained over the past three years. With the threat of fines looming, clear lines now need to be drawn to keep all work-related conversations on corporate networks, where communications are accessible and can be captured and managed.

Drawing those lines is a step-by-step process that starts with evaluating your current approach, understanding what assets are in use, and seeing where your fleet is falling short of security requirements and industry regulations. Don’t be afraid to make drastic shifts in your strategy, establishing all-new mobile usage policies. This is far better than finding out the hard way, paying millions in fines to the federal government or to bad actors after a ransomware attack.



Most IT leaders have moved assets to the cloud to achieve some combination of better, faster, or cheaper compute and storage services. They also expect to benefit from the expertise of cloud providers—expertise that isn’t easy for companies to develop and maintain in house, unless your company happens to be a technology provider.

“While computing power and hardware costs are lower on the cloud, your approach may not allow you to enjoy these savings,” explains Neal Sample, consultant and former CIO of Northwestern Mutual. “For example, if you move the front end of an application to the cloud, but leave the back end in your data center, then all of a sudden you’re paying for two sets of infrastructure.”

Another common reason companies are disappointed is they put information assets on the cloud in a “lift and shift” operation so applications never benefit from the advantages of cloud, such as elasticity. “A good elastic app doesn’t happen magically,” says Sample. “It needs to be written native for AWS or for another platform.”

The dilemma is you never really benefit from going to the cloud until you start using native functions. And even then, you can get trapped—not just to the cloud, but to a single cloud vendor. “There are a lot of differences between an AWS, for example, and an Azure,” Sample adds. “Using the native functions of one versus the other can lock you in. However, you won’t benefit from what the cloud has to offer until you re-architect your application for the cloud—and that means using native functions.”

A third reason companies are disappointed is because of a lack of control over their information systems. This is particularly pronounced in heavily regulated industries, such as financial services and healthcare, where companies can be held liable for non-compliance—nobody wants to trust a third party to keep them from legal troubles. Similarly, large data aggregators feel the need for control because they don’t want to leave their core business in the hands of a cloud provider.

Overall, disappointment comes from poor planning most of the time. Gartner has been offering advice in this area—most recently in The Cloud Strategy Cookbook, 2023—which can be summed up as: develop a cloud strategy, ideally before moving to the cloud; regularly update the strategy, keeping a record in a living document; and align your cloud strategy with desired business outcomes. Many companies that ignored this advice failed to reap the benefits of the cloud. As a result, some have decided to repatriate information assets, and too many of them do so with equally poor planning.

Repatriating is not for the faint hearted

Migrating back from the cloud is not an easy process—no matter what region you’re in. “Cloud repatriation is generally a last-ditch effort to optimize the cost structure of a business,” observes Sumit Malhotra, CIO of Time Internet in India. “But pulling off such a transition requires a deep technical understanding of the applications, skills in multiple technologies, and executive sponsorship of possible negative impact on user experience at the time of this transition. The journey is not for the faint hearted.”

It’s particularly difficult for smaller companies to repatriate, simply because, at their scale, the savings aren’t worth the effort. Why buy real estate and hardware and pay extra salaries only to save a small amount? By contrast, very large companies have the scale to repatriate. But do they want to?

“Do Visa, American Express or Goldman Sachs want to be in the IT hardware business?” asks Sample, rhetorically. “Do they want to try to take a modest gain by moving far outside their competency?”

Switching can also be complicated when the cost of change isn’t considered part of the calculation. A marginal run rate savings gained from pulling an application back on-prem may be offset by the cost of change, which includes disrupting the business and missing out on opportunities to do other things, such as upgrading the systems that help generate revenue.

A major transition may also cause down time—sometimes planned and other times unplanned. “A seamless cutover is rarely possible when you’re moving back to private infrastructure,” says Sample. “And that’s a really big concern in an era where 24/7 access is expected.”

Irrespective of the details, when a big name repatriates, word gets around. Dropbox made a splash when they migrated away from AWS storage service to their own custom-designed infrastructure starting in 2015. The company reported a cost of revenue savings of nearly $75 million starting in the first two years after the transition ($39.5 million from 2015 to 2016 and an additional $35.1 million in 2017).

More recently, in October 2022, web software company 37signals made news when its CTO and co-founder David Heinemeier Hansson wrote in a blog post that they’ll move their two main platforms—Basecamp (a project management platform) and HEY (a subscription-based email service)—off the cloud. However, they don’t intend to run their own data center, but rather work with a company that has carved out a niche providing a hybrid environment as a service.

“There are companies that specialize in this work,” Hansson says. “If your budget is of a size that this is appealing, that is, most likely, millions of dollars, you can afford to do this several times over with the savings you reap.”
Both Dropbox and 37signals have the motivation and capacity to make a switch since tech companies often rely more on compute and storage, and have a higher need for control and performance. They also have the expertise to pull off a reverse migration. Even though 37signals is working with Deft.com to repatriate, the move back from the cloud will require significant changes to the apps and data structures to get similar functionality in the new environment—the kinds of changes not every company has the skills to make.

For the Dropboxes and 37signals of the world, the move might make sense. But for non-tech companies, the equation is different. The cloud is getting more efficient and cheaper in ways their private data centers could never match. As cloud providers become better, faster, cheaper, and more ubiquitous, doubling down on a temporary cost advantage might cause these companies to miss out on future-proofing their applications.

Both tech and non-tech companies should be careful to avoid winding up with the worst of two worlds. This happens when they try to recreate cloud functions on-prem. “If you decide to repatriate, avoid the situation where engineering teams seek to imitate the public cloud environment when building on-prem counterparts,” says Malhotra.

The same kind of mistake in the opposite direction is often one of the reasons companies are disappointed in cloud services. This happens, for example, when a system that depends on an on-prem architecture, such as client server, is moved to the cloud without being redesigned. Applications written with an older, client-server architecture will wind up on the cloud with the processor in a different location than the database. The resulting latency could be unbearable.
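The latency penalty of splitting a client-server application across a hybrid boundary can be made concrete with back-of-the-envelope arithmetic. All the numbers below are illustrative assumptions (round-trip times and query counts vary widely in practice), not measurements from any particular deployment.

```python
# Back-of-the-envelope: why a chatty client-server app suffers when its
# front end moves to the cloud but its database stays on-prem.
# All figures are illustrative assumptions, not measurements.
LAN_RTT_MS = 0.5        # app server and database in the same data center
WAN_RTT_MS = 30.0       # app tier in the cloud, database on-prem
QUERIES_PER_PAGE = 200  # a chatty app issuing many small sequential queries

lan_total = LAN_RTT_MS * QUERIES_PER_PAGE / 1000  # seconds of pure network wait
wan_total = WAN_RTT_MS * QUERIES_PER_PAGE / 1000

print(f"same-site network overhead per page: {lan_total:.1f}s")  # 0.1s
print(f"hybrid network overhead per page:    {wan_total:.1f}s")  # 6.0s
```

Under these assumptions, the identical application goes from imperceptible network overhead to several seconds per page purely from round trips, which is why redesigning the data access pattern (batching, caching, or moving both tiers together) matters more than where either tier runs.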

A hybrid enterprise is often worse than either one that’s strictly cloud or strictly on-premises. “In a hybrid environment, web pages take longer to load, applications aren’t as snappy for clients, and batch processes take longer to run as they move data in and out of the enterprise,” says Sample. “If you haven’t redone your architecture, you may find that a hybrid environment is actually worse from a performance perspective.”

Two knee-jerk reactions don’t add up to good planning

“I think cloud repatriation will continue to happen, but it will be more like a ripple than an ocean wave,” predicts Sample. “Companies will continue to move workloads to the cloud without being ready to do so. Then they’ll be faced with the motivation to pull back.”

Over time, clouds will become easier to use. They’re already becoming more flexible, and cloud portability is more practical. And as cloud technology improves, repatriation will become even less attractive than it is today.

“The motivation that would turn this into a tsunami just isn’t there,” says Sample. “I’m sure repatriation will continue to happen, but only in spots. And all too often, it’ll be the result of poor planning.”


Prior to joining research firm Gartner in 2008, Irving Tyler was a CIO at IMS Health, and VP and CIO at Quaker Chemical Corporation.

In the late 1990s, he was challenged to address the ‘year 2000’ problem, or Y2K scare, as computer systems were readied for the new millennium, and he saw his skillsets develop in the areas of data centre management and ERP implementation.

That role, he says, is now long gone.

“The role of the CIO has expanded to be a business leader, visionary and architect, someone who can work with other executives effectively, not as a supplier and vendor but as a leader,” said Tyler, leader of Gartner’s CIO research team, at the company’s recent Symposium in Barcelona.

CIO ‘plus’ roles to lead business transformation initiatives

A Gartner study of technology leaders at Global Fortune 500 companies found that approximately 26% still had fundamental ‘run the business’ IT roles overseeing applications and infrastructure, while 30% had ‘plussed’ their role into business responsibilities, from back-office functions to front-facing engineering, product management and research development.

Approximately 44% of the surveyed CIOs and CTOs were now leading business transformation initiatives.

“These were initiatives to change the very core of their enterprises: how they go to market, how they develop goods and services, and how they optimise their supply chain,” said Tyler. “So the role is expanding; the value proposition is changing.”

When asked what kind of work CIOs and CTOs had done beyond typical technology responsibilities, Gartner’s research found 80% of Global Fortune 500 technology leaders were leading business initiatives, with 39% accountable for landing the change in areas such as monetizing data to create new revenue, supply chain optimization, talent strategies, and creating new digital products.

“These are new value propositions,” said Tyler. “I never would’ve imagined when I became a CIO that someday I’d be expected to lead these kinds of efforts.”

How CIOs find their value proposition

Despite the growing breadth of the CIO’s role, Tyler believes that technology executives can go further still, extending their value and influence within the organization by thinking of themselves less as a service provider and more as a ‘powerful, valuable product’ that senior executives, partners, and peers need to do their jobs effectively.

“Product value proposition in business terms is something we develop when we’re trying to create the next generation of goods and services for our customers or citizens—any stakeholder we’re working with,” he said, giving the example that streamlining money could be the value proposition for a start-up financial services firm.

“Your leadership is a product that all of your executive team, partners, peers, and all of the people in your organization need,” he said.

Tyler also suggested that CIOs must build their own product value proposition to deliver maximum value to the business and make a promise to stakeholders about how technology will help them achieve their desired outcomes. Technology leaders can start with simple steps: understand who consumes IT (most notably the executive board, functional leaders, and technologists in and out of the technology team), then deeply understand their jobs and how IT can remove pains and create gains.

“This is what we call value-fit,” he said.

By speaking with these individuals, asking them questions, and building a profile of where they are and where they want to get to, CIOs can move beyond the confines of their role and expand their capabilities beyond what they imagined was possible.

Tyler gave the example of working with the CMO, who may be focused on providing better customer experiences through ecommerce, data platforms, content management and utilising AI to personalise and optimise customer journeys. Further inquiry, however, found that the ability to do so was constrained by a lack of market standards on customer data platforms, a lack of technical know-how, and an uncertainty of how to assess technology suppliers—all of which the CIO could help with.

Tyler suggested three steps for CIOs to build their own product value proposition.

Step 1: Recognize and define each segment (editor’s note: marketers would refer to this exercise as developing personas)

Step 2: Survey each of these individuals, asking tough questions to understand their jobs, pains and gains

Step 3: Map your offerings to match, exploring differing levels of value (from the here and now, to where they want to go)

Why CIOs should act as hostage negotiators

Tyler also advocated for a radically different approach to winning hearts and minds from the boardroom down.

He said IT leaders need to look at building relationships similar to hostage negotiators by understanding whom they work with, building trust and credibility, assessing the level of risk to drive business value, and working together to come to a shared understanding.

This is particularly key, he says, for a CIO who only has accountability for IT, and thus needs to partner internally to drive change.

“You have to learn to negotiate your role to deliver these incredible transformational things that your leaders are trying to do,” said Tyler, adding that key business projects in finance, HR and supply chain are not under the ownership of the CIO.

Citing Lewicki and Hiam’s negotiation matrix, Tyler said there are five strategies to collaborate along the ‘importance of relationship’ and ‘importance of outcome’ twin axes. Four are suboptimal with most offering no value or resulting in the individual accommodating another for the sake of maintaining the relationship. Compromise doesn’t work either, he says, because neither party gets what they’re looking for.

“The only real strategy is collaboration,” he says. “Build something more powerful, more valuable so together, you both win. But you need techniques. Hostage negotiators have this brilliant set of tactics. They talk about building bridges, bringing two parties together, connecting them to accomplish something that is best for both parties.”

Tyler believes this starts with building empathy, trust and changing minds to new ways of thinking.

“[Hostage negotiator] Chris Voss says that negotiation is not an act of battle,” says Tyler. “You have to look at it as a process of discovery, to spend the majority of your negotiation time learning, exploring, finding out what’s going on. The information you get gives you power.”

To do this, according to Tyler, CIOs must understand the value system of the individual or team they’re working with to identify their jobs, challenges and opportunities, as well as where the shared value between them lies. Listening is essential, too, but just as critical is being respectful (which is not the same as agreeing), likeable and credible—being true to your word can make or break the relationship.

Reciprocity can also build bridges, with Tyler noting not only that criminals are more likely to share information if they’ve been treated well, but also that research shows the most successful hostage negotiations are those where the negotiator has built an emotional connection with the hostage taker.

This is called empathy mapping, another tactic used in product development, and Tyler said it can ultimately result in the two parties coming together on a shared vision, objective, and set of commitments, as well as shared risk for both parties.

But it’s not as straightforward as it sounds, as Tyler gave a personal example of when he got it wrong earlier in his career. Asked by a marketing director to roll out a global CRM system in 90 days across 1,700 associates and 150 countries, he flatly refused and called his colleague crazy. “He didn’t appreciate my position because he was in a quarter and had to get this done,” Tyler said. “What I should have done is think empathy, start to learn, explore and understand his feelings and his vision.”


Lately, gazing up at a clear night sky and identifying different star constellations (these days with the support of a mobile app, of course!), I was vividly reminded that everything is related to and interconnected with everything else. Stars, together with planets and asteroids, form the solar system we live in, which belongs to a galaxy, which in turn forms part of the universe as we know it today.

Although some still perceive it as straightforward, the business world is dynamic, interconnected, ambiguous, and unpredictable. This interconnected constellation shapes a broad range of endeavours, from strategy development, buying decisions, digital innovation, and transformation realization to system modernization itself.

To see the dynamics between different components, “systems thinking” can help. Once your organization thinks in systems, it can better understand root challenges, implications from one component to another, and even innovate more with effective disruption to gain new revenues, reduce costs, or mitigate risks more effectively.

The definition of systems thinking

What is systems thinking? First, let me outline what a system is before illustrating how your organization can propel its transformation towards a digital- and sustainable-first enterprise with systems thinking. A system is a set or structure of things, activities, ideas, and information that interrelate and interact with each other. Systems consequently alter other systems, because every part itself forms a (sub)system consisting of further parts. Even businesses and humans themselves are systems! With that foundation laid, let’s move beyond academia and make this more pragmatic:

Look at the transformation as a system and simplify

Digital transformation and sustainable transformation, or any other considerable change in an organization as response to evolving market and customer needs, presents a system: There is continuous effort of diverse stakeholders with initiatives, activities, investments, and ideas that leverage digital concepts and technologies to achieve desired outcomes, such as increased operational efficiency or faster innovation with new digital products.

In this structure, different stakeholders and teams drive distinct agendas as part of their contribution to the gearing of the overall transformation engine, spanning experience, intelligence, platform, and other agendas. Commonly, these agendas target distinct objectives such as productivity, agility, or efficiency, and they are interrelated.

Acknowledging this, simplification is imperative to make these interrelations and activities visible, and to articulate and communicate the complex system that is the organization’s current journey. A model like the HPE Digital Journey Map offers a simplified representation of the digital transformation system, intended to promote understanding of the real system and to seek answers to specific questions about it.


Embed systems thinking in your ambition & strategy

In an era in which computing and connectivity are ubiquitous, servitization becomes increasingly relevant as a guiding principle that keeps the transformation journey on track. The capabilities of ever more smart, connected, and service-enriched products are evolving significantly (the forecast for the end of 2022 is around 29 billion connected devices), and their traditional industry boundaries blur and shift.

A famous example given by the renowned economist Michael Porter depicts a tractor company that evolves from smart, connected tractors into farm equipment offerings and eventually into farm management systems. Spotted the keyword? The evolution occurs seemingly naturally, from discrete products and their intelligent enhancements into so-called product systems; in the tractor example, closely related products and adjacent services are integrated. Eventually, multiple product systems can be combined and triangulated with further external data, e.g. soil or weather data, into powerful systems of systems: entire farm management systems.


Hence, ingraining systems thinking into your organization’s ambition, and consequently into the transformation strategy, will put your organization in a leading position to redefine market boundaries and drive disproportionately positive value for your customers, ecosystem, and certainly your own business. Embedding systems theory at the core of your game plan influences environmental factors, competitive advantages through differentiation, and co-creation and co-production components. Beyond new offerings, this also holds for purchase decisions: rather than being made in a vacuum, buying decisions are related to and dependent on other (business) needs.

Recognize different systems in modernizing effectively

From understanding the phenomenon of transformation to the strategic perspective of an organization’s ambition and its plan of action, let’s cascade further into the actual application of digital technologies and IT modernization. For a CIO or CTO principally responsible for the platform agenda, a core driver is modernizing the IT landscape, including platforms and applications, for increased agility and optimized costs in responding to the business. In particular, the organization’s use of its applications will expose different paces and requirements for the various modernization options, including re-platforming, re-hosting, re-engineering, and other modernization outcomes.


The varying rates of change and adoption, and the implications for governance, operations, and data within application landscapes, can be distinguished between systems of record, systems of differentiation, and systems of innovation (a notion Gartner coined as the PACE-Layered Application Strategy) according to their primary purpose. These layers reflect the characteristics of different software modules in terms of their use and data lifecycle, from new business models (innovation) to best of breed (differentiation) to core transaction processing (record), recognizing the interrelations with their users, the information flows, and funding aspects. Reserving further depth for a different article, the essential takeaway is that this approach can help you navigate data-first modernization more effectively by incorporating the concept of systems.
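As a loose illustration (not taken from Gartner’s material, and with all names and cadence values hypothetical), the three layers can be modeled as a simple classification that drives different governance cadences per application:

```python
from dataclasses import dataclass
from enum import Enum

class PaceLayer(Enum):
    """The three PACE layers, ordered from slowest- to fastest-changing."""
    RECORD = "system of record"                    # core transaction processing
    DIFFERENTIATION = "system of differentiation"  # best-of-breed capabilities
    INNOVATION = "system of innovation"            # new business models, experiments

# Hypothetical governance cadence per layer (release cycle in months);
# real values would come from your own application portfolio review.
RELEASE_CYCLE_MONTHS = {
    PaceLayer.RECORD: 12,
    PaceLayer.DIFFERENTIATION: 6,
    PaceLayer.INNOVATION: 1,
}

@dataclass
class Application:
    name: str
    layer: PaceLayer

def release_cycle(app: Application) -> int:
    """Look up how often an application in this layer ships changes."""
    return RELEASE_CYCLE_MONTHS[app.layer]

erp = Application("core ERP", PaceLayer.RECORD)
pricing = Application("dynamic pricing experiment", PaceLayer.INNOVATION)
```

The point of the sketch is only that classifying each application by layer makes the differing change rates explicit, so modernization choices (re-platform, re-host, re-engineer) can be matched to the layer rather than applied uniformly.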

Leveraging deep technological and methodical expertise as well as the HPE Digital Journey Map, Digital Advisors from HPE can help you explore the system of transformations in the digital era, with new value propositions, leading use cases, and successful modernization patterns to propel your efforts and activities. Reach out to an advisor like me at digitaladvisors@hpe.com to start our conversation today.


About Ian Jagger

Jagger is a content creator and narrator focused on digital transformation, linking technology capabilities expertise with business goals. He holds an MBA in marketing and is a Chartered Marketer. Today, he focuses on the digital transformation narrative globally for HPE’s Advisory and Transformation Practice. His experience spans strategic development and planning for start-ups through to content creation, thought leadership, AR/PR, campaign program building, and implementation for enterprises. Successful solution launches include HPE Digital Next Advisory, HPE Right Mix Advisor, and HPE Micro Datacenter.


This article was co-authored by Duke Dyksterhouse, an Associate at Metis Strategy

Data & Analytics is delivering on its promise. Every day, it helps countless organizations do everything from measuring their ESG impact to creating new streams of revenue, and consequently, companies without strong data cultures or concrete plans to build one are feeling the pressure. Some are our clients, and more of them are asking for our help with their data strategy.

Often their ask is a thinly veiled admission of overwhelm. They struggle to even articulate their objective, or don’t know where to start. The variables seem endless: data—security, science, storage, mining, management, definition, deletion, integration, accessibility, architecture, collection, governance, and the ever-elusive, data culture. But for all that technical complexity, their overwhelm is more often a symptom of mindset. They think that when carving out their first formal data strategy, they must have all the answers up front—that all the relevant people, processes, and technologies must be lined up neatly, like dominos. 

We discourage that thinking. Mobilizing data is more like getting a flywheel spinning: it takes tremendous effort to get the wheel moving, but its momentum is largely self-sustaining; and thus, as you incrementally apply force, the wheel spins faster and faster, until fingertip touches are enough to sustain a blistering velocity. As the wheel builds to that speed, the people, processes, and technologies needed to support it make themselves apparent. 

In this article, we offer four things you can do to get your flywheel spinning faster, and examine each through the story of Alina Parast, Chief Information Officer of ChampionX, and how she is helping transform the company (which delivers solutions to the upstream and midstream oil and gas industry) into a data-driven powerhouse. 

Step 1: Choose the right problem 

When ChampionX went public, its cross-functional team (which included supply chain, digital/IT, and commercial experts) avoided or at least tempered any grandiose, buzzword-filled declarations about “transformations” and “data-driven cultures” in favor of real-world problem solving. But also, it didn’t choose just any problem: it chose the right problem—which is the first and most crucial step to getting your flywheel spinning. 

At the time, one of ChampionX’s costliest activities in its Chemical Technologies business was monitoring and maintaining customer sites, many of which were in remote parts of the country. “It was more than just labor and fuel,” Alina explained. “We had to spend a lot on maintaining vehicles capable of navigating the routes to those sites, and on figuring out what, exactly, those routes were. There were, and still are, no Google maps for where our field technicians need to go.” Those costs were the price of “keeping customers’ tanks full, not dry”– one of ChampionX’s guiding principles and the core of its value proposition to improve the lives of its customers. “And so, we wondered, ‘how can we serve that end?’” 

  The problem the team chose to solve—lowering the cost of site trips—might appear mundane, but it had all the right ingredients to get the flywheel moving. First, the problem was urgent, as it was among ChampionX’s most significant expenses. Second, the problem was simple (even if its solution was not). It was easy to explain: It costs us a lot to trek to these sites. How can we lower that cost? Third, it was tangible. It concerned real world objects—trucks, wells, equipment, and other things people could see, hear, or feel. Equally important, the team could point to the specific financial line items their efforts would move. Finally, the problem was shared by the enterprise at large. As part of the cross-functional leadership team, Alina didn’t limit herself to solving what were ostensibly CIO-related problems. She understood: if it was a problem she and her organization could help solve, then it was a CIO-related problem. 

IT executives talk often of people, processes, and technology as the cornerstones of IT strategy, but they sometimes forget to heed the nucleus of all strategy: solving real business problems. When you’re getting started, set aside your concerns about who you will hire, what tools you will use, and how your people will work together—those things will make themselves apparent in time. First get your leaders in a room. Forego the slides, the spreadsheets, and the roadmaps. Instead, ask, with all sincerity: What problem are we trying to solve? The answer will not come as easily as you expect, but the conversation will be invaluable. 

Step 2: Capture the right data 

Once you’ve identified a problem worthy of solving, the next step is to capture the data you need to solve it. If you’ve defined your problem well, you’ll know what that data is, which is key. Just as defining your problem narrows the variety of data you might capture, figuring out what data you need, where to get it, and how to manage it will narrow the vast catalog of people, processes, and technologies that could compose your data environment. 

Consider how this played out for Alina and ChampionX. Once the team knew the problem—site visits were costly—they quickly identified the logical solution: reduce the number of required site visits. Most visits were routine, rather than in response to an active problem, so if ChampionX could glean what was happening at a site remotely, they could save considerable time, fuel, and money. That insight told them what data they would need, which in turn allowed ChampionX’s IT and Commercial Digital teams to discern who and what they needed to capture it. They needed IoT sensors, for example, to extract relevant data from the sites. And they needed a place to store that data—they lacked infrastructure that could manage both the terabytes pouring off the sensors and the accompanying customer data (which resided within enterprise platforms such as ERP, transportation, and supply & demand planning). So, they built a data lake.
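ChampionX’s actual stack isn’t described in detail here, so purely as an illustrative sketch (every field name below is hypothetical), the core of such a pipeline is a small transformation that flattens a raw sensor payload into a record destined for the data lake:

```python
import json
from datetime import datetime, timezone

def to_lake_record(site_id: str, sensor_payload: dict) -> str:
    """Flatten a raw sensor payload into a JSON line for the data lake.

    A real pipeline would also batch, validate, and land these records
    in object storage; this sketch only shows the shape of the
    transformation. Field names are illustrative, not ChampionX's.
    """
    record = {
        "site_id": site_id,
        "ingested_at": datetime.now(timezone.utc).isoformat(),
        "tank_level_pct": sensor_payload.get("level"),
        "pump_status": sensor_payload.get("pump", "unknown"),
    }
    return json.dumps(record)

# One reading from a hypothetical remote site:
line = to_lake_record("site-042", {"level": 71.5, "pump": "on"})
```

Storing readings as self-describing records like this is what later lets other teams join them against customer and order data without bespoke extraction work.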

Each of these initiatives (standing up secure cloud infrastructure, designing the data lake, deploying the sensors and storage, delivering the necessary training) was a major undertaking and continues to evolve. But the ChampionX team not only solved the site-visit problem; they laid a foundation for the company’s data environment and the data-driven initiatives that would follow. The data lake, for example, came to serve as a home for an ever-growing volume and variety of data from ChampionX’s other business units, which in turn led to some valuable insights (more on that in the next section).

Knowing what data to capture provides the context you need to start selecting people, tools, and processes. Whichever you select, they will lend themselves to unpredictable ends, so it’s a taxing and fruitless exercise to try and map every way in which one component of your data environment will tie to all others— and from that, to choose a toolkit. Instead, figure out what you need for the problem—and the data—in front of you. Because you’ll be making selections in relation to something real and important in your organization, odds are, your selections will end up serving something else real and important. But in this case, you’ll be able to specify the names, costs, and sequencing of the things you need—details that will make your data strategy real and get your flywheel spinning faster. 

Step 3: Connect dots that once seemed disparate 

As you begin to capture data and your flywheel spins faster, new opportunities and data will reveal themselves. It wasn’t long after ChampionX’s team had installed the IoT sensors to remotely monitor customer sites that they realized the same data could be applied elsewhere. ChampionX now had a wealth of topographical data that no one else did, and it used this data to move both the top and the bottom lines. It moved the bottom line by optimizing the routes that ChampionX’s vehicles took to sites—solving the no-Google-Maps-where-we’re-going problem—and it moved the top by monetizing the data as a new revenue stream. 

The data lake, too, took on new purpose. Other business initiatives began parking their data in it, which prompted cross-functional teams to contemplate the various kinds of information swirling around together and how they might amount to more than the sum of their parts. One type was customer, order, and supply chain data, which ChampionX was regularly required to pull and merge with site data to perform impact analyses: reports of which customers were affected by a disruption in supply chain networks, and how. Merging those datasets used to take weeks, largely because they had always lived in different ecosystems. Now, the same analyses took only hours.
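The weeks-to-hours speed-up came from co-locating the datasets: once order data and site data live in the same store, the impact-analysis join itself is routine. A toy version (all tables and columns hypothetical, not ChampionX’s actual schema):

```python
# Hypothetical extracts: customer orders and site telemetry, now
# co-located in one data lake instead of two separate ecosystems.
orders = [
    {"site_id": "A", "customer": "Acme", "open_orders": 3},
    {"site_id": "B", "customer": "Brent Oil", "open_orders": 1},
    {"site_id": "C", "customer": "Cor Energy", "open_orders": 4},
]
disrupted_sites = {"A", "C"}  # sites hit by a supply chain disruption

# Impact analysis: which customers are affected, and how many of
# their open orders are at risk?
affected = [
    (row["customer"], row["open_orders"])
    for row in orders
    if row["site_id"] in disrupted_sites
]
# affected == [("Acme", 3), ("Cor Energy", 4)]
```

When the data lived in separate ecosystems, the slow part was not this join but extracting and reconciling the two sides; co-location removes that step entirely.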

There are two takeaways here. The first is that it’s okay if your data flywheel spins slowly at the start—just get it going. Attracting even a few new opportunities or types of data will afford you the chance to draw connections between things that once seemed disparate. That pattern recognition will speed up your flywheel at an exponential rate and encourage an appropriately complex data environment to take shape around it. 

The second takeaway is similar to those of the first two steps: Choose wisely among the opportunities you could pursue. Not every insight that is interesting is useful; pursue the ones that are most valuable and real, the ones people can see, measure, and feel. These will overlap significantly with tedious and banal, recurring organizational activities (like pulling together impact reports). If you can solve these problems, you will prove the viability of data as a force for change in your organization, and a richer data culture will begin to emerge, pushing the flywheel to an intimidating pace. 

Step 4: Build outward from your original problem 

The story of ChampionX that we’ve examined is only one chapter of a much larger tale. As the company has collected more data and its people gleaned new insights, the problems that Alina and her business partners take on have grown in scope and complexity, and ChampionX’s flywheel has reached a speed capable of powering data-first problem-solving across the company’s entire supply chain. 

Yet, most of the problems in some way trace back to the simple question of how the company might spend less on site-checks. ChampionX’s team has not hopped willy-nilly from problems that concern the supply chain to those that concern Marketing, or HR, or Finance; the team is expanding outward in logical progression from their original problem. And because they have, their people, processes, and technologies, in terms of maturity, are only ever a stone’s throw from being able to tackle the next challenge—which is always built on the one before it. 

As your flywheel spins faster, you will have more problems to choose among. Prioritize those that are not only feasible and valuable but also thematically consistent with the problems you’ve already solved. That way, you’ll be able to leverage the momentum you’ve built. Your data environment will already include many of the people and tools you need for the job. You won’t feel as if you’re starting anew or have to argue a from-scratch case to your stakeholders. 

Building a data strategy is like spinning a flywheel. It’s cyclical, iterative, gradual, perpetual. There is no special line that, if crossed, will deem your organization “data-driven.” And likewise, there is no use in thinking of your data strategy as something binary, as if it were a building under construction that will one day be complete. The best thing you can do is focus on using your data to solve problems that are urgent, simple, tangible, and valuable. Assemble the people, processes, and technologies you need to tackle those problems. Then, move onto the next, and then the next, and then the next, allowing the elements of a vibrant data ecosystem to emerge along the way. You cannot will your data strategy into existence; you can only draw it in, by focusing on the flywheel. And when it appears, you, and everyone else, will know it. 


We live in a highly connected world. Technology has broken down many barriers to trade.  Every aspect of retail has been disrupted, from the way shoppers research purchases to the methods they use to pay. However, despite the powerful forces of globalization, significant local differences exist. 

In some countries, the use of mobile phones is now an essential part of the physical shopping experience, while in other territories there’s a more obvious distinction between online and offline shopping. 

Payment innovations like buy-now-pay-later (BNPL) are popular in parts of the world but have yet to gain traction everywhere. And some countries are far more comfortable buying items like groceries online than others.

So, while it’s possible to sketch global trends, an understanding of local markets is vital if merchants are to create services, payment options, and communication strategies that will really resonate. The old saying ‘retail is detail’ is as relevant now as ever.


Every year, we publish reports about shopping trends around the world. This year’s reports, the Global Digital Shopping Index series, looked at six different markets: the UAE, Brazil, the USA, the UK, Australia, and Mexico. Here are some of the key local differences we’ve uncovered this year: 

Ringing the changes
Many of us got used to shopping online during the pandemic – and now that people are returning to stores, they’re using their phones to help them shop.

The overall use of mobile phones to enhance the in-store shopping experience is up 19% since 2020, but as the chart below illustrates, the use of phones varies considerably in different markets.


% of in-store shoppers who used mobile devices to assist with their shopping experiences *

Flexing up
One of the big developments in payments over the last few years has been the rise of flexible BNPL platforms. BNPL is used by the majority of consumers in Brazil but only a third of shoppers in the UK.


Overall % of shoppers who use BNPL, by country *

Comparing the data on Brazil to the numbers in the UAE reveals some stark differences: over half of older consumers in Brazil use BNPL, whereas in the UAE it’s only around 1 in 20.

But while Brazil shows across-the-board adoption of BNPL, the biggest adopters are young Australians. They’re more than three times more likely to use flexible payment platforms than the oldest generation of consumers.


Share of consumers in selected markets who’ve used BNPL in the last year, by generation *

Food for thought
When the pandemic hit, much of modern life switched online – including activities like buying groceries. Today, just over 40% of consumers say they’re likely to order their groceries online. But that figure masks some big regional differences, with Brazilian shoppers half as likely to do so as consumers in the UAE.


Consumers who are “very” or “extremely likely” to buy groceries using a “digital-first” approach *

While on average people are less likely to buy groceries online than non-perishables like clothes or electronics, that’s not the case everywhere. Indeed, consumers in the UAE are far more likely to buy their groceries online than anything else.


Most likely categories to be bought using a “digital-first” approach, by country *

One size does not fit all
Digitalization, flexible payments, and the use of mobile phones as part of the shopping experience are all factors no retailer can ignore. But, as these stats show, merchants need to dig beneath the headline trends if they really want to succeed in individual markets.

A little local knowledge really could go a long way.

For the full picture, explore the Global Digital Shopping Index series now.

*  All data comes from the Global Digital Shopping Index and supporting research
