Six out of ten organizations today are using a mix of infrastructures, including private cloud, public cloud, multi-cloud, on-premises, and hosted data centers, according to the 5th Annual Nutanix Enterprise Cloud Index. Managing applications and data, especially when they’re moving across these environments, is extremely challenging. Only 40% of IT decision-makers said that they have complete visibility into where their data resides, and 85% have issues managing cloud costs. Addressing these challenges will require simplification, so it’s no surprise that essentially everyone (94%) wants a single, unified place to manage data and applications in mixed environments.

In particular, there are three big challenges that rise to the top when it comes to managing data across multiple environments. The first is data protection.

“Because we can’t go faster than the speed of light, if you want to recover data, unless you already have the snapshots and copies where that recovered data is needed, it’ll take some time,” said Induprakas Keri, SVP of Engineering for Nutanix Cloud Infrastructure. “It’s much faster to spin up a backup where the data is rather than moving it, but that requires moving backups or snapshots ahead of time to where they will be spun up, and developers don’t want to think about things like that. IT needs an automated solution.”

Another huge problem is managing cost—so much so that 46% of organizations are thinking about repatriating cloud applications to on-premises, which would have been unthinkable just a few years ago.

“I’m familiar with a young company whose R&D spend was $18 million and the cloud spend was $23 million, with utilization of just 11%,” Keri said. “This wasn’t as much of a concern when money was free, but those days are over, and increasingly, organizations are looking to get their cloud spend under control.”

Cloud data management is complex, and without keeping an eye on it, costs can quickly get out of control.

The final big problem is moving workloads between infrastructures. It’s especially hard moving legacy applications to the cloud because of all the refactoring, and it’s easy for that effort to get far out of scope. Keri has experienced this issue firsthand many times in his career. 

“What we often see with customers at Nutanix is that the journey of moving applications to the cloud, especially legacy applications, is one that many had underestimated,” Keri said. “For example, while at Intuit as CISO, I was part of the team that moved TurboTax onto AWS, which took us several years to complete and involved several hundred developers.”

Nutanix provides a unified infrastructure layer that enables IT to seamlessly run applications on a single underlying platform, whether it’s on-premises, in the cloud, or even a hybrid environment. And data protection and security are integral parts of the platform, so IT doesn’t have to worry about whether data will be local for recovery or whether data is secure—the platform takes care of it.

“Whether you’re moving apps which need to be run on a platform or whether you’re building net-new applications, Nutanix provides an easy way to move them back and forth,” Keri said. “If you start with a legacy application on prem, we provide the tools to move it into the public cloud. If you want to start in the cloud with containerized apps and then want to move them on-prem or to another cloud service provider, we provide the tools to do that. Plus, our underlying platform offers data protection and security, so you don’t have to worry about mundane things like where your data needs to be. We can take the pain away from developers.”

For more information on how Nutanix can help your organization control costs, gain agility, and simplify management of apps and data across multiple environments, visit Nutanix.


Companies are capturing more data and deploying more compute capacity at the edge. At the same time, they are laying the groundwork for a distributed enterprise that can capitalize on a multiplier effect to maximize intended business outcomes.

The number of edge sites — factory floors, retail shops, hospitals, and countless other locations — is growing. This gives businesses more opportunity to gain insights and make better decisions across the distributed enterprise. Data follows the activities of customers, employees, patients, and processes. Pushing computing power to the distributed edge ensures that data can be analyzed in near real time, a model centralized cloud computing cannot match.

With centralized cloud computing, bandwidth constraints make it too slow to move large data sets and analyze them. This introduces unwanted decision latency, which, in turn, destroys the business value of the data. Edge computing addresses the need for immediate processing by leaving data where it is created and instead moving compute resources next to those data streams. This strategy enables real-time analysis of data as it is captured and eliminates decision delays. The next level of operational efficiency can now be realized with real-time decision-making and automation at the edge, where the activity takes place.

Industry experts are projecting that 50 billion devices will be connected worldwide this year, with the amount of data generated at the edge slated to increase by over 500% between 2019 and 2025, amounting to a whopping 175 zettabytes worldwide. Experts project a tipping point in 2025, when roughly half of all data will be generated and processed at the edge, overtaking the share handled by centralized cloud and data center computing.

The deluge of edge data opens opportunities for all kinds of actionable insights, whether correcting a factory floor glitch impacting product quality or serving up a product recommendation based on customers’ past buying behavior. On its own, such individual action can have genuine business impact. But when you multiply the possible effects across thousands of locations processing thousands of transactions, there is a huge opportunity to parlay insights into revenue growth, cost reduction, and even business risk mitigation.

“Compute and sensors are doing new things in real time that they couldn’t do before, which gives you new degrees of freedom in running businesses,” explains Denis Vilfort, director of Edge Marketing at HPE. “For every dollar increasing revenue or decreasing costs, you can multiply it by the number of times you’re taking that action at a factory or a retail store — you’re basically building a money-making machine … and improving operations.”

The multiplier effect at work

The rise of edge computing essentially replaces the conventional notion of one large, centralized data center with many much smaller data centers located everywhere, Vilfort says. “Today we can package compute power for the edge in less than 2% of the space the same firepower took up 25 years ago. So, you don’t want to centralize computing — that’s mainframe thinking,” he explains. “You want to democratize compute power and give everyone access to small — but powerful — distributed compute clusters. Compute needs to be where the data is: at the edge.”

Each location leverages its own insights and can share them with others. These small insights can optimize operation of one location. Spread across all sites, these seemingly small gains can add up quickly when new learnings are replicated and repeated. The following examples showcase the power of the multiplier effect in action:

Foxconn, a large global electronics manufacturer, moved from a cloud implementation to high-resolution cameras and artificial intelligence (AI) enabled at the edge for a quality assurance application. The shift reduced pass/fail time from 21 seconds down to one second; when this reduction is multiplied across a monthly production of thousands of servers, the company benefits from a 33% increase in unit capacity, representing millions more in revenue per month.

A supermarket chain tapped in-store AI and real-time video analytics to reduce shrinkage at self-checkout stations. That same edge-based application, implemented across hundreds of stores, prevents millions of dollars of theft per year.

Texmark, an oil refinery, was pouring more than $1 million a year into a manual inspection process, counting on workers to visually inspect 133 pumps and miles of pipeline on a regular basis. Having switched to an intelligent edge compute model, including the installation of networked sensors throughout the refinery, Texmark is now able to catch potential problems before anyone is endangered, not to mention benefit from doubled output while cutting maintenance costs in half.

A big box retailer implemented an AI-based recommendation engine to help customers find what they need without having to rely on in-store experts. Automating that process increased revenue per store. Multiplied across its thousands of sites, the edge-enabled recommendation process has the potential to translate into revenue upside of more than $350 million for every 1% revenue increase per store. 

The HPE GreenLake Advantage

The HPE GreenLake platform brings an optimized operating model, consistent and secure data governance practices, and a cloudlike platform experience to edge environments — creating a robust foundation upon which to execute the multiplier effect across sites. For many organizations, the preponderance of data needs to remain at the edge, for a variety of reasons, including data gravity issues or because there’s a need for autonomy and resilience in case a weather event or a power outage threatens to shut down operations.

HPE GreenLake’s consumption-based as-a-service model ensures that organizations can more effectively manage costs with pay-per-use predictability, also providing access to buffer capacity to ensure ease of scalability. This means that organizations don’t have to foot the bill to build out costly IT infrastructure at each edge location but can, rather, contract for capabilities according to specific business needs. HPE also manages the day-to-day responsibilities associated with each environment, ensuring robust security and systems performance while creating opportunity for internal IT organizations to focus on higher-value activities.

As benefits of edge computing get multiplied across processes and locations, the advantages are clear. Suppose, for example, that an HPE GreenLake compute service costing $800 per location per month delivers an additional $2,000 per location per month in bottom-line profit. The net gain is $1,200 per location per month. Multiplied across 1,000 locations, that is an additional $1.2 million per month, or $14.4 million per year. Small positive changes across a distributed enterprise quickly multiply, and tangible results are now within reach.
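The arithmetic above reduces to a one-line calculation. Here is a minimal sketch using the article’s illustrative figures (the function name and numbers are ours for illustration, not actual HPE pricing):

```python
# Multiplier-effect arithmetic from the example above.
# Figures are the article's illustrative numbers, not actual HPE pricing.

def net_edge_profit(gain_per_site: int, service_cost_per_site: int,
                    sites: int, months: int = 12) -> tuple[int, int]:
    """Return (monthly, annual) aggregate net profit across all sites."""
    monthly = (gain_per_site - service_cost_per_site) * sites
    return monthly, monthly * months

monthly, annual = net_edge_profit(gain_per_site=2_000,
                                  service_cost_per_site=800,
                                  sites=1_000)
print(monthly)  # 1200000  ($1.2M per month)
print(annual)   # 14400000 ($14.4M per year)
```

The point of the sketch is the per-site net margin: any recurring per-location gain above the per-location service cost scales linearly with the number of sites.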

As companies build out their edge capabilities and sow the seeds to benefit from a multiplier effect, they should remember to:

Evaluate what decisions can benefit from being made and acted upon in real time as well as what data is critical to delivering on those insights so the edge environments can be built out accordingly

Consider scalability — how many sites could benefit from a similar setup and how hard it will be to deploy and operate those distributed environments

Identify the success factors that lead to revenue gains or cost reductions in a specific edge site and replicate that setup and those workflows at other sites

In the end, the multiplier effect is all about maximizing the potential of edge computing to achieve more efficient operations and maximize overall business success. “We’re in the middle of shifting from an older way of doing things to a new and exciting way of doing things,” Vilfort says. “At HPE we are helping customers find a better way to use distributed technology in their distributed sites to enable their distributed enterprise to run more efficiently.”

For more information, visit https://www.hpe.com/us/en/solutions/edge.html


Both customer and employee experience were transformed at speed by the introduction of cloud technologies, significantly affected by the pandemic, and are now seeing a remarkable shift toward new approaches to leadership.

In customer experience, the introduction of cloud-based solutions has accelerated automation and enhanced the accuracy of client success prediction. As a result, numerous essential concerns, such as data collection and reporting, decision-making, and data optimization, are being addressed. Tiffany Willcox, CTO at Marie Curie; Karen Bach; and Laura Dawson, Founder of Leaderly, will discuss the omnichannel customer experience, looking at the challenges of managing multiple touchpoints and building optimal architecture to support the expectations of the ‘modern customer’.

Tech leaders are focusing more on developing the right technologies to support and enhance customer experience. Challenges for teams include finding effective strategies for using customer data to fuel experience decisions and developing tech evaluation criteria for their organisation. Measuring investment and determining ROI play a big part in this, and proving the ROI of customer experience is essential for organisations to continue investment and growth. Mattias Goehler, CTO EMEA for Zendesk, will share how some of the key customer experience trends play their role and what to consider when looking at future investment. Leon Gauhman, Chief Product and Strategy Officer at Elsewhen, will share how data can improve productivity and support customers, highlighting strategies for using data and design to modernize operational models and transform organisations.

As customer experience continues to improve, support is needed for the workforce that drives it. By 2025, more than 50% of IT organizations will use digital employee experience to prioritize and measure digital initiative success, a significant increase from fewer than 5% in 2021 (Gartner). The keynote speaker is Bruce Daisley, former EMEA VP of Twitter and best-selling author on workplace culture. He will explore how to rethink hybrid work and investigate resilience in leadership.

Victoria Williams, CEO of Terptree, will speak with Arif Mohamed at CIO UK on how to create deaf employee and customer experiences. There are 12 million people who are deaf or have hearing loss in the UK, and sadly they are massively underserved by most businesses, retailers, and organisations. She will share insights into the steps organisations can take to open opportunities to deaf customers and employees.

Every business now needs to be digital, and every employee now depends on workplace technology to succeed. As a result, the employee digital experience has become a vital part of staff retention, team productivity, and ultimately business growth. The role of the CIO has evolved as tech plays a key role in collaboration and in supporting teams to optimise productivity. Marie Hill, CDIO at DB Cargo; Katie Nykanen, CTO at QA; and Sarah Cunningham, Senior VP Enterprise IT at Arm, sit with Dax Grant, CEO of Global Transform, to discuss the CIO’s changing role in employee engagement.

Distributed teams and hybrid work models are becoming standard. Efforts to engage and keep employees are at an all-time high, and external economic and geopolitical pressures affecting supply chains and partners are escalating the challenges of running a business. According to Gartner, 52% of employees say flexible work policies will affect their decision to stay at their organisation, which means staff turnover is likely to increase. Tony Healey, Group CTO at Ticketer; Megan Dooley, CDO at Openwork; and Bev White, CEO at Nash Squared, will take a closer look at hybrid work and how to connect hybrid employees to the organisation’s culture. Georgina Owens, CTO at Liberis, will host a discussion with Helen Wright, Group Head of IT at Amber River; Dax Grant, CEO at Global Transform; Rajat Dhawan, CTO at Soho House; and Emma Smith, Director of Transformation at the University of Bath, on what digital experience means for the business and the tools needed to achieve effective digital experiences across it.

The Closing Keynote is Amanda Brock, CEO of OpenUK. Amanda will share her work on open source data and technology, highlighting how it can optimise hybrid work and attract and retain talent. 

Held at the Nobu Hotel London Portman Square on Thursday 23rd March, the forum is free to attend for qualified attendees. The full programme and registration are available online.


“Land never deceives” is a common slogan among farmers across Africa. Many people go into farming full time, or as a side endeavor, with high confidence that they’ll make money and produce good for all. And when technology is added to the mix, opportunities multiply.

With the largest area of uncultivated arable land in the world, a young population—nearly 60% under 25—and a wealth of natural resources, sub-Saharan Africa has unparalleled advantages that could double or even triple its current agricultural productivity, according to the Status of Agriculture in 47 Sub-Saharan African Countries, a report the Food and Agriculture Organization (FAO) published jointly with the International Telecommunication Union (ITU) in March 2022.

Some African countries depend almost entirely on agriculture—Ethiopia, for example, with 80% of its economy based on it. Jermia Bayisa Lulu, CEO and co-founder of start-up Debo Engineering Agritech, has consolidated his knowledge and experience in computer networking, engineering, and Artificial Intelligence (AI) research to go all in on agritech to solve the problems that affect 85% of community life in his native Ethiopia.

“Our economy is based on agriculture and I believe it should be further supported by technology to increase agricultural productivity,” he says. “Plus, about 20.4 million people in Ethiopia are in need of food aid, which motivates us to solve the problem of agriculture to ensure the lives of millions of people. The same is true for most African countries that need to be supported by technological solutions.”

Like Bayisa Lulu, many believe that technology mixed with agriculture is essential to develop the agricultural sector and improve people’s lives, including Michael Hailu. He is the director of the ACP-EU Technical Centre for Agricultural and Rural Cooperation, which brings together 79 African, Caribbean and Pacific countries and European Union member states.

“In agriculture, digitization could be a game changer by boosting productivity, profitability and resilience to climate change,” says Hailu.

Last year’s Digitization of African Agriculture report, compiled by the ACP, details how 33 million small-scale farmers and pastoralists registered with Digital for Agriculture (D4Ag) solutions across the continent in 2019, a number expected to rise to 200 million by 2030.

“The stakes are so high it’s not surprising most African countries have made agricultural transformation a major focus of their national strategies,” he adds.

Diverse problems as solutions

On the ground, things are already changing with a multitude of start-ups solving a variety of agricultural problems with drone technology, precision agriculture and Internet of Things (IoT) solutions. The scope of technology in this sphere is vast and is an important driver of change.

Youth innovation in Ghana, for instance, continues to exceed expectations according to Kenneth Abdulai Nelson, co-founder and MD of Farm360 Global, a crowdfunding and consulting company dedicated to smart farming projects. He believes that the days when agriculture was not “sexy” to most young people are over, thanks to the technology revolution.

“With the keen interest in developing agriculture through technology, leading centers have initiated and supported training young entrepreneurs to challenge the status quo and develop innovative technological solutions to solve key problems in the agricultural sector,” he says.

He cited AI solutions that today improve crop quality, as well as precision agriculture through drones, robotics, hydroponics, and more.

“AI technology can detect plant diseases, pests and nutritional deficiencies on farms,” he says. “AI sensors can detect and target weeds and areas of poor nutrition and then decide which herbicide or fertilizer to apply in the area.”

Abdulai Nelson also has a personal interest in drones as a proven and effective way to improve agriculture. Indeed, in Ghana and other countries across the continent, drones are used for mapping, pesticide spraying, soil and data analysis, and farm monitoring to improve productivity while maximizing the use of labor.

He also appreciates the expensive but valuable irrigation technologies being used by some businesses on the continent.

“Anticipating the impacts of climate change emphasizes the need for irrigation technologies to ensure year-round production,” he says. “More than 50% of farmers rely heavily on seasonal rainfall, which continues to change dramatically.”

He also finds that IoT solutions stimulate productivity in the agrarian sector by effectively analyzing data, both historical and current, to inform well thought out activities. Applications are wide-ranging and include deep sensors to help predict rainfall and drought; soil sensors to determine fertilizer application areas; storage sensors to make sure products are stored at favorable temperatures; and input tracking and logistics to reduce post-harvest losses.

Walid Gaddas is a Tunisian consultant in strategy and international development in the agritech sector. He manages STECIA International, a consulting firm that works with partners around the world, and through several agritech projects in North Africa and sub-Saharan Africa he has observed great potential.

“More countries are aware that agritech is not the agriculture of tomorrow but of today,” he says. “In countries such as Ivory Coast in West Africa, the government has already put in place all the strategies to digitize agriculture. Many activities are being carried out to digitalize the cocoa and rubber sectors.”

Strength of creativity

One of the crucial issues that agriculture in Africa is currently solving, according to Gaddas, is a lack of water. He says that in Senegal, Tunisia and many other countries, companies are working hard on intelligent irrigation, and on how to optimize water resources that are becoming increasingly scarce, especially in the context of climate change and unpredictable rainfall.

“Managing water is becoming crucial,” he says. “We’ve met start-ups that use drones, which, through their precision devices, help to collect data that can be used by farmers, such as nitrogen levels in the fields and precise mapping of areas with fertiliser deficits, and others that solve plant disease problems by making diagnoses. There are also ERP systems for farm management and to know what is happening in real time—the management of inputs, fertilizers and more.”

He also appreciates the digital aquaculture companies that allow for very rational management of aquaculture farms, and he praises the impressive diversity of solutions.

“The diversity of problems that farmers face in Africa is very wide but creativity is not the weak point of Africans,” he says. “Farmers also generally have issues with small plots, low yields and low productivity, so they often lack the know-how to optimize what little they have.”

These digital solutions are aimed more at small-scale farmers who are used to working the way their parents or grandparents did and don’t necessarily have all the knowledge; technologies can provide them with research results and tell them what they need for their crops, Gaddas says.

According to computer scientist Bayisa Lulu, these data analytics solutions, coupled with other related technologies, are solving the most complex problems in agriculture.

“Emerging technologies are solving complex problems that seemed to go unsolved in past decades, and without much user involvement, which is very important, especially for the disadvantaged.”

Success relies on tech

IT leaders are now making their mark in this transformation by helping to identify and develop solutions, implementing agritech accelerator and incubator programs that reduce pressure, risk, and waste while improving food safety.

By doing so, they take a broad and long-term view of key issues in the agricultural space, and serve as the engine behind effecting solutions.

“An operations manager can identify the problem,” says Abdulai Nelson, “but it’s up to the CIO to listen, design and develop the most appropriate technology solution.”

Others agree that the development of agriculture and technology has unlimited possibilities and that now is the right time to build better bridges between them so agriculture can benefit from cutting-edge technologies faster and at larger scale, Gaddas says. Education, of course, is key.

“The fact that agronomists are associated with computer scientists makes all the difference because the contribution of technology to agriculture is enormous, and also the agricultural logic integrated by computer scientists transforms things,” he says. “They must be able to enrich each other’s capacities. It’s great to see them working hand in hand changing things in Africa.”

Also, in Tunisia where Gaddas is based, there are many schools for computer engineers geared toward agritech because it’s a booming sector.

“In addition, it’s thanks to the legal framework created in Tunisia four years ago with the Startup act, a law created to encourage the development of Tunisian start-ups with several financial and fiscal support measures,” he says. “So there’s a favorable ecosystem, evidenced by the dozens of agritech companies launched since the creation of this law.”

While most experts like him believe that agriculture is capable of radical and rapid change thanks to technology, they also acknowledge the difficulties that slow down the process.

But in Central Africa, for instance, things are a bit different than in other sub-regions. The transformative potential of digital innovations for agri-food systems remains largely untapped there, with less than 5% of the digital agricultural services identified in Africa coming from this region, according to the FAO. “Existing barriers still need to be addressed, including the lack of rural infrastructure, funding for agriculture and investment in research and development, agri-innovation, and agricultural entrepreneurship,” the specialized UN agency says.

Other observers lament digital illiteracy, limited internet access in some rural areas, and electricity difficulties.

But all these problems have solutions, according to Gaddas. Today, farmers who can’t read or write receive audio messages in local languages, and messages in image form via mobile phones in order to overcome the problem of educating farmers.

“For the problems of electricity and internet access, there are also many solutions such as mini solar panels, or 4G and 3G, which cover internet issues in some remote areas,” he says. He’s convinced that technology is now overcoming all these difficulties.  “To receive market prices, for example, you just have to open your phone,” he says. “Even the most basic one can receive the technology and it doesn’t require a PhD in computer science.”


Cybersecurity threats and their resulting breaches are top of mind for CIOs today. Managing such risks, however, is just one aspect of the entire IT risk management landscape that CIOs must address.

Equally important is reliability risk – the risk inherent in IT’s essential fragility. Issues can occur at any time, anywhere across the complex hybrid IT landscape, potentially slowing or bringing down services.

Addressing such cybersecurity and reliability risks in separate silos is a recipe for failure. Collaboration across the respective responsible teams is essential for effective risk management.

Such collaboration is both an organizational and a technological challenge – and the organizational aspects depend upon the right technology.

The key to solving complex IT ops problems collaboratively, in fact, is to build a common engineering approach to managing risk across the concerns of the security and operations (ops) teams – in other words, a holistic approach to managing risk. 

Risk management starting point: site reliability engineering

By engineering, we mean a formal, quantitative approach to measuring and managing operational risks that can lead to reliability issues. The starting point for such an approach is site reliability engineering (SRE). 

SRE is a modern technique for managing the risks inherent in running complex, dynamic software deployments – risks like downtime, slowdowns, and the like that might have root causes anywhere, including the network, the software infrastructure, or deployed applications.

The practice of SRE requires dealing with ongoing tradeoffs. The ops team must be able to make fact-based judgments about whether to increase a service’s reliability (and hence, its cost), or lower its reliability and cost to increase the speed of development of the applications providing the service.

Error budgets: the key to site reliability engineering

Instead of targeting perfection – technology that never fails – the real question is how far short of perfect reliability an organization should aim. We call this quantity the error budget.

The error budget represents the total number of errors a particular service can accumulate over time before users become dissatisfied with the service.

Most importantly, the error budget should never equal zero. The operator’s goal should never be to entirely eliminate reliability issues, because such an approach would be both too costly and too slow – impacting the organization’s ability to deploy software quickly and run dynamic software at scale.

Instead, the operator should maintain an optimal balance among cost, speed, and reliability. Error budgets quantify this balance.
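To make the idea concrete, here is a minimal sketch (not any particular SRE tool’s API) showing how an error budget falls directly out of an SLO target:

```python
# Minimal error-budget sketch: the budget is simply the fraction of
# requests the SLO permits to fail over the measurement window.

def error_budget(slo: float, total_requests: int) -> int:
    """Allowed failed requests for the window, given an SLO like 0.999."""
    return round((1.0 - slo) * total_requests)

def budget_remaining(slo: float, total_requests: int, failed: int) -> float:
    """Fraction of the error budget still unspent (negative means overspent)."""
    budget = error_budget(slo, total_requests)
    return (budget - failed) / budget

# A 99.9% availability SLO over 10 million requests tolerates 10,000 failures.
print(error_budget(0.999, 10_000_000))             # 10000
print(budget_remaining(0.999, 10_000_000, 2_500))  # 0.75
```

In practice the remaining-budget figure drives the tradeoff described above: a healthy budget permits faster, riskier releases, while a nearly spent budget signals that the team should slow deployment and invest in reliability.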

Bringing SRE to cybersecurity        

In order to bring the SRE approach to mitigating reliability risks to the cybersecurity team, it’s essential for the team to calculate risk scores for every observed event that might be relevant to the cybersecurity engineer. 

Risk scoring is an essential aspect of cybersecurity risk management. “Risk management… involves identifying all the IT resources and processes involved in creating and managing department records, identifying all the risks associated with these resources and processes, identifying the likelihood of each risk, and then applying people, processes, and technology to address those risks,” according to Jennifer Pittman-Leeper, Customer Engagement Manager for Tanium.

Risk scoring combined with cybersecurity-centric observability gives the cybersecurity engineer the raw data they need to make informed threat mitigation decisions, just as reliability-centric observability provides the SRE with the data they need to mitigate reliability issues.
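As a toy illustration of the idea (a hypothetical scoring model of our own, not Tanium’s actual methodology), a per-event risk score can be computed as likelihood times impact and used to rank observed events for triage:

```python
# Toy risk-scoring sketch (hypothetical model, not Tanium's actual scoring):
# score each observed event as likelihood x impact, then rank for triage.

from dataclasses import dataclass

@dataclass
class Event:
    name: str
    likelihood: float  # probability estimate, 0.0 to 1.0
    impact: float      # business impact on a 0 to 10 scale

def risk_score(event: Event) -> float:
    return event.likelihood * event.impact

events = [
    Event("unpatched-cve-on-db-host", likelihood=0.6, impact=9.0),
    Event("anomalous-login-geo", likelihood=0.3, impact=5.0),
    Event("expired-tls-cert", likelihood=0.9, impact=2.0),
]

# Triage queue: highest risk first.
for e in sorted(events, key=risk_score, reverse=True):
    print(f"{e.name}: {risk_score(e):.1f}")
```

Real scoring models weigh far more signals (asset criticality, exploit availability, exposure), but the principle is the same: a quantitative ordering replaces gut feel about which threats to mitigate first.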

Introducing the threat budget

Once we have a quantifiable, real-time measure of threats, then we can create an analogue to SRE for cybersecurity engineers.

We can posit the notion of a threat budget: the total number of unmitigated threats a particular service can accumulate over time before a corresponding compromise adversely impacts the users of the service.

The essential insight here is that threat budgets should never be zero, since eliminating threats entirely would be too expensive and would slow the software effort down, just as error budgets of zero would. “Even the most comprehensive… cybersecurity program can’t afford to protect every IT asset and IT process to the greatest extent possible,” Pittman-Leeper continued. “IT investments will have to be prioritized.”

Some threat budget greater than zero, therefore, would reflect the optimal compromise among cost, time, and the risk of compromise. 

We might call this approach to threat budgets Service Threat Engineering, analogous to Site Reliability Engineering.

What Service Threat Engineering really means is that based upon risk scoring, cybersecurity engineers now have a quantifiable approach to achieving optimal threat mitigation that takes into account all of the relevant parameters, instead of relying upon personal expertise, tribal knowledge, and irrational expectations for cybersecurity effectiveness.

Holistic engineering for better collaboration

Even though risk scoring uses the word risk, I’ve used the word threat to differentiate Service Threat Engineering from SRE. After all, SRE is also about quantifying and managing risks – except with SRE, the risks are reliability-related rather than threat-related.

As a result, Service Threat Engineering is more than analogous to SRE. Rather, they are both approaches to managing two different, but related kinds of risks.

Cybersecurity compromises can certainly lead to reliability issues (ransomware and denial of service being two familiar examples). But there is more to this story.

Ops and security teams have always had a strained relationship, as they work on the same systems while having different priorities. Bringing threat management to the same level as SRE, however, may very well help these two teams align over similar approaches to managing risk.

Service Threat Engineering, therefore, targets the organizational challenges that continue to plague IT organizations – a strategic benefit that many organizations should welcome.

Learn how Tanium is bringing together teams, tools, and workflows with a Converged Endpoint Management platform.

Risk Management

Cybersecurity threats and their resulting breaches are top of mind for CIOs today. Managing such risks, however, is just one aspect of the entire IT risk management landscape that CIOs must address.

Equally important is reliability risk – the risks inherent in IT’s essential fragility. Issues might occur at any time, anywhere across the complex hybrid IT landscape, potentially slowing or bringing down services.

Addressing such cybersecurity and reliability risks in separate silos is a recipe for failure. Collaboration across the respective responsible teams is essential for effective risk management.

Such collaboration is both an organizational and a technological challenge – and the organizational aspects depend upon the right technology.

The key to solving complex IT ops problems collaboratively, in fact, is to build a common engineering approach to managing risk across the concerns of the security and operations (ops) teams – in other words, a holistic approach to managing risk. 

Risk management starting point: site reliability engineering

By engineering, we mean a formal, quantitative approach to measuring and managing operational risks that can lead to reliability issues. The starting point for such an approach is site reliability engineering (SRE). 

SRE is a modern technique for managing the risks inherent in running complex, dynamic software deployments – risks like downtime, slowdowns, and the like that might have root causes anywhere, including the network, the software infrastructure, or deployed applications.

The practice of SRE requires dealing with ongoing tradeoffs. The ops team must be able to make fact-based judgments about whether to increase a service’s reliability (and hence, its cost), or lower its reliability and cost to increase the speed of development of the applications providing the service.

Error budgets: the key to site reliability engineering

Instead of targeting perfection – technology that never fails – the real question is just how far short of perfect reliability an organization should aim. We call this quantity the error budget.

The error budget represents the total number of errors a particular service can accumulate over time before users become dissatisfied with the service.

Most importantly, the error budget should never equal zero. The operator’s goal should never be to entirely eliminate reliability issues, because such an approach would be both too costly and too slow – impairing the organization’s ability to deploy software quickly and run dynamic software at scale.

Instead, the operator should maintain an optimal balance among cost, speed, and reliability. Error budgets quantify this balance.
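The arithmetic behind an error budget is straightforward: an availability target directly implies a budget of allowable downtime. A minimal sketch in Python (the SLO figures are illustrative, not from this article):

```python
def error_budget_minutes(slo: float, period_days: int = 30) -> float:
    """Allowed downtime in minutes implied by an availability SLO over a period."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% monthly SLO leaves about 43 minutes of downtime budget;
# pushing to 99.99% shrinks it to roughly 4 minutes, at far greater cost.
print(round(error_budget_minutes(0.999), 1))   # 43.2
print(round(error_budget_minutes(0.9999), 2))  # 4.32
```

The steep cost of each extra "nine" is exactly the cost/speed/reliability tradeoff the budget quantifies.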

Bringing SRE to cybersecurity        

In order to bring the SRE approach to mitigating reliability risks to the cybersecurity team, it’s essential for the team to calculate risk scores for every observed event that might be relevant to the cybersecurity engineer. 

Risk scoring is an essential aspect of cybersecurity risk management. “Risk management… involves identifying all the IT resources and processes involved in creating and managing department records, identifying all the risks associated with these resources and processes, identifying the likelihood of each risk, and then applying people, processes, and technology to address those risks,” according to Jennifer Pittman-Leeper, Customer Engagement Manager for Tanium.

Risk scoring combined with cybersecurity-centric observability gives the cybersecurity engineer the raw data they need to make informed threat mitigation decisions, just as reliability-centric observability provides the SRE with the data they need to mitigate reliability issues.
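As a sketch of what such scoring can look like, a common generic scheme multiplies likelihood by impact; the events, scales, and scores below are hypothetical illustrations, not any vendor’s model:

```python
# Illustrative risk scoring: score = likelihood x impact on 1-5 scales.
# The events and numbers are hypothetical, not a specific product's model.
def risk_score(likelihood: int, impact: int) -> int:
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must be on a 1-5 scale")
    return likelihood * impact

events = [
    ("burst of failed admin logins",        4, 3),
    ("critical CVE unpatched on a server",  3, 5),
    ("unknown USB device on a workstation", 2, 4),
]
# Rank observed events so the engineer mitigates the highest risks first.
for name, likelihood, impact in sorted(
        events, key=lambda e: risk_score(e[1], e[2]), reverse=True):
    print(f"{risk_score(likelihood, impact):2d}  {name}")
```

Ranking every observed event this way is what turns raw observability data into a prioritized mitigation queue.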

Introducing the threat budget

Once we have a quantifiable, real-time measure of threats, then we can create an analogue to SRE for cybersecurity engineers.

We can posit the notion of a threat budget, which would represent the total number of unmitigated threats a particular service can accumulate over time before a corresponding compromise adversely impacts the users of the service.

The essential insight here is that threat budgets should never be zero, since eliminating threats entirely would be too expensive and would slow the software effort down, just as error budgets of zero would. “Even the most comprehensive… cybersecurity program can’t afford to protect every IT asset and IT process to the greatest extent possible,” Pittman-Leeper continued. “IT investments will have to be prioritized.”

Some threat budget greater than zero, therefore, would reflect the optimal tradeoff among cost, time, and the risk of compromise.
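A threat-budget policy along these lines could be checked mechanically, much as SRE teams check error-budget policies. A sketch; the budget size, scores, and threshold are invented for illustration:

```python
# Hypothetical threat-budget check; the budget and threshold are assumptions.
def threat_budget_remaining(budget: int, open_threat_scores: list, threshold: int) -> int:
    """Threats scoring at or above the threshold consume the budget."""
    consumed = sum(1 for score in open_threat_scores if score >= threshold)
    return budget - consumed

scores = [20, 12, 9, 15, 4]  # risk scores of currently unmitigated threats
remaining = threat_budget_remaining(budget=5, open_threat_scores=scores, threshold=10)
print(remaining)  # 2 -> within budget; at 0 or below, mitigation takes priority
```

When the budget is exhausted, mitigation work would take priority over new feature work, mirroring how an exhausted error budget freezes risky deployments.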

We might call this approach to threat budgets Service Threat Engineering, analogous to Site Reliability Engineering.

What Service Threat Engineering really means is that based upon risk scoring, cybersecurity engineers now have a quantifiable approach to achieving optimal threat mitigation that takes into account all of the relevant parameters, instead of relying upon personal expertise, tribal knowledge, and irrational expectations for cybersecurity effectiveness.

Holistic engineering for better collaboration

Even though risk scoring uses the word risk, I’ve used the word threat to differentiate Service Threat Engineering from SRE. After all, SRE is also about quantifying and managing risks – except with SRE, the risks are reliability-related rather than threat-related.

As a result, Service Threat Engineering is more than analogous to SRE. Rather, they are both approaches to managing two different, but related kinds of risks.

Cybersecurity compromises can certainly lead to reliability issues (ransomware and denial of service being two familiar examples). But there is more to this story.

Ops and security teams have always had a strained relationship, as they work on the same systems while having different priorities. Bringing threat management to the same level as SRE, however, may very well help these two teams align over similar approaches to managing risk.

Service Threat Engineering, therefore, targets the organizational challenges that continue to plague IT organizations – a strategic benefit that many organizations should welcome.

Learn how Tanium is bringing together teams, tools, and workflows with a Converged Endpoint Management platform.

Risk Management

Coding has been an educational trend in Africa for many years, with schools and movements created in response to a pressing need in the digital age. That’s still the case today, except entrepreneurs and companies are now beginning to adopt tools for creating applications and developing services that don’t require coding. Those who have taken the plunge are trying to maximize the vast potential of these tools by educating as many people as possible about them, on a continent where familiarity with digital technology is still limited.

Some African entrepreneurs have embarked on a mission to universalize these tools since many ICT professionals report that digital illiteracy in Africa is still a concerning reality.

In its 2021 study on the state of low-code/no-code development around the world and how different regions are approaching it, US cloud computing company Rackspace Technology said that in the EMEA region, the use level is below the global average.

The report shows that the biggest barrier to adoption in this region may be skepticism about the benefits, and of all regions, EMEA is the least likely to say that low-code/no-code is a key trend. It’s also the only region where unclear benefits constitute one of the top three reasons for not adopting low-code/no-code.

“It’s possible that organizations in EMEA don’t have as many models for successful low-code/no-code implementation because EMEA organizations that have implemented it may not be seeing its biggest benefit,” the report says. “Forty-four percent say the ability to accelerate the delivery of new software and applications is a benefit — the lowest percentage of any region.”  

Some West African countries, such as Benin, understand that low-code/no-code tools are innovative and disruptive to the CIO community, but not universally trusted. “The general idea is that low-code/no-code is not yet mature enough to be used on a large scale because of its application to specific cases where security needs and constraints are low,” says Maximilien Kpodjedo, president of the CIO Association of Benin and digital adviser to Beninese president Patrice Talon. “These technologies are at the exploratory or low-use stage.”

However, he notices interest in these technologies is growing among CIOs.

“We have commissions working and thinking about innovative concepts, including a commission of CIOs,” says Kpodjedo. “Even if there were projects, they’re marginal at this stage. This could change in the near future, though, thanks to the interest generated.”

But other entrepreneurs have taken advantage of these tools and want more people to benefit from what they’ve seen in them. Actors and leaders of incubators and educational movements are doing what they can for the sake of those in both technological and non-technological sectors.

Many become coaches or consultants of low-code/no-code for companies while others within incubators or movements lead awareness and training on these technologies.

It’s almost child’s play for some entrepreneurs who use it to automate trivial tasks or create internal software for their companies. They don’t need to be experts in coding or even have deep knowledge of ICT. They sometimes come across a technology by chance and end up adopting it because they see its importance and benefits.

This is a reality described by Kenyan Maureen Esther Achieng, CEO of Nocode Apps, Inc., who got into non-coding technologies on the advice of Mike Williams, otherwise known as Yoroomie, a friend who built and launched Studiotime, an online marketplace community for music studio rentals, in one night using no code.

“Since then, through constant self-study and countless mentorships from some of the best coaches in the global no code space, I’ve helped hundreds of people get started in technology,” she said.

Achieng has now taken up technology as her “divine mission.” Her company specializes in training non-technical entrepreneurs and start-ups, and she teaches people how to leverage no code technology to launch their applications and websites in hours without writing code or hiring developers.

In the Democratic Republic of Congo in Central Africa, software engineer Bigurwa Buhendwa Dom also discovered no code from a relative.

“I had no idea that such technology could exist or at least be so advanced,” he says. “As a software engineer, setting up a working application or even a demo is a real challenge. It takes months or years in some cases. I was fascinated by the speed with which you can build a prototype or a trial version with such a technology, which immediately reduces costs and allows you to test the idea in the market.”

He now offers independent consultations in his country where he has noticed that most people don’t know what it’s about.

A simple environment for companies

Public and private companies also see an opportunity in these services. In Cameroon, for instance, the land credit authority is banking on an agile low-code platform adapted to its application development needs, along with the supply of the licenses necessary to implement the platform, operate it, and produce reports.

In Senegal and Gabon, the French multinational Bolloré Transports and Logistics uses Microsoft’s Power Platform to provide employees with a simple environment for creating application software without going through traditional computer programming, according to Microsoft, which supported the teams with training workshops beforehand. Microsoft adds that this low-code/no-code approach has enabled Bolloré employees to develop their creativity by appropriating the application creation tools, and to move toward faster, more intelligent, and more optimized processes.

For Jean-Daniel Elbim, director of digital transformation at Bolloré, these tools give operational staff more control while bringing more agility to local teams.

“Obviously, the data must be managed,” he says. “We need to define a framework, and there needs to be a group of experts at central level, available to respond to local issues.”

Evangelization and altruistic services

In Chad, ICT expert Salim Alim Assani is co-founder and manager of WenakLabs, a media lab and tech hub incubator described as a niche of Chadian geek talent. According to him, low code is part of the daily life of the group’s entrepreneurs.

“We use this tool to set up websites and minimum viable products for our entrepreneurs,” he says. “It’s a real success on projects that don’t require a lot of customization in terms of functionality, from showcase sites to simple mobile applications, for example. We offer a lot of training in this area too. In the framework of certain projects, we’ve initiated 50 young women to the use of low code, particularly the design of websites with WordPress. In the same context, we’re training 25 digital referents, whose daily professional life will be centered on low code. We also regularly organize awareness-raising events on the issue.”

Sesinam Dagadu also makes extensive use of no code at SnooCode, a digital addressing solution in Nigeria. Based in London, he’s the founder and CTO of this alphanumeric system that allows addresses to be stored, shared, and navigated, even without internet or cellular access.

“I think the biggest place we haven’t used code is on our website,” he says. “We’re creating systems to allow people who build on top of SnooCode to do so using no-code technologies.”

Going ahead without expensive developers

Dagadu appreciates that these tools let him do without a developer, even though he initially employed one, which incurred a lot of cost.

“At first we had to employ a web developer who did a lot of work, but it looks horrible using technology like Square Space,” he says. “In Africa, development costs are very high and can only be tackled by companies with a lot of funding. But with the growth of low-code/no-code, more people with bright ideas can bring them to life without the need for expensive developers.”

He noted that because these tools are still little known in Africa, people believe that every time they have an idea for an application or technology, they have to turn to an application developer. But coding less, or not at all, offers an easier entry into hard code, according to WenakLabs’ Assani. “It’s a way to be visible quickly, to offer your services to the world without resorting to the skills of a developer. Above all, you learn through experimentation.”

He sees this as an opportunity to widen the pathway to digital access and entry across Africa. Indeed, entrepreneurs believe these tools will democratize technology and resolve many issues. “This democratization could allow Nocode Apps to be used to solve the most difficult problems not only in Kenya but in Africa in general,” says Achieng. “African problems need technology because the population is young, tech-savvy and uses the internet a lot, so it’s in the interest of Africans to get on board and have more proactive and knowledgeable leadership, especially in IT, to make wise decisions that reflect the speed at which technology and business are changing.”

Africa, Emerging Technology, Innovation, No Code and Low Code

Google on Tuesday said it would be adding new cloud regions across five countries to meet growing computing demand from customers across the globe.

The new regions, announced at Google’s Cloud Next conference, will be made available across Austria, Greece, Norway, South Africa and Sweden, and will supplement new regions announced in August for New Zealand, Malaysia, Thailand and Mexico. However, Google did not confirm when each of these regions would be operational.

The company has already added five new regions this year: Milan, Paris, Madrid, Columbus (Ohio, US) and Dallas.

The addition of the new regions will take Google’s total cloud region tally to 35 regions and 106 zones compared with 34 regions and 103 zones in August this year. Zones offer high-bandwidth, low-latency network connections to other zones in the same region, and regions are collections of zones.

As of December last year, that number stood at 29 cloud regions and 88 cloud zones globally.

Google and other major cloud service providers such as AWS, Microsoft and Oracle have been investing heavily into expanding their cloud regions.

In July, Microsoft CEO Satya Nadella said the company will launch 10 new cloud regions this fiscal year.

Similarly, in June, Oracle CEO Safra Catz said the company expects to add another six regions in fiscal 2023. By July, the company had already launched two of these new sovereign regions for the European Union.

Data sovereignty adds fuel to cloud region construction

Data sovereignty regulations – rules requiring companies to keep certain data in-country for security and privacy reasons – have given impetus to the construction of cloud regions around the world. The EU has taken the lead in advancing data privacy rules with GDPR, and during the Cloud Next conference, Google also announced that it was expanding its portfolio of Sovereign Solutions to support European customers’ current and emerging sovereignty needs.

Google Cloud Sovereign Solutions comprise Sovereign Controls, designed to help organizations manage data sovereignty requirements, as well as Supervised Cloud and Hosted Cloud options to help address operational and software sovereignty concerns. To make these controls available, Google has teamed up with a number of telecom companies in the EU, including T-Systems in Germany, S3NS in France, Minsait in Spain, and Telecom Italia in Italy.

Economic impact of cloud regions

Google claims that opening new cloud regions contributes to local economic and job growth.

“These cloud regions help bring innovations from across Google closer to our customers around the globe and provide a platform that enables organizations to transform the way they do business,” Sachin Gupta, vice president of infrastructure at Google Cloud, wrote in a blog post.

The nine new cloud regions announced this year are expected to collectively contribute $40 billion to global GDP by 2030 and create 314,400 jobs, according to a Google-commissioned study by consulting firm AlphaBeta.

At the regional level, the three cloud regions—New Zealand, Malaysia and Thailand—announced in APAC are projected to contribute $10 billion to the region’s GDP by 2030 and create 86,500 jobs, the study showed.

Similarly, the five regions announced this year across Europe, the Middle East and Africa, are projected to contribute a cumulative $18.9 billion to EMEA’s GDP by 2030, and support creation of more than 110,500 jobs, AlphaBeta said in its report.

Cloud Computing

The benefits of analyzing vast amounts of data, long-term or in real time, have captured the attention of businesses of all sizes. Big data analytics has moved beyond the rarified domain of government and university research environments equipped with supercomputers to include businesses of all kinds that are using modern high performance computing (HPC) solutions to get their analytics jobs done. It’s big data meets HPC – otherwise known as high performance data analytics.

Bigger, Faster, More Compute-intensive Data Analytics

Big data analytics has relied on HPC infrastructure for many years to handle data mining processes. Today, parallel processing solutions handle massive amounts of data and run powerful analytics software that uses artificial intelligence (AI) and machine learning (ML) for highly demanding jobs.

A report by Intersect360 Research found that “Traditionally, most HPC applications have been deterministic; given a set of inputs, the computer program performs calculations to determine an answer. Machine learning represents another type of applications that is experiential; the application makes predictions about new or current data based on patterns seen in the past.”

This shift to AI, ML, large data sets, and more compute-intensive analytical calculations has contributed to the growth of the global high performance data analytics market, which was valued at $48.28 billion in 2020 and is projected to grow to $187.57 billion in 2026, according to research by Mordor Intelligence. “Analytics and AI require immensely powerful processes across compute, networking and storage,” the report explained. “As a result, more companies are increasingly using HPC solutions for AI-enabled innovation and productivity.”

Benefits and ROI

Millions of businesses need to deploy advanced analytics at the speed of events. A subset of these organizations will require high performance data analytics solutions. Those HPC solutions and architectures will benefit from the integration of diverse datasets from on-premise to edge to cloud. The use of new sources of data from the Internet of Things to empower customer interactions and other departments will provide a further competitive advantage to many businesses. Simplified analytics platforms that are user-friendly resources open to every employee, customer, and partner will change the responsibilities and roles of countless professions.

How does a business calculate the return on investment (ROI) of high performance data analytics? It varies with different use cases.

For analytics used to help increase operational efficiency, key performance indicators (KPIs) contributing to ROI may include downtime, cost savings, time-to-market, and production volume. For sales and marketing, KPIs may include sales volume, average deal size, revenue by campaign, and churn rate. For analytics used to detect fraud, KPIs may include number of fraud attempts, chargebacks, and order approval rates. In a healthcare environment, analytics used to improve patient outcomes might include key performance indicators that track cost of care, emergency room wait times, hospital readmissions, and billing errors.
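Whatever KPIs a given use case tracks, the ROI arithmetic itself is uniform. A toy example; all the dollar figures are invented for illustration:

```python
# Generic ROI arithmetic for an analytics investment; all figures are invented.
def roi(gain: float, cost: float) -> float:
    """Return on investment, expressed as a fraction of cost."""
    return (gain - cost) / cost

# e.g. $1.2M in measured gains (downtime avoided, fraud caught, faster
# time-to-market) against a $750K analytics deployment
print(f"{roi(1_200_000, 750_000):.0%}")  # 60%
```

The hard part is not the formula but attributing the gains: each KPI above has to be translated into a defensible dollar figure before it can enter the calculation.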

Customer Success Stories

Combining data analytics with HPC:

- A technology firm applies AI, machine learning, and data analytics to client drug diversion data from acute, specialty, and long-term care facilities, delivering insights within five minutes of receiving new data while maintaining an HPC environment with 99.99% uptime to comply with service level agreements (SLAs).
- A research university was able to tap into 2 petabytes of data across two HPC clusters with 13,080 cores to create a mathematical model to predict behavior during the COVID-19 pandemic.
- A technology services provider is able to inspect 124 moving railcars ― a 120% reduction in inspection time ― and transmit results in eight minutes, based on processing and analyzing 1.31 terabytes of data per day.
- A race car designer is able to process and analyze 100,000 data points per second per car ― one billion in a two-hour race ― that are used by digital twins running hundreds of different race scenarios to inform design modifications and racing strategy.
- Scientists at a university research center are able to utilize hundreds of terabytes of data, processed at I/O speeds of 200 Gbps, to conduct cosmological research into the origins of the universe.

Data Scientists are Part of the Equation

High performance data analytics is gaining stature as more and more data is being collected.  Beyond the data and HPC systems, it takes expertise to recognize and champion the value of this data. According to Datamation, “The rise of chief data officers and chief analytics officers is the clearest indication that analytics has moved from the backroom to the boardroom, and more and more often it’s data experts that are setting strategy.” 

No wonder skilled data analysts continue to be among the most in-demand professionals in the world. The U.S. Bureau of Labor Statistics predicts that the field will be among the fastest-growing occupations for the next decade, with 11.5 million new jobs by 2026. 

For more information read “Unleash data-driven insights and opportunities with analytics: How organizations are unlocking the value of their data capital from edge to core to cloud” from Dell Technologies. 

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

Data Management

Cyber hygiene describes a set of practices, behaviors and tools designed to keep the entire IT environment healthy and at peak performance—and more importantly, it is a critical line of defense. Your cyber hygiene tools, as with all other IT tools, should fit the purpose for which they’re intended, but ideally should deliver the scale, speed, and simplicity you need to keep your IT environment clean.

What works best depends on the organization. A Fortune 100 company will have a much bigger IT group than a firm with 1,000 employees, hence the emphasis on scalability. Conversely, a smaller company with a lean IT team would prioritize simplicity.

It’s also important to classify your systems. Which ones are business critical? And which ones are external versus internal facing? External facing systems will be subject to greater scrutiny.
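One way to operationalize that classification is a simple inventory with criticality and exposure tags. The systems and the scoring rule below are hypothetical, just to show the shape:

```python
# Hypothetical inventory; names and tags are invented for illustration.
systems = [
    {"name": "payments-api", "critical": True,  "facing": "external"},
    {"name": "hr-portal",    "critical": False, "facing": "internal"},
    {"name": "public-site",  "critical": False, "facing": "external"},
]

def scrutiny(system: dict) -> int:
    """0-2: one point for business criticality, one for external exposure."""
    return int(system["critical"]) + int(system["facing"] == "external")

# External-facing, business-critical systems rise to the top of the review list.
for s in sorted(systems, key=scrutiny, reverse=True):
    print(s["name"], scrutiny(s))
```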

In many cases, budget or habit will prevent you from updating certain tools. If you’re stuck with a tool you can’t get rid of, you need to understand how your ideal workflow can be supported. Any platform or tool can be evaluated against the scale, speed and simplicity criteria.

An anecdote about scale, speed and complexity

Imagine a large telecom company with millions of customers and a presence in nearly every business and consumer-facing digital service imaginable. If your organization is offering an IT tool or platform to customers like that, no question you’d love to get your foot in the door.

But look at it from the perspective of the telecom company. No tool they’ve ever purchased can handle the scale of their business. They’re always having to apply their existing tools to a subset of a subset of a subset of their environment. 

Any tool can look great when it’s dealing with 200 systems. But when you get to the enterprise size, those three pillars are even more important. The tool must work at the scale, speed, and simplicity that meets your needs.

The danger of complacency

With all the thought leadership put into IT operations and security best practices, why is it that many organizations are content with having only 75% visibility into their endpoint environment? Or 75% of endpoints under management? 

It’s because they’ve accepted failure as built into the tools and processes they’ve used over the years. If an organization wants to stick with the tools it has, it must:

- Realize their flaws and limitations
- Measure them on the scale, speed and simplicity criteria
- Determine the headcount required to do things properly

Organizations cannot remain attached to the way they’ve always done things. Technology changes too fast. The cliché of “future proof” is misleading. There’s no future proof. There’s only future adaptable.

Old data lies

To stay with the three criteria of strong cyber hygiene—scale, speed and simplicity—nothing is more critical than the currency of your data. Any software or practice that supports making decisions on old data should be suspect. 

Analytics help IT and security teams make better decisions. When they don’t, the reason is usually a lack of quality data. And the quality issue is often around data freshness. In IT, old data is almost never accurate. So decisions based on it are very likely to be wrong. Regardless of the data set, whether it’s about patching, compliance, device configuration, vulnerabilities or threats, old data is unreliable.
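The freshness principle can be enforced mechanically: stamp each record with its collection time and refuse to act on anything past a cutoff. A sketch; the 15-minute cutoff and the sample records are arbitrary assumptions:

```python
from datetime import datetime, timedelta, timezone

# Only records fresh enough to trust feed decisions; the cutoff is illustrative.
def fresh_only(records, max_age=timedelta(minutes=15)):
    now = datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= max_age]

now = datetime.now(timezone.utc)
records = [
    {"host": "web-01", "patched": True,  "collected_at": now - timedelta(minutes=2)},
    {"host": "web-02", "patched": False, "collected_at": now - timedelta(hours=6)},
]
print([r["host"] for r in fresh_only(records)])  # ['web-01']
```

The stale record is not "wrong" so much as unknowable: web-02 may well have been patched in the six hours since it last reported, which is exactly why decisions should not rest on it.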

The old data problem is compounded by the number of systems a typical large organization relies on today. Many of the tools we still use were made for a decades-old IT environment that no longer exists. Today, however, tools are available that give us real-time data for IT analytics.

IT hygiene and network data capacity

Whether you’re a 1,000-endpoint or 100,000-endpoint organization, streaming huge quantities of real-time data will require network bandwidth to carry it. You may not have the infrastructure to handle real-time data from every system you’re operating. So, focus on the basics. 

That means you need to understand and identify the core business services and applications that are most in need of fresh data. Those are the services that keep a business running. With that data, you can see what your IT operations and security posture look like for those systems. Prioritize. Use what you have wisely.

To simplify gathering the right data, streamline workflows

Once you’ve identified your core services, getting back to basics means streamlining workflows. Most organizations are in the mindset of “my tools dictate my workflow.” And that’s backward.

You want a high-performance network that has low vulnerability and strong threat response.  You want tools that can service your core systems, do efficient patching, perform antivirus protection and manage recovery should there be a breach. That’s what your tooling should support. Your workflows should help you weed out the tools that are not a good operational fit for your business.

Looking ahead

It’s clear the “new normal” will consist of remote, on-premises, and hybrid workforces. IT teams now have the experience to determine how to update and align processes and infrastructure without additional disruption.

Part of this process will center on the evaluation and procurement of tools that provide the scale, speed and simplicity necessary to manage operations in a hyperconverged world while:

- Maintaining superior IT hygiene as a foundational best practice
- Assessing risk posture to inform technology and operational decisions
- Strengthening cybersecurity programs without impeding worker productivity

Dive deeper into cyber hygiene with this eBook.

Analytics