“Land never deceives” is a common slogan among farmers across Africa. Many people go into farming full-time, or as a side venture, confident they’ll make money and do some good. And when technology is added to the mix, opportunities multiply.

With the world’s largest area of uncultivated arable land, a young population (nearly 60% is under 25), and a wealth of natural resources, sub-Saharan Africa has unparalleled advantages that could double or even triple its current agricultural productivity, according to the Status of Agriculture in 47 Sub-Saharan African Countries, a report the Food and Agriculture Organization (FAO) published jointly with the International Telecommunication Union (ITU) in March 2022.

Some African countries depend almost entirely on agriculture, like Ethiopia, for example, with 80% of its economy based on it. Jermia Bayisa Lulu, CEO and co-founder of start-up Debo Engineering Agritech, has consolidated his knowledge and experience in computer networking, engineering, and Artificial Intelligence (AI) research to go all in on agritech to solve the problems that affect 85% of community life in his native Ethiopia.

“Our economy is based on agriculture and I believe it should be further supported by technology to increase agricultural productivity,” he says. “Plus, about 20.4 million people in Ethiopia are in need of food aid, which motivates us to solve the problem of agriculture to ensure the lives of millions of people. The same is true for most African countries that need to be supported by technological solutions.”

Like Bayisa Lulu, many believe that technology mixed with agriculture is essential to develop the agricultural sector and improve people’s lives, including Michael Hailu. He is the director of the ACP-EU Technical Centre for Agricultural and Rural Cooperation, which brings together 79 African, Caribbean and Pacific countries and European Union member states.

“In agriculture, digitization could be a game changer by boosting productivity, profitability and resilience to climate change,” says Hailu.

Last year’s Digitization of African Agriculture report, which the centre compiles, details how 33 million small-scale farmers and pastoralists registered with Digital for Agriculture (D4Ag) solutions across the continent in 2019, a figure expected to rise to 200 million by 2030.

“The stakes are so high it’s not surprising most African countries have made agricultural transformation a major focus of their national strategies,” he adds.

Diverse problems, diverse solutions

On the ground, things are already changing with a multitude of start-ups solving a variety of agricultural problems with drone technology, precision agriculture and Internet of Things (IoT) solutions. The scope of technology in this sphere is vast and is an important driver of change.

Youth innovation in Ghana, for instance, continues to exceed expectations according to Kenneth Abdulai Nelson, co-founder and MD of Farm360 Global, a crowdfunding and consulting company dedicated to smart farming projects. He believes that the days when agriculture was not “sexy” to most young people are over, thanks to the technology revolution.

“With the keen interest in developing agriculture through technology, leading centers have initiated and supported training young entrepreneurs to challenge the status quo and develop innovative technological solutions to solve key problems in the agricultural sector,” he says.

He points to AI solutions that improve crop quality today, and to precision agriculture built on drones, robotics, hydroponics, and more.

“AI technology can detect plant diseases, pests and nutritional deficiencies on farms,” he says. “AI sensors can detect and target weeds and areas of poor nutrition and then decide which herbicide or fertilizer to apply in the area.”
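
The decision logic Abdulai Nelson describes can be sketched in a few lines. This is purely illustrative (the labels, thresholds, and treatment names are assumptions, not any vendor’s actual system): a real deployment would feed per-plot detection scores from a computer-vision model into a rule like this one.

```python
# Illustrative sketch: map per-plot detection scores from a hypothetical
# AI model to field treatments. Names and thresholds are assumptions.
def treatment_plan(detections: dict, threshold: float = 0.5) -> list:
    """Return treatments for every condition detected above the threshold."""
    actions = {
        "weeds": "targeted herbicide",
        "nutrient_deficiency": "fertilizer",
        "disease": "crop-protection treatment",
    }
    return [action for label, action in actions.items()
            if detections.get(label, 0.0) >= threshold]

plan = treatment_plan({"weeds": 0.92, "nutrient_deficiency": 0.35, "disease": 0.7})
print(plan)  # ['targeted herbicide', 'crop-protection treatment']
```

The point of such targeting is that herbicide or fertilizer is applied only where the model is confident a problem exists, rather than across the whole field.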

Abdulai Nelson also has a personal interest in drones as a proven and effective way to improve agriculture. Indeed, in Ghana and other countries across the continent, drones are used for mapping, pesticide spraying, soil and data analysis, and farm monitoring to improve productivity while maximizing the use of labor.

He also appreciates the expensive but valuable irrigation technologies being used by some businesses on the continent.

“Anticipating the impacts of climate change emphasizes the need for irrigation technologies to ensure year-round production,” he says. “More than 50% of farmers rely heavily on seasonal rainfall, which continues to change dramatically.”

He also finds that IoT solutions stimulate productivity in the agrarian sector by effectively analyzing data, both historical and current, to inform well thought out activities. Applications are wide-ranging and include deep sensors to help predict rainfall and drought; soil sensors to determine fertilizer application areas; storage sensors to make sure products are stored at favorable temperatures; and input tracking and logistics to reduce post-harvest losses.
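
The IoT pattern described above reduces, at its simplest, to comparing sensor readings against agronomic thresholds and flagging zones that need attention. A minimal sketch, with illustrative field names and threshold values rather than any real platform’s schema:

```python
# Minimal sketch of threshold-based IoT alerting for a farm.
# Sensor field names and thresholds are illustrative assumptions.
def zone_actions(reading: dict, moisture_min: float = 0.20,
                 nitrogen_min: int = 15, temp_max: int = 8) -> list:
    """Turn one zone's sensor readings into recommended actions."""
    actions = []
    if reading["soil_moisture"] < moisture_min:
        actions.append("irrigate")
    if reading["nitrogen_ppm"] < nitrogen_min:
        actions.append("apply fertilizer")
    if reading.get("storage_temp_c", 0) > temp_max:
        actions.append("cool storage unit")
    return actions

print(zone_actions({"soil_moisture": 0.12, "nitrogen_ppm": 8, "storage_temp_c": 12}))
# ['irrigate', 'apply fertilizer', 'cool storage unit']
```

Production systems add historical data on top of such rules — predicting rainfall from deep-sensor trends, for example — but the core loop of measure, compare, act stays the same.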

Walid Gaddas is a Tunisian consultant in strategy and international development in the agritech sector. He manages STECIA International, a consulting firm that works with partners around the world, and through several agritech projects in North Africa and sub-Saharan Africa he has observed great potential.

“More countries are aware that agritech is not the agriculture of tomorrow but of today,” he says. “In countries such as Ivory Coast in West Africa, the government has already put in place all the strategies to digitize agriculture. Many activities are being carried out to digitalize the cocoa and rubber sectors.”

Strength of creativity

One of the crucial issues that agriculture in Africa is currently tackling, according to Gaddas, is a lack of water. He says that in Senegal, Tunisia and many other countries, companies are working hard on intelligent irrigation and on how to optimize water resources that are becoming increasingly scarce, especially in the context of climate change and unpredictable rainfall.

“Managing water is becoming crucial,” he says. “We’ve met start-ups that use drones, which, through their precision devices, help to collect data that can be used by farmers, such as the levels of nitrogen from the fields, precise mapping of areas with fertilizer deficits, and others that solve plant disease problems by making diagnoses. There are also ERP systems for farm management and to know what is happening in real time—the management of inputs, fertilizers and more.”

He also appreciates the digital aquaculture companies that allow for very rational management of aquaculture farms, while praising the impressive diversity of solutions.

“The diversity of problems that farmers face in Africa is very wide but creativity is not the weak point of Africans,” he says. “Farmers also generally have issues with small plots, low yields and low productivity, so they often lack the know-how to optimize what little they have.”

These digital solutions are aimed mostly at small-scale farmers who are used to working the way their parents or grandparents did and don’t necessarily have all the knowledge they need; the technology can provide them with research results and tell them what their crops require, Gaddas says.

With these data analytics solutions and other related technologies, the most complex problems in agriculture are being solved, according to computer scientist Bayisa Lulu.

“Emerging technologies are solving complex problems that seemed to go unsolved in the past decades, and without much user involvement, which is very important, especially for the disadvantaged.”

Success relies on tech

IT leaders are now making their mark in this transformation, helping to identify and develop solutions by implementing agritech accelerator and incubator programs that reduce pressure, risk, and waste while improving food safety.

By doing so, they take a broad and long-term view of key issues in the agricultural space, and serve as the engine behind effecting solutions.

“An operations manager can identify the problem,” says Abdulai Nelson, “but it’s up to the CIO to listen, design and develop the most appropriate technology solution.”

The development of agriculture and technology has unlimited possibilities, Gaddas and others agree, and it’s the right time to build better bridges between them so agriculture can benefit from cutting-edge technologies faster and on a larger scale. Education, of course, is key.

“The fact that agronomists are associated with computer scientists makes all the difference because the contribution of technology to agriculture is enormous, and also the agricultural logic integrated by computer scientists transforms things,” he says. “They must be able to enrich each other’s capacities. It’s great to see them working hand in hand changing things in Africa.”

Also, in Tunisia, where Gaddas is based, there are many computer engineering schools geared toward agritech because it’s a booming sector.

“In addition, it’s thanks to the legal framework created in Tunisia four years ago with the Startup Act, a law created to encourage the development of Tunisian start-ups with several financial and fiscal support measures,” he says. “So there’s a favorable ecosystem, evidenced by the dozens of agritech companies launched since the creation of this law.”

While most experts like him believe technology can change agriculture radically and rapidly, they also acknowledge difficulties that slow the process down.

But in Central Africa, for instance, things are a bit different than in other sub-regions. The transformation potential of digital innovations for agri-food systems is poorly initiated there, with less than 5% of the digital agricultural services identified in Africa coming from this region, according to the FAO. “Existing barriers still need to be addressed, including the lack of rural infrastructure, funding for agriculture and investment in research and development, agri-innovation, and agricultural entrepreneurship,” the specialized UN agency says.

Other observers lament digital illiteracy, limited internet access in some rural areas, and electricity difficulties.

But all these problems have solutions, according to Gaddas. Today, farmers who can’t read or write receive audio messages in local languages, and image-based messages via mobile phones, to work around literacy barriers.

“For the problems of electricity and internet access, there are also many solutions such as mini solar panels, or 4G and 3G, which cover internet issues in some remote areas,” he says. He’s convinced that technology is now overcoming all these difficulties.  “To receive market prices, for example, you just have to open your phone,” he says. “Even the most basic one can receive the technology and it doesn’t require a PhD in computer science.”

Cybersecurity threats and their resulting breaches are top of mind for CIOs today. Managing such risks, however, is just one aspect of the entire IT risk management landscape that CIOs must address.

Equally important is reliability risk – the risks inherent in IT’s essential fragility. Issues might occur at any time, anywhere across the complex hybrid IT landscape, potentially slowing or bringing down services.

Addressing such cybersecurity and reliability risks in separate silos is a recipe for failure. Collaboration across the respective responsible teams is essential for effective risk management.

Such collaboration is both an organizational and a technological challenge – and the organizational aspects depend upon the right technology.

The key to solving complex IT ops problems collaboratively, in fact, is to build a common engineering approach to managing risk across the concerns of the security and operations (ops) teams – in other words, a holistic approach to managing risk. 

Risk management starting point: site reliability engineering

By engineering, we mean a formal, quantitative approach to measuring and managing operational risks that can lead to reliability issues. The starting point for such an approach is site reliability engineering (SRE). 

SRE is a modern technique for managing the risks inherent in running complex, dynamic software deployments – risks like downtime, slowdowns, and the like that might have root causes anywhere, including the network, the software infrastructure, or deployed applications.

The practice of SRE requires dealing with ongoing tradeoffs. The ops team must be able to make fact-based judgments about whether to increase a service’s reliability (and hence, its cost), or lower its reliability and cost to increase the speed of development of the applications providing the service.

Error budgets: the key to site reliability engineering

Instead of targeting perfection – technology that never fails – the real question is how far short of perfect reliability an organization should aim for. We call this quantity the error budget.

The error budget represents the total number of errors a particular service can accumulate over time before users become dissatisfied with the service.

Most importantly, the error budget should never equal zero. The operator’s goal should never be to entirely eliminate reliability issues, because such an approach would both be too costly and take too long – thus impacting the ability for the organization to deploy software quickly and run dynamic software at scale.

Instead, the operator should maintain an optimal balance among cost, speed, and reliability. Error budgets quantify this balance.
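
The arithmetic behind an error budget is simple enough to sketch. Assuming a service-level objective (SLO) expressed as a target success rate, the budget is the fraction of requests allowed to fail over the measurement window (the numbers below are illustrative):

```python
# Sketch of error-budget arithmetic for an SLO-driven service.
def error_budget(slo_target: float, total_requests: int) -> int:
    """Failed requests the service may accumulate before the budget is spent."""
    return round(total_requests * (1 - slo_target))

def budget_remaining(slo_target: float, total_requests: int,
                     failed_requests: int) -> int:
    """How much budget is left after observed failures."""
    return error_budget(slo_target, total_requests) - failed_requests

# With a 99.9% SLO over one million requests, the budget is 1,000 failures.
print(error_budget(0.999, 1_000_000))           # 1000
print(budget_remaining(0.999, 1_000_000, 400))  # 600
```

When the remaining budget is healthy, the team can ship changes aggressively; when it nears zero, the balance tips toward reliability work.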

Bringing SRE to cybersecurity        

In order to bring the SRE approach to mitigating reliability risks to the cybersecurity team, it’s essential for the team to calculate risk scores for every observed event that might be relevant to the cybersecurity engineer. 

Risk scoring is an essential aspect of cybersecurity risk management. “Risk management… involves identifying all the IT resources and processes involved in creating and managing department records, identifying all the risks associated with these resources and processes, identifying the likelihood of each risk, and then applying people, processes, and technology to address those risks,” according to Jennifer Pittman-Leeper, Customer Engagement Manager for Tanium.

Risk scoring combined with cybersecurity-centric observability gives the cybersecurity engineer the raw data they need to make informed threat mitigation decisions, just as reliability-centric observability provides the SRE with the data they need to mitigate reliability issues.
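
One common, simple scheme scores each event as likelihood times impact; the scales, weights, and event names below are illustrative assumptions, not Tanium’s methodology:

```python
# Illustrative likelihood-x-impact risk scoring; scales are assumptions.
LIKELIHOOD = {"rare": 1, "possible": 3, "likely": 5}
IMPACT = {"low": 1, "moderate": 3, "severe": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Score an event so mitigation effort can be prioritized."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

# Rank hypothetical observed events by descending risk.
events = [("open RDP port", "likely", "severe"),
          ("stale test account", "possible", "moderate")]
ranked = sorted(events, key=lambda e: -risk_score(e[1], e[2]))
print([(name, risk_score(l, i)) for name, l, i in ranked])
# [('open RDP port', 25), ('stale test account', 9)]
```

Whatever the exact scale, the point is that scores make threat mitigation comparable and sortable, just as error counts do for reliability.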

Introducing the threat budget

Once we have a quantifiable, real-time measure of threats, then we can create an analogue to SRE for cybersecurity engineers.

We can posit the notion of a threat budget, which would represent the total number of unmitigated threats a particular service can accumulate over time before a corresponding compromise adversely impacts the users of the service.

The essential insight here is that threat budgets should never be zero, since eliminating threats entirely would be too expensive and would slow the software effort down, just as error budgets of zero would. “Even the most comprehensive… cybersecurity program can’t afford to protect every IT asset and IT process to the greatest extent possible,” Pittman-Leeper continued. “IT investments will have to be prioritized.”

Some threat budget greater than zero, therefore, would reflect the optimal compromise among cost, time, and the risk of compromise. 
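
By analogy with the error-budget sketch, a threat budget could be tracked by counting unmitigated events whose risk score crosses an action threshold; the numbers and threshold here are illustrative assumptions:

```python
# Sketch of threat-budget tracking by analogy with error budgets.
# The action threshold and scores below are illustrative assumptions.
def threats_open(risk_scores: list, action_threshold: int = 15) -> int:
    """Count unmitigated events scoring at or above the threshold."""
    return sum(1 for score in risk_scores if score >= action_threshold)

def threat_budget_remaining(budget: int, risk_scores: list,
                            action_threshold: int = 15) -> int:
    """Budget left after subtracting currently open high-risk threats."""
    return budget - threats_open(risk_scores, action_threshold)

scores = [25, 9, 18, 4, 16]
print(threat_budget_remaining(5, scores))  # 5 budget - 3 open threats = 2
```

As with error budgets, a shrinking remainder would signal that mitigation work should take priority over new investment elsewhere.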

We might call this approach to threat budgets Service Threat Engineering, analogous to Site Reliability Engineering.

What Service Threat Engineering really means is that based upon risk scoring, cybersecurity engineers now have a quantifiable approach to achieving optimal threat mitigation that takes into account all of the relevant parameters, instead of relying upon personal expertise, tribal knowledge, and irrational expectations for cybersecurity effectiveness.

Holistic engineering for better collaboration

Even though risk scoring uses the word risk, I’ve used the word threat to differentiate Service Threat Engineering from SRE. After all, SRE is also about quantifying and managing risks – except with SRE, the risks are reliability-related rather than threat-related.

As a result, Service Threat Engineering is more than analogous to SRE. Rather, they are both approaches to managing two different, but related kinds of risks.

Cybersecurity compromises can certainly lead to reliability issues (ransomware and denial of service being two familiar examples). But there is more to this story.

Ops and security teams have always had a strained relationship, as they work on the same systems while having different priorities. Bringing threat management to the same level as SRE, however, may very well help these two teams align over similar approaches to managing risk.

Service Threat Engineering, therefore, targets the organizational challenges that continue to plague IT organizations – a strategic benefit that many organizations should welcome.

Learn how Tanium is bringing together teams, tools, and workflows with a Converged Endpoint Management platform.

Coding has been an educational trend in Africa for many years, with schools and movements created in response to a pressing need in the digital age. That’s still the case today, except entrepreneurs and companies are now beginning to adopt tools for creating applications and services that don’t require coding. Those who have taken the plunge are trying to maximize the vast potential of these tools by educating as many people as possible about them, on a continent where digital literacy is not yet widespread.

Some African entrepreneurs have embarked on a mission to universalize these tools since many ICT professionals report that digital illiteracy in Africa is still a concerning reality.

In its 2021 study on the state of low-code/no-code development around the world and how different regions are approaching it, US cloud computing company Rackspace Technology found that adoption in the EMEA region is below the global average.

The report shows that the biggest barrier to adoption in this region may be skepticism about the benefits, and of all regions, EMEA is the least likely to say that low-code/no-code is a key trend. It’s also the only region where unclear benefits constitute one of the top three reasons for not adopting low-code/no-code.

“It’s possible that organizations in EMEA don’t have as many models for successful low-code/no-code implementation because EMEA organizations that have implemented it may not be seeing its biggest benefit,” the report says. “Forty-four percent say the ability to accelerate the delivery of new software and applications is a benefit — the lowest percentage of any region.”  

Some West African countries, such as Benin, understand that low-code/no-code tools are innovative and disruptive to the CIO community, but not universally trusted. “The general idea is that low-code/no-code is not yet mature enough to be used on a large scale because of its application to specific cases where security needs and constraints are low,” says Maximilien Kpodjedo, president of the CIO Association of Benin and digital adviser to Beninese president Patrice Talon. “These technologies are at the exploratory or low-use stage.”

However, he notices interest in these technologies is growing among CIOs.

“We have commissions working and thinking about innovative concepts, including a commission of CIOs,” says Kpodjedo. “Even if there were projects, they’re marginal at this stage. This could change in the near future, though, thanks to the interest generated.”

Other entrepreneurs, though, have embraced these tools and want others to benefit from what they’ve seen in them. Actors and leaders of incubators and educational movements are doing what they can for the sake of those in both technological and non-technological sectors.

Many become low-code/no-code coaches or consultants for companies, while others lead awareness campaigns and training on these technologies within incubators or movements.

It’s almost child’s play for some entrepreneurs who use it to automate trivial tasks or create internal software for their companies. They don’t need to be experts in coding or even have deep knowledge of ICT. They sometimes come across a technology by chance and end up adopting it because they see its importance and benefits.

This is a reality described by Kenyan Maureen Esther Achieng, CEO of Nocode Apps, Inc., who got into no-code technologies on the advice of Mike Williams, otherwise known as Yoroomie, a friend who built and launched Studiotime, an online marketplace community for music studio rentals, in one night without writing code.

“Since then, through constant self-study and countless mentorships from some of the best coaches in the global no code space, I’ve helped hundreds of people get started in technology,” she said.

Achieng has now taken up technology as her “divine mission.” Her company specializes in training non-technical entrepreneurs and start-ups, and she teaches people how to leverage no-code technology to launch their applications and websites in hours without writing code or hiring developers.

In the Democratic Republic of Congo in Central Africa, software engineer Bigurwa Buhendwa Dom also discovered no code through a relative.

“I had no idea that such technology could exist or at least be so advanced,” he says. “As a software engineer, setting up a working application or even a demo is a real challenge. It takes months or years in some cases. I was fascinated by the speed with which you can build a prototype or a trial version with such a technology, which immediately reduces costs and allows you to test the idea in the market.”

He now offers independent consultations in his country where he has noticed that most people don’t know what it’s about.

A simple environment for companies

Public and private companies also see an opportunity in these services. In Cameroon, for instance, the land credit authority is banking on an agile low-code platform adapted to its application development needs, along with the licenses required to implement such a platform, its operation, and the production of reports.

In Senegal and Gabon, the French multinational Bolloré Transports and Logistics uses Microsoft’s Power Platform to give employees a simple environment for creating application software without going through traditional computer programming, according to Microsoft, which supported the teams with training workshops beforehand. The company adds that this low-code/no-code approach has enabled Bolloré employees to develop their creativity by taking ownership of the application creation tools, and to move toward faster, more intelligent and optimized processes.

For Jean-Daniel Elbim, director of digital transformation at Bolloré, these tools give operational staff more control and bring more agility to local teams.

“Obviously, the data must be managed,” he says. “We need to define a framework, and there needs to be a group of experts at central level, available to respond to local issues.”

Evangelization and altruistic services

In Chad, ICT expert Salim Alim Assani is co-founder and manager of WenakLabs, a media lab and tech hub incubator described as a niche of Chadian geek talent. According to him, low code is part of the daily life of the group’s entrepreneurs.

“We use this tool to set up websites and minimum viable products for our entrepreneurs,” he says. “It’s a real success on projects that don’t require a lot of customization in terms of functionality, from showcase sites to simple mobile applications, for example. We offer a lot of training in this area too. In the framework of certain projects, we’ve initiated 50 young women to the use of low code, particularly the design of websites with WordPress. In the same context, we’re training 25 digital referents, whose daily professional life will be centered on low code. We also regularly organize awareness-raising events on the issue.”

Sesinam Dagadu also makes extensive use of no code at SnooCode, a digital location solution in Nigeria. Based in London, he’s the founder and CTO of this alphanumeric system that allows addresses to be stored, shared and navigated, even without internet or cellular access.

“I think the biggest place we haven’t used code is on our website,” he says. “We’re creating systems to allow people who build on top of SnooCode to do so using no-code technologies.”

Going ahead without expensive developers

Dagadu appreciates that he can now do without a developer thanks to these tools, even though he initially employed one, which incurred significant costs.

“At first we had to employ a web developer who did a lot of work, but it looks horrible using technology like Square Space,” he says. “In Africa, development costs are very high and can only be tackled by companies with a lot of funding. But with the growth of low-code/no-code, more people with bright ideas can bring them to life without the need for expensive developers.”

He noted that because these tools are still little known in Africa, people believe that every time they have an idea for an application or technology, they have to turn to an application developer. But coding less, or not at all, offers an easier entry into hard code, according to WenakLabs’ Assani. “It’s a way to be visible quickly, to offer your services to the world without resorting to the skills of a developer. Above all, you learn through experimentation.”

He sees this as an opportunity to widen the pathway to digital access and entry across Africa. Indeed, entrepreneurs believe these tools will democratize technology and resolve many issues. “This democratization could allow Nocode Apps to be used to solve the most difficult problems not only in Kenya but in Africa in general,” says Achieng. “African problems need technology because the population is young, tech-savvy and uses the internet a lot, so it’s in the interest of Africans to get on board and have more proactive and knowledgeable leadership, especially in IT, to make wise decisions that reflect the speed at which technology and business are changing.”

Africa, Emerging Technology, Innovation, No Code and Low Code

Google on Tuesday said it would be adding new cloud regions across five countries to meet growing computing demand from customers across the globe.

The new regions, announced at Google’s Cloud Next conference, will be made available across Austria, Greece, Norway, South Africa and Sweden, and will supplement new regions announced in August for New Zealand, Malaysia, Thailand and Mexico. However, Google did not confirm when each of these regions would be operational.

The company has already added five new regions this year in Milan, Paris, Madrid, Columbus (Ohio, US) and Dallas.

The addition of the new regions will take Google’s total cloud region tally to 35 regions and 106 zones, up from 34 regions and 103 zones in August this year. Regions are collections of zones, and zones within the same region are connected by high-bandwidth, low-latency network links.

As of December last year, that number stood at 29 cloud regions and 88 cloud zones globally.

Google and other major cloud service providers such as AWS, Microsoft and Oracle have been investing heavily in expanding their cloud regions.

In July, Microsoft CEO Satya Nadella said the company will launch 10 new cloud regions this fiscal year.

Similarly, in June, Oracle CEO Safra Catz said the company expects to add another six regions in fiscal 2023. By July, the company had already launched two of these new sovereign regions for the European Union.

Data sovereignty adds fuel to cloud region construction

Data sovereignty regulations (rules requiring companies to keep certain data in-country for security and privacy reasons) have given impetus to the construction of cloud regions around the world. The EU has taken the lead in advancing data privacy rules with the GDPR, and during the Cloud Next conference, Google also announced that it was expanding its portfolio of Sovereign Solutions to support European customers’ current and emerging sovereignty needs.

Google Cloud Sovereign Solutions comprise Sovereign Controls, designed to help organizations manage data sovereignty requirements, as well as Supervised Cloud and Hosted Cloud options to help address operational and software sovereignty concerns. To make these controls available, Google has teamed up with a number of telecom companies in the EU, including T-Systems in Germany, S3NS in France, Minsait in Spain, and Telecom Italia in Italy.

Economic impact of cloud regions

Google claims that opening new cloud regions contributes to local economic and job growth.

“These cloud regions help bring innovations from across Google closer to our customers around the globe and provide a platform that enables organizations to transform the way they do business,” Sachin Gupta, vice president of infrastructure at Google Cloud, wrote in a blog post.

The nine new cloud regions announced this year are expected to collectively contribute $40 billion to global GDP by 2030 and create 314,400 jobs, according to a Google-commissioned study by consulting firm AlphaBeta.

At the regional level, the three cloud regions—New Zealand, Malaysia and Thailand—announced in APAC are projected to contribute $10 billion to the region’s GDP by 2030 and create 86,500 jobs, the study showed.

Similarly, the five regions announced this year across Europe, the Middle East and Africa, are projected to contribute a cumulative $18.9 billion to EMEA’s GDP by 2030, and support creation of more than 110,500 jobs, AlphaBeta said in its report.

Cloud Computing

The benefits of analyzing vast amounts of data, whether long-term or in real time, have captured the attention of businesses of all sizes. Big data analytics has moved beyond the rarefied domain of government and university research environments equipped with supercomputers to include businesses of all kinds that are using modern high performance computing (HPC) solutions to get their analytics jobs done. It’s big data meets HPC ― otherwise known as high performance data analytics.

Bigger, Faster, More Compute-intensive Data Analytics

Big data analytics has relied on HPC infrastructure for many years to handle data mining processes. Today, parallel processing solutions handle massive amounts of data and run powerful analytics software that uses artificial intelligence (AI) and machine learning (ML) for highly demanding jobs.

A report by Intersect360 Research found that “Traditionally, most HPC applications have been deterministic; given a set of inputs, the computer program performs calculations to determine an answer. Machine learning represents another type of application that is experiential; the application makes predictions about new or current data based on patterns seen in the past.”

This shift to AI, ML, large data sets, and more compute-intensive analytical calculations has contributed to the growth of the global high performance data analytics market, which was valued at $48.28 billion in 2020 and is projected to grow to $187.57 billion in 2026, according to research by Mordor Intelligence. “Analytics and AI require immensely powerful processes across compute, networking and storage,” the report explained. “As a result, more companies are increasingly using HPC solutions for AI-enabled innovation and productivity.”

Benefits and ROI

Millions of businesses need to deploy advanced analytics at the speed of events. A subset of these organizations will require high performance data analytics solutions. Those HPC solutions and architectures will benefit from the integration of diverse datasets from on-premises to edge to cloud. The use of new sources of data from the Internet of Things to empower customer interactions and other departments will provide a further competitive advantage to many businesses. Simplified analytics platforms that are user-friendly resources open to every employee, customer, and partner will change the responsibilities and roles of countless professions.

How does a business calculate the return on investment (ROI) of high performance data analytics? It varies with different use cases.

For analytics used to help increase operational efficiency, key performance indicators (KPIs) contributing to ROI may include downtime, cost savings, time-to-market, and production volume. For sales and marketing, KPIs may include sales volume, average deal size, revenue by campaign, and churn rate. For analytics used to detect fraud, KPIs may include number of fraud attempts, chargebacks, and order approval rates. In a healthcare environment, analytics used to improve patient outcomes might include key performance indicators that track cost of care, emergency room wait times, hospital readmissions, and billing errors.
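As a rough illustration of that calculation, here is a minimal sketch, with entirely hypothetical figures, of how KPIs like those above might roll up into a single ROI number:

```python
# A minimal sketch of the ROI arithmetic for an analytics investment, using
# hypothetical figures. Real calculations would pull these values from the
# KPIs named above (downtime avoided, cost savings, fraud losses prevented).

def analytics_roi(annual_gains, annual_costs):
    """ROI = (gain - cost) / cost, expressed as a percentage."""
    gain = sum(annual_gains.values())
    cost = sum(annual_costs.values())
    return 100.0 * (gain - cost) / cost

gains = {
    "downtime_avoided": 450_000,        # fewer production outages
    "fraud_losses_prevented": 300_000,  # earlier fraud detection
    "faster_time_to_market": 150_000,   # revenue pulled forward
}
costs = {
    "hpc_infrastructure": 400_000,
    "software_licences": 120_000,
    "staff_and_training": 180_000,
}

print(f"ROI: {analytics_roi(gains, costs):.1f}%")  # -> ROI: 28.6%
```

The function itself is trivial; the hard part, as the use cases above suggest, is attributing dollar values to each KPI in the first place.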

Customer Success Stories

Combining data analytics with HPC:

- A technology firm applies AI, machine learning, and data analytics to client drug diversion data from acute, specialty, and long-term care facilities and delivers insights within five minutes of receiving new data, while maintaining an HPC environment with 99.99% uptime to comply with service level agreements (SLAs).
- A research university was able to tap into 2 petabytes of data across two HPC clusters with 13,080 cores to create a mathematical model to predict behavior during the COVID-19 pandemic.
- A technology services provider is able to inspect 124 moving railcars ― a 120% reduction in inspection time ― and transmit results in eight minutes, based on processing and analyzing 1.31 terabytes of data per day.
- A race car designer is able to process and analyze 100,000 data points per second per car ― one billion in a two-hour race ― that are used by digital twins running hundreds of different race scenarios to inform design modifications and racing strategy.
- Scientists at a university research center are able to utilize hundreds of terabytes of data, processed at I/O speeds of 200 Gbps, to conduct cosmological research into the origins of the universe.

Data Scientists are Part of the Equation

High performance data analytics is gaining stature as more and more data is being collected.  Beyond the data and HPC systems, it takes expertise to recognize and champion the value of this data. According to Datamation, “The rise of chief data officers and chief analytics officers is the clearest indication that analytics has moved from the backroom to the boardroom, and more and more often it’s data experts that are setting strategy.” 

No wonder skilled data analysts continue to be among the most in-demand professionals in the world. The U.S. Bureau of Labor Statistics predicts that the field will be among the fastest-growing occupations for the next decade, with 11.5 million new jobs by 2026. 

For more information read “Unleash data-driven insights and opportunities with analytics: How organizations are unlocking the value of their data capital from edge to core to cloud” from Dell Technologies. 

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

Data Management

Cyber hygiene describes a set of practices, behaviors and tools designed to keep the entire IT environment healthy and at peak performance—and more importantly, it is a critical line of defense. Your cyber hygiene tools, as with all other IT tools, should fit the purpose for which they’re intended, but ideally should deliver the scale, speed, and simplicity you need to keep your IT environment clean.

What works best is dependent on the organization. A Fortune 100 company will have a much bigger IT group than a firm with 1,000 employees, hence the emphasis on scalability. Conversely, a smaller company with a lean IT team would prioritize simplicity.

It’s also important to classify your systems. Which ones are business critical? And which ones are external versus internal facing? External facing systems will be subject to greater scrutiny.

In many cases, budget or habit will prevent you from updating certain tools. If you’re stuck with a tool you can’t get rid of, you need to understand how your ideal workflow can be supported. Any platform or tool can be evaluated against the scale, speed and simplicity criteria.

An anecdote about scale, speed and complexity

Imagine a large telecom company with millions of customers and a presence in nearly every business and consumer-facing digital service imaginable. If your organization is offering an IT tool or platform to customers like that, no question you’d love to get your foot in the door.

But look at it from the perspective of the telecom company. No tool they’ve ever purchased can handle the scale of their business. They’re always having to apply their existing tools to a subset of a subset of a subset of their environment. 

Any tool can look great when it’s dealing with 200 systems. But when you get to the enterprise size, those three pillars are even more important. The tool must work at the scale, speed, and simplicity that meets your needs.

The danger of complacency

With all the thought leadership put into IT operations and security best practices, why is it that many organizations are content with having only 75% visibility into their endpoint environment? Or 75% of endpoints under management? 

It’s because they’ve accepted failure as built into the tools and processes they’ve used over the years. If an organization wants to stick with the tools it has, it must:

- Realize their flaws and limitations
- Measure them on the scale, speed and simplicity criteria
- Determine the headcount required to do things properly

Organizations cannot remain attached to the way they’ve always done things. Technology changes too fast. The cliché of “future proof” is misleading. There’s no future proof. There’s only future adaptable.

Old data lies

To stay with the three criteria of strong cyber hygiene—scale, speed and simplicity—nothing is more critical than the currency of your data. Any software or practice that supports making decisions on old data should be suspect. 

Analytics help IT and security teams make better decisions. When they don’t, the reason is usually a lack of quality data. And the quality issue is often around data freshness. In IT, old data is almost never accurate. So decisions based on it are very likely to be wrong. Regardless of the data set, whether it’s about patching, compliance, device configuration, vulnerabilities or threats, old data is unreliable.

The old data problem is compounded by the number of systems a typical large organization relies on today. Many tools we still use were made for a decades-old IT environment that no longer exists. Nevertheless, today tools are available to give us real-time data for IT analytics.
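One way to act on that principle is to treat any record older than a chosen threshold as suspect by default. The sketch below is illustrative only; the field names and the 15-minute threshold are assumptions, not a reference to any particular product:

```python
# A minimal sketch of a data-freshness gate: endpoint records older than a
# chosen staleness threshold are excluded from decision-making and flagged
# for re-collection. Field names and the 15-minute threshold are illustrative.

from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(minutes=15)

def partition_by_freshness(records, now=None):
    """Split records into (fresh, stale) based on their last-seen timestamp."""
    now = now or datetime.now(timezone.utc)
    fresh = [r for r in records if now - r["last_seen"] <= MAX_AGE]
    stale = [r for r in records if now - r["last_seen"] > MAX_AGE]
    return fresh, stale

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
records = [
    {"host": "web-01", "patched": True, "last_seen": now - timedelta(minutes=3)},
    {"host": "db-02",  "patched": True, "last_seen": now - timedelta(hours=6)},
]
fresh, stale = partition_by_freshness(records, now=now)
print([r["host"] for r in fresh])  # only web-01's record is recent enough to trust
```

The point of the gate is the stale list as much as the fresh one: db-02 may well still be patched, but a six-hour-old record should trigger re-collection, not a decision.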

IT hygiene and network data capacity

Whether you’re a 1,000-endpoint or 100,000-endpoint organization, streaming huge quantities of real-time data will require network bandwidth to carry it. You may not have the infrastructure to handle real-time data from every system you’re operating. So, focus on the basics. 

That means you need to understand and identify the core business services and applications that are most in need of fresh data. Those are the services that keep a business running. With that data, you can see what your IT operations and security posture look like for those systems. Prioritize. Use what you have wisely.

To simplify gathering the right data, streamline workflows

Once you’ve identified your core services, getting back to basics means streamlining workflows. Most organizations are in the mindset of “my tools dictate my workflow.” And that’s backward.

You want a high-performance network that has low vulnerability and strong threat response.  You want tools that can service your core systems, do efficient patching, perform antivirus protection and manage recovery should there be a breach. That’s what your tooling should support. Your workflows should help you weed out the tools that are not a good operational fit for your business.

Looking ahead

It’s clear the “new normal” will consist of remote, on-premises, and hybrid workforces. IT teams now have the experience to determine how to update and align processes and infrastructure without additional disruption.

Part of this process will center on evaluating and procuring tools that provide the scale, speed and simplicity necessary to manage operations in a hyperconverged world while:

- Maintaining superior IT hygiene as a foundational best practice
- Assessing risk posture to inform technology and operational decisions
- Strengthening cybersecurity programs without impeding worker productivity

Dive deeper into cyber hygiene with this eBook.

Analytics

Becoming a sustainable enterprise is no longer a “nice to have” priority – reducing a company’s carbon footprint and fighting climate change is now mainstream. In fact, more than 3,200 companies have set science-based carbon targets, and thousands of companies from around the world are pledging to reach net-zero emissions by either 2040 or 2050. While CIOs and CTOs play critical roles in a company’s digital transformation efforts, only about half of them are part of their organization’s sustainability goal leadership team, and even fewer are assessed on a company’s achievement of its sustainability goals.

“Worldwide IT leaders are adopting and integrating a sustainable approach to their business models,” says Sanjay Singh, executive vice president and head, Alphabet and HCL Google Ecosystem, HCL. “A sustainable model is built on an entrepreneurial approach to collaboration and building together, while making sure that the impact on the ecosystem is reduced steadily. Adopting sustainable innovation practices demands a change in the outlook and the organizational culture of the company, including the current services and practices.” Adopting a sustainable model mindset across the enterprise fosters an environment for collaboration, innovation, and entrepreneurship.

It will be key for leaders to understand how technology can help in their sustainability transformation. In a recent survey of 1,500 global executives, about three in four executives (78%) cite technology as critical for their future sustainability efforts, attesting that it helps transform operations, socialize their initiatives more broadly, and measure and report on the impact of their efforts. It’s imperative that sustainability teams, tech experts and executives come together to make the authentic, impactful progress we need to make.

“We’re entering a new era of sustainability-driven business transformation – where organizations that embrace sustainability as core to their business will be the ones that succeed. Cloud is key to enabling and accelerating that transformation,” said Justin Keeble, managing director of global sustainability at Google Cloud. “As the cleanest cloud in the industry, every one of our customers immediately transforms their IT carbon footprint the moment they operate on Google Cloud. But, we know that’s not enough. Embedding sustainability across every part of the organization will result in not just better business practices, it will transform many industries, and create entirely new businesses.”

Here are seven key ways that IT leaders can contribute to sustainability efforts that go beyond just “turning the data center green”:

- Technology: Improve software efficiency to reduce hardware energy costs, including the use of cloud software; adopting Internet of Things sensors to improve efficiency; exploring artificial intelligence and machine learning to better predict ways to become more sustainable; and utilizing Big Data analytics to monitor energy usage – you can’t change what you can’t measure.
- Natural resources: In addition to reducing their carbon footprint, companies need to address water usage and improve waste management practices. As a first step, companies can adopt data analytics to help reduce food or product waste.
- Circular economy: Re-use infrastructure for new technology initiatives instead of retiring equipment. This involves moving from a “cradle-to-grave” strategy to more of a “cradle-to-cradle” approach, which intelligently recycles and reuses IT assets into the next generation of products to create a closed loop.
- Supply chain: Work with IT vendors and suppliers to ensure they have sustainability practices and build visibility across the supply chain. If a supplier is not working on their own sustainability goals, find one who is more in line with your company’s goals.
- Data centers: When considering a move to the cloud, choose a green cloud provider that has a sustainability strategy that reduces the environmental impact of their data centers.
- Data: Use data to share information around sustainability efforts. For example, the Carbon Footprint tool from Google Cloud lets companies accurately measure, track, and report on the gross carbon emissions associated with the electricity of their cloud usage.
- Financial reporting: Assist the finance groups within the enterprise around sustainable finance and Environmental, Social, and Governance for banking, financial services, and insurance partners.

As you can see, the list of ideas goes beyond just adding recycling bins in the data center. These initiatives can help CIOs and CTOs take a strong leadership role in their enterprise’s sustainability initiatives and help validate their place in a digital and sustainable transformation effort.
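On the measurement theme, cloud carbon reporting mostly comes down to simple arithmetic: emissions are approximated as energy consumed times the carbon intensity of the local grid. The sketch below uses made-up region names and intensity figures to show the shape of that calculation; it is not a description of how Google’s Carbon Footprint tool is implemented:

```python
# A minimal sketch of location-based carbon accounting for cloud usage:
# emissions ~= energy consumed x carbon intensity of the local grid.
# Region names and intensity figures are illustrative, not real grid data.

# gCO2e emitted per kWh of electricity, by (hypothetical) cloud region
GRID_INTENSITY_G_PER_KWH = {
    "region-hydro": 30,    # mostly hydroelectric generation
    "region-mixed": 350,   # mixed generation
    "region-coal": 820,    # coal-heavy grid
}

def monthly_emissions_kg(energy_kwh_by_region):
    """Return total kgCO2e for a month of usage across regions."""
    grams = sum(
        kwh * GRID_INTENSITY_G_PER_KWH[region]
        for region, kwh in energy_kwh_by_region.items()
    )
    return grams / 1000.0

usage = {"region-hydro": 12_000, "region-coal": 2_000}
print(f"{monthly_emissions_kg(usage):.0f} kgCO2e")  # -> 2000 kgCO2e
```

The same arithmetic also makes the “green cloud provider” point above concrete: shifting the 2,000 kWh from the coal-heavy region to the hydro one would cut this bill from 2,000 kg to about 420 kg.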

To learn more about how HCL Technologies and Google Cloud can assist your company with its sustainability journey, click here.

Cloud Computing

Due to the social distancing imposed over the last couple of years, digital technology has accelerated across all areas of life in Africa, especially in health, in a relentless quest for solutions to the Covid-19 crisis. Africa has advanced thanks to entrepreneurs who have tried to make the most of digital opportunities in a sector with major shortcomings, most notably the chronic shortage of skilled personnel on the continent, which the WHO detailed in a June 2022 report predicting a shortage of millions of health professionals in Africa by 2030, an increase of 45% since 2013, when the last estimates were made. Yet the report also envisaged a “promising future” for e-health on the continent, noting that a new wave of mobile technology is radically changing the way health care is delivered in urban and rural communities.

However, numbers and opinions of the overall situation aren’t encouraging.

As of November 2020, 34 member states in the WHO African region had developed digital health strategies, but these had so far been implemented in only 12 countries.

According to the 2019 Global Digital Health Index, Africa has low digital health maturity and lags behind the global average in legislation, policy and compliance, standards and interoperability, and infrastructure.

Between telemedicine, awareness and prevention through mobile health promotion applications, and the monitoring of patients and epidemics via electronic medical records, the scope of e-health is wide, and there are many hurdles to overcome in Africa’s so-called “medical deserts,” according to Hadi Zarzour, manager for Africa and the Middle East at Evolucare, a French health software publisher and expert in health information systems.

“Today in Africa, there’s a growing ambition to go digital because it allows us to secure data,” says Zarzour. “We no longer lose the patient’s data as we used to do with paper. The information is preserved and digital allows us to store, trace and archive this data for better medical follow-up and to avoid bad communication of medical information.”

CIO, Digital Transformation, Medical Devices, Startups