Enterprise resource planning (ERP) software vendor IFS has agreed to acquire Falkonry, the developer of an AI-based time-series data analytics tool, to boost its enterprise asset management (EAM) services portfolio.

IFS has an eye on the growing number of connected machines in factories, and will add Falkonry’s self-learning Time Series AI Suite, which can help enterprises manage and maintain manufacturing equipment, to its existing enterprise simulation and AI-based scheduling and optimization capabilities.  

EAM can be considered a subset of ERP software, providing tools and applications to manage the lifecycle of physical assets in an enterprise, in order to maximize their value. The global EAM market is expected to grow at a compound annual growth rate (CAGR) of 8.7% to reach $5.5 billion by 2026, from $3.3 billion in 2020, according to research from MarketsandMarkets.
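As a quick sanity check on those figures, the implied compounding is easy to verify; a minimal sketch, assuming annual compounding from the 2020 base:

```python
# Verify the quoted EAM market projection: $3.3B in 2020 growing at an 8.7% CAGR to 2026.
base, cagr, years = 3.3, 0.087, 6  # USD billions, annual rate, 2020 -> 2026

projected = base * (1 + cagr) ** years
print(f"Projected 2026 market size: ${projected:.1f}B")  # ~$5.4B, in line with the quoted $5.5B
```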

Cupertino-headquartered Falkonry, which was founded in 2012 by CEO Nikunj Mehta, has customers across North America, South America, and Europe, including the US Navy and Air Force, Ternium, North American Stainless, and Harbour Energy. It has raised $13.3 million in funding from investors including Zetta Venture Partners, SparkLabs Accelerator, Polaris Partners, Presidio Ventures, Basis Set Ventures, Fortive, and Next47. IFS expects to complete the acquisition of Falkonry by the fourth quarter of 2023. In June, it announced the acquisition of Poka, a connected worker software provider, to boost overall factory productivity. And last year it scooped up Netherlands-based Ultimo to help meet demand for cloud-based enterprise asset management technology.


Offering an extensive portfolio of ICT solutions and services in conjunction with its highly available data centers and high-speed broadband internet and telecommunications networks for consumers and businesses, Dialog Enterprise is one of the most trusted information and communication technology brands in Asia. Now it is also the first provider in Sri Lanka to earn the VMware Sovereign Cloud distinction.

“More than 35,000 enterprises rely on us for the compute, storage and networking power they need to excel,” says Venura Mendis,  Head of ICT Business at Dialog Enterprise. “The cloud of course is a game-changer and our Dialog Enterprise Cloud gives customers in a wide range of industries the capabilities and flexibility inherent in a software-defined data center, complete with a self-service portal and hybrid cloud capabilities.”

Based on VMware technology and featuring numerous capabilities, among them Container PaaS service, Backup-as-a-Service and Disaster Recovery-as-a-Service, the Dialog Enterprise Cloud is the choice of many of the country’s industry leaders. This includes stalwarts in the nation’s banking, construction, education, government, healthcare, hospitality and dining, manufacturing, retail and transportation industries.

“We understand that different industries require different business solutions,” adds Mendis. “For that reason, we offer a broad array of solutions and services designed for various industries that can be customized for any business, along with completely bespoke offerings that draw on the extensive design and development expertise our team offers. Increasingly, we’ve seen a lot of demand for sovereign cloud offerings, particularly in highly regulated industries where the stakes are high and data privacy demands are great.”

To address this, Dialog Enterprise sought to earn the VMware Sovereign Cloud distinction, becoming the first provider in the entire region to do so – a feat that echoed its earlier honor of being the first company in Sri Lanka to provide VMware Cloud Verified services. Mendis is quick to stress that while Dialog Enterprise is currently the only provider to have earned the distinction, the demand for sovereign clouds is great and growing rapidly.

“When a solution is built on a robust framework like VMware’s, it simply and seamlessly works across VMware-based multi-cloud platforms, which makes it easier to lift and shift workloads from on-premises environments to the cloud while allowing for continual modernization at a much lower cost of ownership,” he says. “Now enterprises also increasingly want and demand the added ability to ensure full data sovereignty and jurisdictional control at all times. This includes making sure that data is never accessed by foreign parties in the context of maintenance or service.”

Mendis notes that because the data in its sovereign cloud is subject to the full jurisdictional control of Sri Lanka, full compliance with the country’s privacy laws can be guaranteed. This stands in stark contrast to the clouds offered by hyperscalers, but it is not the only reason customers are choosing a Sri Lankan sovereign cloud.

“Yes, customers are concerned about hosting their sensitive data in public clouds due to issues like data confidentiality, data loss, data storage needs, security and transparency issues,” says Mendis. “But it’s not all about safeguards, or the ability we have to guarantee compliance with local laws and regulations. Enterprises also turn to us because our sovereign Infrastructure-as-a-Service offers low-latency connectivity options, an intuitive portal, consumption-based pricing models, and high performance.”

Notably, it also integrates seamlessly with the myriad innovative solutions the company offers, from those that harness the potential of the Internet of Things to broader enterprise solution platforms. These include managed SD-WAN and SASE solutions, Data-as-a-Service offerings, and a range of cyber and physical security solutions, among many others.

“As organizations across industries look to digital transformation to drive growth, the cloud will by necessity play a core role,” says Mendis. “And it is no surprise that it is increasingly important to ensure that sensitive data be stored and processed in a secure and compliant environment. The sovereign cloud solution we offer gives our clients much-needed peace of mind and the knowledge that their data is securely stored in a physically and logically isolated environment, managed by a team of experts. It’s only natural that more organizations will want that.”

Learn more about Dialog Enterprise and its partnership with VMware here.


In the face of structural change and rampant crises, the world—and the technologies reshaping it—is experiencing a drastic shift. Even the very nature of disruption is evolving, with challenges such as talent gaps and inflationary pressures frequently demanding our immediate attention. To outpace these events, CIOs need to leverage resilience capabilities as a competitive advantage. Seizing opportunities to differentiate the business through finance, supply chains, business ecosystems, and sustainability can help them thrive in this new digital era.

With this outlook, some may expect IT spending to shrink drastically. Yet this doesn’t seem to be the case. On the contrary, spending on digital technology is set to grow at 3.5 times the rate of the overall economy in 2023. It’s clear that more organisations are looking to establish a foundation for operational excellence, competitive differentiation, and long-term growth.

A handful of businesses are already doing so as they transition from just adopting digital transformation to embodying what it means to be a digital business. Whereas digital transformation in its earliest iteration—digital transformation 1.0—focused on driving mobility and tapping into the then-nascent Internet of Things, the subsequent phase prominently features technologies such as artificial intelligence and machine learning, and ways to extend their use across every aspect of the business.

Digital businesses are the next step in this evolution; they are staunchly digital-first and are hyper-focused on delivering business value and outcomes through digital innovation. This will become a crucial transition for businesses. After all, 40% of total revenue for Global 2000 organisations will be generated by digital products, services, and experiences by 2026.

Understanding the successes of past IDC Future Enterprise Awards winners

Amidst the disruptive forces in the business landscape, forward-looking digital businesses are leveraging digital technologies to optimise their operations and performance.

We look at Midea Industrial Internet (MIOT), the recipient of the Future Enterprise of the Year award at the IDC Future Enterprise Awards 2022. The company is actively deploying AI and advanced digital technologies to enhance the user experience while empowering its employees and partners with the right digital tools. This has resulted in a flexible and efficient supply chain that has reduced its delivery cycle by 56% and enhanced the way it collaborates with partners. At the same time, it also increased the inventory turnover of finished products by 125%, paving the way for other enterprises in the manufacturing industry to become the next digital business.

Visionary leaders, too, play a significant role in every digital business, as they are instrumental in steering the business toward an ambitious vision. Suresh Sundararajan, the recipient of the CDO of the Year Award (Singapore) in 2022, was then Chief Digital Officer and is now CEO of Olam Ventures and Olam Technology Services.

Sundararajan has redefined the company’s purpose of reimagining global agriculture and food systems, and this is buoyed by his beliefs in good leadership: the ability to manage ambiguity and look at things through an experimental framework, which can open doors to more possibilities for the business. This is in line with his advice for future leaders, which is to invest in people without putting constraints on their suggestions and ideas. At the same time, transformation is dependent on using the right technologies for the problems a business is solving. According to Sundararajan, businesses should not be led by the most current technologies to embrace transformation. A better approach is to identify what the issues are, and then look into the technology that can be deployed to solve them.

Taking risks with emerging, cutting-edge technologies is another approach many digital businesses share. This is what Zuellig Pharma did, which led to it becoming the recipient of the Best in Future of Intelligence award and the Special Award for Digital Resiliency in 2022.

The pharmaceutical company wanted to build an ecosystem of solutions that would better connect patients and its pharma clients. That’s why Zuellig Pharma invested heavily in data and data analytics, becoming a pioneer in the use of blockchain as part of its traceability solution. These technologies were used to develop eZTracker, which helps bring pharma clients and customers together by delivering data-driven insights, such as verifying that healthcare products being delivered are from an authorised source. Zuellig Pharma has also created eZRx, a B2B eCommerce platform for buying and selling healthcare products, and eZHealth, an app that offers patients access to comprehensive healthcare services.

Recognising the next digital business: the Asia Pacific Japan IDC Future Enterprise Awards

To recognise the next generation of digital businesses and leaders, the 2023 Asia Pacific Japan IDC Future Enterprise Awards is now open for nominations. Any end-user organisation can nominate its project or initiative, or be nominated by a third-party organisation—agencies, associations, and IT suppliers—to gain recognition for the execution of an initiative in one of the categories.

The IDC Future Enterprise Awards follows a two-phased approach to determine country and regional winners. Each nomination is evaluated by IDC’s country and regional analysts against a standard assessment framework based on IDC’s Future Enterprise taxonomy. All country winners will then qualify for the regional competition, where a regional panel of judges comprising IDC Worldwide analysts, industry thought leaders, and members of academia will determine the regional winners.

Entries will be judged based on the eight building blocks organizations need to successfully close the digital gap and become Future Enterprises in a digital-first world: Connectedness, Customer Experience, Digital Infrastructure, Industry Ecosystems, Intelligence, Operations, Trust, and Work.

Individual awards for CEO and CIO/CDO of the Year will also be handed out, as well as Special Awards for Digital Innovation, Digital Resiliency, and Sustainability. New in 2023, Special Awards for the best Digital Native Business as well as Smart Cities initiatives for Citizen Wellbeing, Connected City, and Digital Policies will also be given out.

Learn more at IDC Future Enterprise Awards 2023.

Register for an account at IDC Future Enterprise Awards Platform to submit an entry.


There’s no denying the fact that cloud technology is headed in many different directions, all aimed at providing rapid, scalable access to computing resources and IT services.

Yet as cloud technology evolves, many organizations are becoming more thoughtful and intentional in their transformation journey as they look to close the gap between simply running on the cloud and creating enterprise-wide value, observes Cenk Ozdemir, cloud and digital leader at business consulting firm PwC. “Organizations are really focused on achieving the elusive ROI of cloud that only a minority have been able to secure,” he says.

Here’s a quick rundown of the top enterprise cloud trends that promise to lead to greater ROI through innovation and enhanced performance.

1. AI/ML takes center stage

All the major cloud providers are rolling out AI/ML features and products, many designed for use with their core cloud offerings, says Scott W. Stevenson, technology partner at national law firm Culhane Meadows. He notes that most providers are also using AI/ML to improve provisioning of their own services.

While no one wants to be left behind if the promises of AI/ML hold true, there are varying levels of concern about reliability, security, and bias, particularly on the customer side, Stevenson says.

“There’s little doubt that adoption will continue at a fast pace overall, but larger enterprise customers — particularly in highly regulated industries — will be more measured,” he observes. Yet Stevenson doesn’t expect to see many enterprises sitting on the sideline. “It may be that the lessons they learned when migrating to cloud solutions in recent years will serve as a partial road map for adoption of AI/ML technologies — although on an accelerated timeline.”

Technology-driven organizations that prioritize innovation and digital transformation will be the most likely early AI/ML adopters in the cloud, says Michael Ruttledge, CIO and head of technology services at Citizens Financial Group. “Additionally, organizations that are data-driven and rely heavily on data analysis and insights will be able to leverage the best AI/ML services from different providers to enhance decision-making, automate processes, and personalize customer experiences,” he predicts.

Ruttledge notes that his enterprise’s cloud and AI/ML transition is driving stability, resiliency, sustainability, and speed to market. “Our AI/ML capabilities are increasing our ability to stay lean and drive insights into our internal and external customer services,” he says.

2. Industry clouds fuel innovation

Industry clouds are composable building blocks — incorporating cloud services, applications, and other key tools — built for strategic use cases in specific industries. Industry clouds enable greater flexibility when allocating resources, helping adopters make strategic choices on where to differentiate, explains Brian Campbell, a principal with Deloitte Consulting. “This ecosystem is evolving rapidly, driving the need to consistently monitor what exists and what works.”

By leveraging the ever-expanding number of cloud players serving industry-specific business needs in a composable way, industry clouds provide an opportunity to accelerate growth, efficiency, and customer experience. “Allowing for further differentiation on top of these solutions forges a close collaboration between business and technology executives on where to focus differentiation and resources,” Campbell says.

Enterprises looking to lead or stay ahead of their industry peers drove the first wave of industry cloud adopters. The success experienced by those organizations generated a rapid follower wave sweeping across a broader market. “Industry clouds are also leveling the playing field, so midmarket clients now have access to advanced capabilities they no longer need to build internally from the ground up to compete against their larger global competitors,” Campbell says.

3. Modernizing core apps for the cloud

Most large enterprises have sought quick wins on their digital transformation and cloud adoption journeys. They’ve brought smaller, less critical workloads to the cloud, containerized legacy applications to make them more cloud friendly, and adopted a cloud-first strategy for any new application development, observes Eric Drobisewski, senior enterprise architect at Liberty Mutual Insurance.

Yet an early emphasis on quick wins has left many vital business applications and related data stuck in enterprise data centers or private cloud ecosystems still in need of eventual migration. “Often, these workloads are tightly coupled to costly hardware and software [platforms] that were built at a time when all that was available was a vertically bound architecture,” Drobisewski explains.

Drobisewski warns that continuing to maintain parallel ecosystems with applications and data splintered across data centers, private clouds, public clouds, physical infrastructures, mainframes, and virtualized infrastructure is both complex and costly. “Simplification through modernization will reduce costs, address operational complexity, and introduce horizontal scale and elasticity to dynamically scale to meet emerging business needs,” he advises.

4. Making the most of the multicloud hybrid-edge continuum

The multicloud hybrid-edge continuum marks a crucial step forward for enterprises looking to drive ongoing reinvention by leveraging the convergence of disparate technologies. “Enterprises must focus on defining their business reinvention agenda and using the cloud continuum as an operating system to bring together data, AI, applications, infrastructure, and security to optimize operations and accelerate business value,” says Nilanjan Sengupta, cloud and engineering lead with Accenture Federal Services.

This trend will enable organizations to steer clear of an overreliance on a single public-cloud provider, Sengupta says. “It satisfies a multitude of business demands while unlocking innovation advancements in data, AI, cyber, and other fields, aligning capabilities to mission and business outcomes.” Hybrid architectures are rapidly becoming the only viable option for most organizations, he notes, since they provide the flexibility, security, and agility necessary to adapt to rapidly changing business needs.

The multicloud hybrid-edge continuum will impact CIOs and their enterprises by forcing them to address several key issues holistically, such as determining the right operating model, integrating and managing different technology platforms, finding the right talent, and managing costs, Sengupta says. “CIOs will need to develop strategies and roadmaps to transition to hybrid cloud environments, while also fostering a culture of agility and continuous innovation within their organizations,” he adds.

5. Reaping the rewards of cloud maturity

After years of aggressive adoption, the cloud is now firmly embedded in the IT and enterprise mainstream. “Cloud maturity is not something an organization gains overnight, but when taken seriously, it becomes a distinct competitive advantage,” says Drew Firment, vice president of enterprise strategies and chief cloud strategist at online course and certification firm Pluralsight.

Firment believes that cloud maturity typically starts with creating a Cloud Center of Excellence (CCoE) to establish a clear business intent, and gain experience with a single cloud before adding others. “Once an organization masters one cloud environment and is firmly established in the cloud-native maturity level, they can begin using other cloud providers for specific workloads,” he explains.

For example, Firment says, a customer service application might be built on Amazon Web Services while leveraging artificial intelligence services from Google Cloud Platform. “The goal is to align the strengths of each cloud provider to better support your specific business or customer needs.”

A purposeful and deliberate approach to a multicloud strategy gives CIOs and their organizations great power, Firment says. “While many technologists in 2023 will be focused on investments in multicloud tools like Kubernetes and Terraform, leaders will be focused on investing in the multicloud fluency of their workforce.”

6. The rise of FinOps and cloud cost optimization

Cloud FinOps offers a governance and strategic framework for organizations to manage and optimize their cloud expenditures transparently and effectively.

“By implementing a holistic FinOps strategy, an organization can drive financial accountability by increasing the visibility of cloud spending across the organization, reducing redundant services, and forecasting future cloud expenditures, allowing for more accurate planning,” says Douglas Vargo, vice president, emerging technologies practice lead at IT and business services firm CGI. “Driving more visibility and fiscal accountability around cloud costs will enable organizations to refocus that spending on innovation initiatives and realize more business value for their cloud investments.”

Organizations that effectively deploy FinOps governance and strategies will reduce cloud costs by as much as 30%, Vargo predicts, enabling them to re-invest those savings into innovation initiatives. “An effectively executed FinOps framework will improve the ROI of cloud spend and open up funding for other expenditures such as increased innovation funding,” he adds.
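To make that arithmetic concrete, here is a minimal sketch of the kind of tag-based spend rollup a FinOps practice starts with; the team names and dollar figures are illustrative, and the 30% rate simply applies Vargo’s upper-bound estimate:

```python
# Hypothetical FinOps rollup: tag-based spend visibility plus a simple savings estimate.
monthly_spend = {  # illustrative figures, USD, grouped by owner tag
    "platform": 120_000,
    "data-eng": 80_000,
    "untagged": 40_000,  # spend with no owner -- a common FinOps red flag
}

total = sum(monthly_spend.values())
untagged_share = monthly_spend["untagged"] / total
savings_at_30pct = total * 0.30  # the "as much as 30%" upper bound quoted above

print(f"Total monthly spend: ${total:,}")
print(f"Untagged share: {untagged_share:.0%}")
print(f"Potential monthly savings at 30%: ${savings_at_30pct:,.0f}")
```

In practice the tags would come from a cloud provider’s billing export rather than a hand-written dict, but the accountability logic is the same: make every dollar attributable, then target the unowned and redundant spend first.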

7. Hyperscalers adjust to slower growth

The three major hyperscalers — Amazon Web Services, Microsoft Azure, and Google Cloud Platform — have grown rapidly over the past few years, observes Bernie Hoecker, partner and enterprise cloud transformation leader with technology research and advisory firm ISG. Meanwhile, many enterprises have accelerated their digital transformation to meet the emerging demands created by remote work teams, as well as to provide customers with improved digital experiences.

“In many cases, however, enterprises overinvested in IT and cloud capabilities,” he notes, “and they’re now focused on optimizing the investments they’ve made rather than moving new workloads to the cloud.”

Yet enterprises weren’t the only overinvestors. “The Big Three hyperscalers also are going through some rightsizing after each of them overhired during the pandemic, and are now forced to deal with some bloat in their workforce,” Hoecker says. He reports that Amazon recently cut 9,000 more jobs in addition to the 18,000 it announced in January. Microsoft laid off 10,000 employees in January, and Google, among other cost-cutting measures, has dismissed 12,000 staffers.


From quality control to revenue growth and workplace safety, digital transformation strengthens almost every aspect of the business. Those who fail to keep up with the pace of digital technology run serious risks of falling behind. 

To fully leverage digital transformation, businesses today are turning to edge computing. Edge computing allows you to process data at the edge of the network, closer to the source of the data, instead of sending it to a centralized location like a datacenter or the cloud. By keeping sensitive data on-site, edge computing enables faster data processing, reduces bandwidth usage, and enhances data security. Yet, edge deployment remains a complex endeavor, with numerous choices and decisions to be made, the benefits of which are often unclear. 
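The bandwidth point is easiest to see with a sketch: an edge node can aggregate raw sensor readings locally and forward only a compact summary (plus any anomalies) upstream. The function name and threshold below are illustrative, not from any particular edge platform:

```python
# Illustrative edge-side aggregation: summarize raw readings locally,
# then send one small payload to the central cloud instead of every sample.
from statistics import mean

def summarize(readings, alert_threshold=90.0):
    """Reduce a batch of raw sensor readings to a compact payload."""
    return {
        "count": len(readings),
        "mean": round(mean(readings), 2),
        "max": max(readings),
        "alerts": [r for r in readings if r > alert_threshold],
    }

raw = [71.2, 70.8, 72.5, 95.1, 70.9]  # e.g., one batch of temperature samples
payload = summarize(raw)
print(payload)  # a few summary fields upstream instead of the full raw stream
```

The same pattern scales down bandwidth dramatically when a site produces thousands of readings per second, and it keeps the raw data on-site, which is the data-security benefit mentioned above.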

Reliable, secure, and customizable network solutions are at the heart of edge computing success. With constant improvements taking shape across wired, 4G, and 5G standards, and with public and private options also available, what are the real benefits of each and how do they compare against one another? We are also hearing more about private 5G networks at the edge. What advantages do they provide for edge applications, and are they the best short- and long-term choice for every enterprise? These are just a few of the questions that arise when determining how best to simplify and maximize edge investments.

Bill Pfeifer, who leads Messaging and Thought Leadership for the Dell Technologies Edge team, and Stephen Foster, a product manager within the Edge business unit at Dell Technologies, recently discussed these key issues. 

Foster has significant knowledge of connectivity technologies and how they can enhance business outcomes, from both an IT and telecom network perspective. 

Meanwhile Pfeifer’s focus is on distilling the complexity of the edge into simple messages that are immediately useful so that customers can succeed as they build, grow and simplify their edges.

Bill: We have all been seeing non-stop hype throughout the industry about the magic of 5G and how it’s going to change the world, but often, the conversation seems to follow the track that “it will be faster than 4G.” While faster is a good step in connectivity evolution, it is hardly a revolution. Can you explain what 5G is all about, beyond just being faster? 

Stephen: 5G offers many advantages over its predecessor, 4G. Beyond just speed, 5G networks are driving innovation and new applications across a wide range of industries. 

Bill: So those benefits apply to 5G across the board, but I am also hearing lots about how private 5G is on the rise. Also, private 4G, so – how is private wireless notably different or better for enterprise edge deployments?

Stephen: Private wireless has been growing over the years, beginning with 4G, and now moving to 5G. In the early days, it may have just been limited to a shared or dedicated radio serving a single enterprise but, with private 5G, we have a couple of options for keeping the data entirely within the enterprise to take full advantage of the very low latency and data security. 

One option enabled by 5G networking is the deployment of MEC. MEC stands for Multiaccess Edge Computing, which is a 5G-defined technology that enables computing resources to be located at the edge of a network, closer to the end users and devices at the enterprise location. MEC is often deployed as part of the communication service provider’s public 5G network, but it allows for private processing of data either on-premises at an enterprise or nearby on dedicated hardware.

The second option is Standalone Private wireless. Here, the complete cellular network is deployed within the enterprise location. There is no connection to the public network. The management of the network is completely under control of the enterprise including SIM management. Standalone Private solutions started with 4G, but most new ones are using 5G technology.

In either case, Private 5G Wireless networks enable support for challenging use cases and business processes that are restricted in public networks.

Bill: We are also seeing tech refreshes across other connectivity types – wired connections are faster than ever, Bluetooth is great for short-range connectivity, and NFC (near field communication) means we rarely have to swipe our credit cards anymore. But related more closely to this topic is Wi-Fi 6. Enterprises the world over have Wi-Fi installed, and it sounds like Wi-Fi 6 is a notable enhancement, too. Can you tell us what to expect there?

Stephen: If you have things or people that are moving around the enterprise or if they are difficult or expensive to reach, then the choices come down to Wi-Fi or private 5G wireless. Wi-Fi 6 shares many technology attributes with 5G. Typically, private 5G wireless complements Wi-Fi – they both have a role within the enterprise.

The main differences between private 5G and Wi-Fi include:

Range: Private 5G has a much wider range than Wi-Fi, which means that it can provide connectivity over a much larger area. Private 5G can cover an area of several kilometers, while Wi-Fi is typically limited to a range of a few hundred feet. Serving a large area with Wi-Fi will require many access points to operate and maintain. 

Capacity: Private 5G has much greater capacity than Wi-Fi, which means that it can support a much larger number of devices and data-intensive applications. Private 5G can support up to one million devices per square kilometer, while Wi-Fi is typically limited to a few hundred devices per access point.

Security: Private 5G provides stronger security than Wi-Fi, with better encryption and authentication mechanisms. This is particularly important for enterprises that are dealing with sensitive data or operating in high-security environments.

Reliability: Private 5G is more reliable than Wi-Fi, with better coverage and fewer dropped connections. This is achieved through technologies such as beamforming and network slicing, which enable the network to allocate resources more efficiently.
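For readers who want to operationalize these trade-offs, the four criteria above can be read as a rough decision rule. A hypothetical sketch follows; the function name and thresholds are illustrative only, not drawn from any standard:

```python
# Hypothetical decision helper reflecting the range/capacity/security/reliability
# trade-offs discussed above; thresholds are illustrative, not normative.
def recommend_network(coverage_km2: float, devices: int,
                      mission_critical: bool, outdoor: bool) -> str:
    if mission_critical or outdoor:
        return "private 5G"  # licensed spectrum, predictable performance
    if coverage_km2 > 0.1 or devices > 500:
        return "private 5G"  # beyond practical Wi-Fi range/capacity per site
    return "Wi-Fi"           # cost-effective for small indoor deployments

print(recommend_network(0.02, 120, mission_critical=False, outdoor=False))   # Wi-Fi
print(recommend_network(2.0, 10_000, mission_critical=True, outdoor=True))   # private 5G
```

A real assessment would also weigh spectrum availability, device ecosystem, and cost, as the rest of the conversation makes clear.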

Bill: So, bringing those points together, do you have thoughts on when a typical enterprise might want private wireless vs. Wi-Fi? Can you describe a few scenarios where someone might prefer one over the other and explain why, so that we can start to understand how they really compare?

Stephen: In general, the drivers for choosing private 4G/5G wireless over Wi-Fi are the need for a more secure solution; more predictable performance, including lower latency, higher throughput, and better coverage; and the need for large, bounded geographical coverage areas, like factories, shipping ports, airports, and mining areas. Many of these areas, especially outdoor ones, are incompatible with Wi-Fi. Even indoor areas like large factories or warehouses cannot always be predictably reached by Wi-Fi. Think of reliable connections to automated guided vehicles within a factory, or locating shipping containers at a port.

Another difference to consider is the interference in the spectrum. Wi-Fi operates in an unlicensed spectrum and is often prone to interference. Private 5G wireless operates in licensed spectrum and is very well suited for mission-critical applications. Coverage, reliability, and predictability are a few of the major factors influencing the choice of private 5G wireless. 

Bill: To wrap up this conversation, could you give a quick summary of key considerations that folks should be making? Let’s say we’re talking to a typical enterprise organization that has a legacy wired network and is looking to move to Wi-Fi, or public 5G, or private 4G/5G – I’m sure none of them are a one-size-fits-all solution, so what are the key points to consider when trying to decide which technology to use?

Stephen: When enterprises are considering an upgrade from a legacy wired network, they should consider several factors, including coverage, bandwidth and speed, latency, security, cost, and customization. To choose the right wireless technology, enterprises must weigh the advantages and disadvantages of Wi-Fi and private 5G. Wi-Fi is the most cost-effective option, but it may not be the most secure and may not offer adequate range. 

On the other hand, private 5G provides wider coverage, higher speeds, flexibility in coverage, and strong security features. Private 5G networks offer the lowest latency, which is essential for applications that require real-time response. Ultimately, the choice between Wi-Fi and private 5G depends on the specific needs and requirements of the enterprise.

It is important to note that the networking options of Private 5G and Wi-Fi are just one piece of the puzzle in achieving a total solution for the enterprise. Edge computing serves as a platform to support multiple enterprise applications, like computer vision, digital twins, AR/VR, and more. These applications play a crucial role in supporting various business outcomes, such as workforce productivity, operational efficiency, quality improvements, cost savings, workplace safety, and sustainability. 

Edge computing can help enterprises process data closer to the source, reducing latency and improving response times. By combining the capabilities of edge computing with the benefits of private 5G or Wi-Fi, we can build a comprehensive solution that meets specific needs for today while putting in place a robust foundation that supports the digital transformation journey.

And of course, Dell Technologies and Intel are always collaborating to help our customers succeed across a broad range of workloads at the edge, working with industry leaders and the open-source community to produce powerful, comprehensive solutions that are optimized to meet our customers’ needs today with the flexibility to address what comes next. Public and private wireless networks powered by Dell and Intel technologies help enterprises capitalize on 5G, MEC and edge computing, to further improve how businesses operate today, and tomorrow.

Bill: Great information, Stephen! Thanks for your time today, and for sharing your perspective and expertise. 

Learn more about how Dell helps enterprises build a simpler edge with Private 5G and more at www.dell.com/edge

***

Bill currently leads Messaging and Thought Leadership for the Dell Technologies Edge team. His focus is on distilling the complexity of the edge into simple messages that are immediately useful so that customers can succeed as they build, grow and simplify their edges.

Edge Computing

Vlad Sejnoha, Partner at Glasswing Ventures, former CTO & SVP R&D at Nuance, and Kleida Martiro, Principal at Glasswing Ventures are contributing authors.

Generative AI (Artificial Intelligence) and its underlying foundation models represent a paradigm shift in innovation, significantly impacting enterprises exploring AI applications. For the first time, because of generative AI models, we have systems that understand natural language at a near-human level and can generate and synthesize output in various media, including text and images. Enabling this technology are powerful, general foundation models that serve as a basis or starting point for developing other, more specialized generative AI models. These foundation models are trained on vast amounts of data. When prompted with natural language instructions, one can use these learnings in a context-specific manner to generate an output of astonishing sophistication. An analogy to generative AI used to create images may be the talented artist who, in response to a patron’s instructions, combines her lifelong exposure to other artists’ work with her inspiration to create something entirely novel.

As news cycles eclipse one another about these advancements, it may seem like generative AI sprang out of nowhere for many business and executive leaders. Still, the reality is that these new architectures are built on approaches that have evolved over the past few decades. Therefore, it is crucial to recognize the essential role the underlying technologies play in driving advancement, enterprise adoption, and opportunities for innovation.

How we got here

The most notable enabling technologies in generative AI are deep learning, embeddings, transfer learning (all of which emerged in the early to mid-2000s), and neural net transformers (invented in 2017). The ability to work with these technologies at an unprecedented scale – both in terms of the size of the model and the amount of training – is a recent and critically important phenomenon.

Deep learning emerged in academia in the early 2000s, with broader industry adoption starting around 2010. A subfield of machine learning, deep learning trains models for various tasks by presenting them with examples. It is applied to a particular type of model called an artificial neural net, which consists of layers of interconnected simple computing nodes called neurons. Each neuron processes information passed to it by other neurons and then passes the results on to neurons in subsequent layers. The parameters of the neural net are adjusted using the examples presented to the model in training; the model can then predict or classify new, previously unseen data. For instance, a model trained on thousands of pictures of dogs can be used to detect dogs in previously unseen images.
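
As a toy illustration of these ideas (not any specific production model), the sketch below trains a tiny two-layer neural net in plain NumPy on the classic XOR problem: layers of neurons, parameters adjusted from labeled examples, then classification.

```python
import numpy as np

# Hypothetical toy example: a two-layer neural net learning XOR by
# gradient descent, illustrating "layers of neurons whose parameters
# are adjusted using training examples."
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # training examples
y = np.array([[0], [1], [1], [0]], dtype=float)              # target labels (XOR)

W1 = rng.normal(size=(2, 16)); b1 = np.zeros(16)  # hidden layer of 16 neurons
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)   # single output neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):               # training loop
    h = np.tanh(X @ W1 + b1)          # hidden activations
    p = sigmoid(h @ W2 + b2)          # predicted probabilities
    dp = p - y                        # error signal (cross-entropy gradient)
    dW2 = h.T @ dp; db2 = dp.sum(0)   # backpropagate the error to each layer...
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad           # ...and nudge every parameter slightly

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel())                  # predictions for the four input patterns
```

After training, the net reproduces the XOR pattern it was shown; real deep-learning models work the same way, just with billions of parameters and vast datasets.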

Transfer learning emerged in the mid-2000s and quickly became popular. It is a machine-learning technique that uses knowledge gained on one task to improve a model’s performance on another. An analogy for this powerful technique is learning one of the Romance languages, like Spanish: due to their similarities, one may then find it easier to learn another Romance language, like Italian. Transfer learning is essential in generative AI because it allows a model to carry knowledge from one task over to a related task. The technique has proven groundbreaking because it mitigates the data-scarcity challenge. Transfer learning can also improve the diversity and quality of generated content. For example, a model pre-trained on a large dataset of text can be fine-tuned on a smaller dataset specific to a particular domain or style, allowing it to generate more coherent and relevant text for that domain or style.
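
A minimal sketch of the idea, with entirely made-up data and a stand-in "pre-trained" feature extractor (no real model is involved): the layers learned on a large source task are frozen, and only a small new output head is fitted on a handful of target-task examples.

```python
import numpy as np

rng = np.random.default_rng(1)

def pretrained_features(x):
    """Stand-in for layers learned earlier on a large source dataset (kept frozen)."""
    W_frozen = np.array([[1.0, -1.0], [0.5, 2.0]])  # weights "learned" on the source task
    return np.tanh(x @ W_frozen)

# Tiny target-task dataset: far too small to train a full model from scratch.
X_small = rng.normal(size=(20, 2))
y_small = (X_small[:, 0] > X_small[:, 1]).astype(float)

# "Fine-tune" only the new head: a logistic regression on the frozen features.
feats = pretrained_features(X_small)
w = np.zeros(2); b = 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))       # predicted probabilities
    w -= 0.1 * feats.T @ (p - y_small) / len(y_small)  # gradient step on head only
    b -= 0.1 * float(np.mean(p - y_small))

acc = np.mean(((feats @ w + b) > 0) == (y_small == 1))
print(f"target-task accuracy from only 20 examples: {acc:.2f}")
```

Because the reusable features are already in place, only a two-parameter head needs fitting, which is why transfer learning works even when target-task data is scarce.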

Another technique that became prevalent in the early to mid-2000s was embedding: a way to represent data, most frequently words, as numerical vectors. Consumer-facing technologies such as ChatGPT, which demonstrate what feels like human-like reasoning, are a great example of the power of word embeddings. Word embeddings are designed to capture the semantic and syntactic relationships between words. For example, the vector representations of the words “dog” and “lion” would be much closer to each other than either is to the vector for “apple,” because “dog” and “lion” have considerable contextual similarities. In generative AI, this enables a model to understand the relationships between words and their meaning in context, making it possible for models like ChatGPT to produce original text that is contextually relevant and semantically accurate.
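
To make the geometry concrete, here is an illustrative sketch with tiny hand-made vectors standing in for learned embeddings (real embeddings have hundreds of dimensions and are learned from text, not written by hand):

```python
import numpy as np

# Hypothetical 3-dimensional "embeddings"; the values are invented purely
# to illustrate that related words sit closer together in vector space.
embeddings = {
    "dog":   np.array([0.9, 0.8, 0.1]),
    "lion":  np.array([0.8, 0.9, 0.2]),
    "apple": np.array([0.1, 0.1, 0.9]),
}

def cosine_similarity(a, b):
    """Standard similarity measure: near 1.0 means same direction, near 0.0 unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["dog"], embeddings["lion"]))   # high: similar contexts
print(cosine_similarity(embeddings["dog"], embeddings["apple"]))  # much lower
```

The "dog"/"lion" pair scores far higher than "dog"/"apple", which is exactly the closeness-in-vector-space relationship the article describes.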

Embeddings proved immensely successful as a representation of language and fueled an exploration of new, more powerful neural net architectures. One of the most important of such architectures, the “transformer,” was developed in 2017. The transformer is a neural network architecture designed to process sequential input data, such as natural language, and perform tasks like text summarization or translation. Notably, the transformer incorporates a “self-attention” mechanism. This allows the model to focus on different parts of the input sequence as needed to capture complex relationships between words in a context-sensitive manner. Thus, the model can learn to weigh the importance of each part of the input data differently for each context. For example, in the phrase, “the dog didn’t jump the fence because it was too tired,” the model looks at the sentence to process each word and its position. Then, through self-attention, the model evaluates word positions to find the closest association with “it.” Self-attention is used to generate an understanding of all the words in the sentence relative to the one we are currently processing, “it.” Therefore, the model can associate the word “it” with the word “dog” rather than with the word “fence.”
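
The core computation can be sketched in a few lines. This is a single attention head with no learned projection matrices (real transformers learn separate query, key, and value projections and stack many heads and layers), so it is a simplified illustration, not the full architecture:

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over token vectors X of shape (seq_len, d)."""
    d = X.shape[1]
    scores = X @ X.T / np.sqrt(d)                  # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax: one distribution per token
    return weights @ X, weights                    # each output mixes in context from the whole sequence

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 4))    # e.g. 5 tokens represented as 4-dim vectors
out, attn = self_attention(tokens)
print(attn.shape)                   # (5, 5): for each token, weights over all tokens
```

In a trained transformer, the row of attention weights computed for a token like “it” would concentrate on the token it refers to (“dog” rather than “fence”), which is how self-attention resolves such associations.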

Progress in deep learning architectures, efficient distributed computation, and training algorithms and methodologies has made it possible to train ever-bigger models. As of this writing, the largest model with published details is OpenAI’s GPT-3, which consists of 175 billion parameters; parameter information for GPT-4 is not yet available. GPT-3 is also noteworthy for the sheer quantity of text it has “absorbed” in training: some 45TB of data drawn from large swaths of the public internet and other forms of human expression.

While the combined use of techniques like transfer learning, embedding, and transformers for Generative AI is evolutionary, the impact on how AI systems are built and on the adoption by the enterprise is revolutionary. As a result, the race for dominance of the foundation models, such as the popular Large Language Models (LLMs), is on with incumbent companies and startups vying for a winner-take-all or take-most position.

While the capital requirements for foundation models are high, favoring large incumbents in technology or extremely well-funded startups (read billions of dollars), opportunities for disruption by Generative AI are deep and wide across the enterprise. 

Understanding the technology stack

To effectively leverage the potential of generative AI, enterprises and entrepreneurs should understand how its technology layers are categorized, and the implications each has on value creation.

The most basic way to understand the technologies around generative AI is to organize them in a three-layer technology “stack.” At the bottom of this stack are the foundation models, which represent a transformational wave in technology analogous to personal computing or the web. This layer will be dominated by entrenched incumbents such as Microsoft, Google, and Meta, rather than new startup entrants, not too different from what we saw with the mobile revolution or cloud computing. There are two critical reasons for this. First, these companies operate at enormous scale and have the balance sheets to match. Second, today’s incumbents have cornered the primary resources that fuel foundation models: compute and data.

At the top of this stack are applications – software designed for a specific task or use case. Next is the “middle layer,” where enabling technologies power the applications at the top and extend the capabilities of foundation models. For example, MosaicML allows users to build their own AI on their data by turning data into a large-scale AI model that efficiently runs machine learning workloads on any cloud in a user’s infrastructure. Notably, an in-depth assessment of the middle layer is missing from this discussion: making predictions about this part of the stack so early in the cycle is fraught with risk. While free tools from incumbents seeking to drive adoption of their foundation models could commoditize the middle layer, cross-platform or cross-foundation-model tools that provide added capabilities and optimize for the model best fit for a use case could become game-changers.

In the near term, preceding further development in the enabling products and platforms at the middle layer, the application layer represents the bulk of opportunities for investors and builders in generative AI. Of particular interest are user-facing products that run their proprietary model pipelines, often in addition to public foundation models. These are end-to-end applications. Such vertically integrated applications, from the model to the user-facing application layer, represent the greatest value as they provide defensibility. The proprietary model is valuable because continuously re-training a model on proprietary product data creates defensibility and differentiation. However, this comes at the cost of higher capital intensity and creates challenges for a product team to remain nimble.

Use cases in generative AI applications

Proper consideration of near-term application-layer use cases and opportunities for generative AI requires knowledge of the incremental value of data or content and a complete understanding of the implications of imperfect accuracy. Therefore, near-term opportunities will be those with a high value of incremental data or content, where more data or content has economic value to the business and low consequences of imperfect accuracy.

Additional considerations include the structure of the data for training and generation and the role of human-in-the-loop, an artificial intelligence system in which a human is an active participant and thus can check the work of the model.

Opportunities for entrepreneurs and enterprises in generative AI lie in use cases where data is very structured, such as software code. Additionally, human-in-the-loop can mitigate the risk of the mistakes an AI can make.

Industry verticals and use cases with these characteristics represent the initial opportunity with generative AI. They include:

Content creation: Generative AI can improve creativity, rate of content creation, and content quality. The technology can also be leveraged to analyze the performance of different types of content, such as blogs or social media ads, and provide insight into what is resonating with the audience.

Customer service and support: Generative AI can augment and automate customer service and support through chatbots or virtual assistants. This helps businesses provide faster and more efficient service to their customers while reducing the cost of customer service operations. By pre-training on large amounts of text data, foundation models can learn to accurately interpret customer inquiries and provide more precise responses, leading to improved customer satisfaction and reduced operating costs. Differentiation among new entrants leveraging generative AI will largely depend on their ability to use fine-tuned smaller models which enable a better understanding of industry-specific language, jargon, or common customer questions as a mechanism to deliver tailored support that meets the needs of each customer and to continuously refine products for more accurate and effective outcomes.

Sales and marketing: AI can analyze customer behavior and preferences and generate personalized product recommendations. This can help businesses increase sales and customer engagement. In addition, fine-tuned models can help sales and marketing teams target the right customers with the right message at the right time. By analyzing data on customer behavior, the model can predict which customers are most likely to convert and which messaging will be most effective. And that becomes a strong differentiator for a new entrant to capture market share.

Software and product development: Generative AI will simplify the entire development cycle from code generation, code completion, bug detection, documentation, and testing. Foundation models allow developers to focus on design and feature building rather than correcting errors in the code. For instance, new entrants can provide AI-powered assistants that are fine-tuned to understand programming concepts and provide context-aware assistance, helping developers navigate complex codebases, find relevant documentation, or suggest code snippets. This can help developers save time, upskill their abilities, and improve code quality.

Knowing the past to see the future

While we are still in the early days of the immense enterprise and startup value that generative AI and foundation models will unlock, everyone from entrepreneurs to C-suite decision-makers benefits from understanding how we arrived at where we are today. Moreover, understanding these concepts helps with realizing the potential for scale, reframing, and growing business opportunities. Knowing where the opportunities lie means making smart decisions about what promises to be an inspiring future ahead.

Artificial Intelligence, Enterprise, Startups

Despite a slow-down in the latter half of the year, 2022 saw mergers or acquisitions involving 1,837 enterprise software companies, up 8% on the previous year.

That’s according to Hampleton Partners, which noted that the share of those acquisitions made by private equity firms fell to 36%, the lowest for six years. It blamed the decline on the rise in interest rates, which discourages the debt-leveraged deals private equity firms specialize in.

Although interest rates remain high, private equity firms were behind two of the biggest acquisitions to close so far in 2023: Silver Lake’s $12.5 billion purchase of Qualtrics, and Thoma Bravo’s $8 billion deal for Coupa Software.

Based on the number of deals through September 2022, deal advisor Hampleton Partners forecast that merger activity for the full year would be 19% higher than in 2021, with deals also growing larger, and valuations as a multiple of earnings also increasing.

A survey published by law firm Morrison Foerster in November 2022 found that 80% of private equity firms and 71% of corporates expect tech M&A deal volumes to increase over the next 12 months. AI will be the hottest market sector for deals, according to 51% of respondents (up from 3% last year), with cloud favored by 31% (down from 54% last year).

There’s also increasing demand for IT security consulting companies, Hampleton Partners said, and our colleagues at CSOonline have the rundown on cybersecurity M&A activity.

The top enterprise technology M&A deals of late 2022 and early 2023 included acquisitions of Qualtrics, Coupa Software, VMware, and Citrix. For CIOs, these deals can disrupt strategic rollouts, spell a need to pivot to a new solution, signal sunsetting of essential technology, provide new opportunities to leverage newly synergized systems, and be a bellwether of further shifts to come in the IT landscape. Keeping on top of activity in this area can also help your company make the most of emerging opportunities and steer clear of issues that often arise when vendors combine.

Here, CIO.com rounds up some of the most significant tech M&As of the last 12 months that could impact IT.

April

Ciphr adds diversity with Marshall acquisition

Marshall E-learning, a provider of diversity and inclusion training, is now part of Ciphr, a UK-based HR SaaS platform. Ciphr expects the deal will enable it to expand its existing online learning offering.

March

Silver Lake buys Qualtrics for $12.5 billion

Qualtrics has changed hands again. SAP acquired it for $8 billion in 2018, but the graft didn’t take, and SAP soon sold a minority stake. Now Silver Lake and the Canada Pension Plan Investment Board have snapped up the whole company. SAP will remain a go-to-market partner of Qualtrics and service their joint customers.

Quantive buys OKR consulting firm

Following its acquisition of consulting firm AuxinOKR, strategy execution platform Quantive is rolling out a new consulting division to support enterprises adopting its tools for measuring business results.

Quantexa is loving the Aylien

Quantexa has acquired Aylien, a Dublin-based natural language processing firm specializing in risk management and market insight. It will use Aylien’s NLP skills to enhance its AI-based decision intelligence tools for the finance industry.

Capita lays off employment screening business

Matrix SCM, a British IT staffing agency, has acquired Security Watchdog, a provider of employment screening services, from Capita, the giant IT services business. It’s part of a broader sell-off for Capita, which also let go of three other human resources companies in March: Capita Resourcing, HR Solutions, and ThirtyThree. Capita is selling non-core businesses to reduce its debts, and refocusing on public sector and customer experience work.

Key secures Rocket

Mainframe software developer Rocket Software has bought mainframe security specialist Key Resources. The deal will help Rocket better secure its software, and offer additional security-related services to the mainframe users it works with.

February

Thoma Bravo manages $8 billion spend on Coupa Software

In an $8 billion deal, investment firm Thoma Bravo acquired Coupa Software, a provider of business spend management tools. Abu Dhabi Investment Authority has taken a minority stake. Thoma Bravo also owns business payments company Bottomline, open finance platform Solifi, data management tool Talend, and a raft of security and identity management software companies.

Arm sells software arm

Processor designer Arm has sold Forge, its suite of software development tools for high-performance computing, to Linaro, which develops and supports a range of other Arm-specific software for enterprises. Arm originally acquired Forge in 2016 to support its entry into the HPC market.

Accenture buys Morphus, adds new South-American cybersecurity center

With its acquisition of Brazilian cybersecurity and threat intelligence provider Morphus, Accenture has added a new site from which to supervise its offering of managed security services and advanced analytics. The cyber fusion center in Fortaleza, Brazil, was previously Morphus Labs.

Broadcom and VMware kick merger can further down road

Broadcom first offered to buy VMware in May 2022, and VMware’s shareholders agreed to the deal in November, but still the deal isn’t done. Now the two companies have formally given themselves until May 2023 to close the acquisition, and Broadcom representatives say the deal will close by the end of the company’s fiscal year, on October 30, 2023.

January

Dell buys Cloudify

After selling off its stake in VMware, Dell is moving back into the cloud software business with the acquisition of Cloudify. The Israeli startup has developed a cloud orchestration platform to help devops teams automate provisioning.

McKinsey buys MLops platform Iguazio

McKinsey is adding to its stable of machine learning experts with the acquisition of software developer Iguazio. In time, it plans to integrate it into QuantumBlack, a McKinsey business unit that has specialized in AI for the last decade. Iguazio is the developer of a commercial MLops platform and two open-source tools: MLRun, for ML pipeline orchestration, and Nuclio, which offers real-time serverless functions for automating application deployment.

HPE buys Pachyderm to automate ML development

Pachyderm, a developer of data pipeline automation tools used in training machine learning models, is now part of Hewlett-Packard Enterprise. The company’s software will become part of the HPE Machine Learning Development Environment.

Quantum fusion: IonQ ties up with Entangled Networks

With each of the handful of companies developing quantum computers betting on a different architecture, a cross-platform quantum computing operating system is still a way off. That’s why quantum hardware companies like IonQ are developing their own software tools too. To speed up the process, IonQ has acquired Entangled Networks, a developer of software optimization tools for quantum computers, and is building a new Canadian subsidiary around the software team.

Two ServiceNow partners tie the knot

ServiceNow solutions provider Thirdera has acquired another ServiceNow partner, SilverStorm Solutions, to expand its reach in Europe. Thirdera also operates in South America. Combined, the two ServiceNow Elite-level partners have over 900 employees.

For last year’s mergers and buyouts, see The biggest enterprise technology M&A deals of 2022.

Mergers and Acquisitions, Technology Industry

Generative AI (GenAI) is taking the world by storm. During my career, I’ve seen many technologies disrupt the status quo, but none with the speed and magnitude of GenAI. Yet, we’ve only just begun to scratch the surface of what is possible. Now, GenAI is emerging from the consumer realm and moving into the enterprise landscape. And for good reason; GenAI is empowering big transformations.

My previous article covered how an enterprise’s unique needs are best met with a tailored approach to GenAI. Doing so on the front end will avoid re-engineering challenges later. But how can enterprises use GenAI and large language models today? From optimizing back-office tasks to accelerating manufacturing innovations, let’s explore the revolutionary potential of these powerful AI-driven technologies in action across various industries.

Enterprise Use Cases for GenAI

GenAI fuels product development and innovation

In product development, GenAI can play a crucial role in fueling the ideation and design of new products and services. By analyzing market trends, customer feedback and competitors’ offerings, AI-driven tools can generate potential product ideas and features, offering unique insights that help businesses accelerate innovation. For instance, automotive manufacturers can use GenAI to design lighter-weight components — via material science innovations and novel component designs — that help make vehicles more energy efficient.

GenAI crafts marketing campaigns

Large language models can produce highly personalized marketing campaigns based on customer data and preferences. By analyzing purchase history, browsing behavior and other factors, these models generate tailored messaging, offers and promotions for individual customers to increase engagement, conversion rates and customer loyalty. Gartner estimates that 30% of outbound marketing messages from enterprise organizations will be AI-driven by 2025, increasing from less than 2% in 2022.

GenAI enhances customer support

GenAI can provide instant, personalized responses to customer queries in an incredibly human-like manner. Large language models can offer relevant solutions, make product recommendations and engage in natural-sounding conversations. As a result, customers can gain faster response and resolution, and organizations can free up human agents to focus on more complex customer issues. For example, Amazon uses GenAI to power Alexa and its automated online chat assistant, both of which are available 24/7/365.

GenAI optimizes back-office tasks 

Generative AI models can automate and optimize various internal processes, such as drafting reports, creating standard operating procedures, and crafting personalized emails. Streamlining these tasks can reduce operational costs, minimize human error and increase overall efficiency.

GenAI writes software code

Through a technique known as neural code generation, GenAI enhances software development processes by automating code generation, refactoring and debugging. GenAI models can produce code snippets and suggest relevant libraries within the context and requirements of specific programming tasks. In this way, GenAI can help increase developer productivity, reduce errors and speed up development while providing more secure and reliable software. 

GenAI’s Powerful Potential

These diverse use cases demonstrate the immense potential of Generative AI and large language models to revolutionize the way enterprises operate—and no industry is exempt. Harnessing these cutting-edge technologies will usher in transformative ways for organizations to enhance customer experiences, drive innovation throughout operations and gain new levels of competitive differentiation. 

Because its capabilities are so revolutionary, AI will create a widening gap between organizations that embrace its transformative power and those that do not. Our own research shows that AI leaders are already advantaged over late adopters. While the urgency to leverage AI varies by company and industry, IDC, in that same research study, posits that we have reached the point where every organization must have an AI approach in place to stay viable. Thus, exploring AI and GenAI today, before the yawning gap grows, is a crucial step for organizations that want to secure their future.

Learn more.

***

To help organizations move forward, Dell Technologies is powering the enterprise GenAI journey. With best-in-class IT infrastructure and solutions to run GenAI workloads and advisory and support services that roadmap GenAI initiatives, Dell is enabling organizations to boost their digital transformation and accelerate intelligent outcomes. 

The compute required for GenAI models has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today. Intel’s commitment to the democratization of AI and sustainability will enable broader access to the benefits of AI technology, including GenAI, via an open ecosystem. Intel’s AI hardware accelerators, including new built-in accelerators, provide performance and performance per watt gains to address the escalating performance, price and sustainability needs of GenAI.

Artificial Intelligence

The CIO Digital Enterprise Forum will be held in London on Thursday 11th May at Prospero House, London Bridge. Amit Sen from the United Nations Refugee Agency and Howard Pyle from Experience Futures will host the opening keynote. They will focus on the importance of organizations linking analytics with social impact goals and standards of inclusion.

Only a third of companies are currently seeing social impact as a core strategy, despite many being active in social responsibility. Understanding the organization’s target audience and defining ethical principles that reflect their needs is crucial when applying generative AI. Organizations need to plan for generalized standards for reporting on the users they’re serving through AI-driven experiences and what the impact is. Howard Pyle shares that “CIOs will need to play a leading role in guiding their organizations toward creating personalized and inclusive experiences that align with their overall KPIs and social impact goals. By developing inclusive product and experience strategies that are tailored to each user’s needs and abilities, organizations can ensure that all stakeholders receive the maximum value.”

It is essential to consider the impact of generative AI on the business and individual audiences while planning for generalized reporting standards. Personalization can reduce acquisition costs and increase revenues and marketing efficiency, but CIOs and IT leaders must focus on aligning social impact and business KPIs, developing a product and experience strategy based on them, and creating custom-tuned experiences. It is critical to keep in mind the need for inclusive and ethical technology, the importance of individualized experiences, and the power of generative AI when used strategically and with clear goals in mind. By doing so, CIOs can help their organizations create experiences that meet the needs of individual users and the broader goals of the business.

The programme continues to include a panel discussion on keeping ahead of your data strategy, featuring Rashad Saab, Founder & CTO of rkbt.ai; Raj Jethwa, CTO, Digiterre; and Caroline Carruthers, Chief Executive, Carruthers & Jackson. The panel discussion will focus on the challenges of reaching data strategy goals and creating a data strategy that meets business needs and practices while allowing for future possibilities.

Discussion will focus on the human side of cybersecurity in the digital enterprise. The panel will look at human perception and social trust to digital counterparts and how to ensure cybersecurity alongside the introduction of new emerging tech. The moderator for the panel will be Michael Hill, Editor, CSO, and the panelists will include Jennifer Surujpaul, Head of IT & Digital, The Brit School; Mel Smith, CIO, Buckles Solicitors LLP; and Sue Khan, VP of Privacy and DPO, Flo Health.

In a keynote session, the Green Web Foundation, a Dutch non-profit, will discuss its efforts to increase the internet’s energy efficiency and speed its transition away from fossil fuels. The foundation stewards the largest open dataset that tracks websites running on renewables, with an open tool suite used over 3.5 billion times.

Closing the Forum will be Dy Jacqui Taylor, sharing her insight and expertise on the evolution of the digital enterprise, finding the golden thread of resilience as tech continues to change as well as how to achieve the NetZero agenda.

Along with the themes of the forum, this event preview was written using ChatGPT, with insight shared by our keynote speaker Howard Pyle. Share your thoughts on using these emerging technologies by registering here to join, the forum is free for qualified attendees and you can view the full programme here

CIO

The CIO Digital Enterprise Forum will be held in London on Thursday 11th May at Prospero House, London Bridge. Amit Sen from the United Nations Refugee Agency and Howard Pyle from Experience Futures will host the opening keynote. They will focus on the importance of organizations linking analytics with social impact goals and standards of inclusion.

Only a third of companies currently treat social impact as a core strategy, despite many being active in social responsibility. Understanding the organization’s target audience and defining ethical principles that reflect its needs is crucial when applying generative AI. Organizations need to plan for generalized standards for reporting on the users they serve through AI-driven experiences and on the impact of those experiences. Howard Pyle shares that “CIOs will need to play a leading role in guiding their organizations toward creating personalized and inclusive experiences that align with their overall KPIs and social impact goals. By developing inclusive product and experience strategies that are tailored to each user’s needs and abilities, organizations can ensure that all stakeholders receive the maximum value.”

It is essential to consider the impact of generative AI on both the business and individual audiences while planning for generalized reporting standards. Personalization can reduce acquisition costs and increase revenue and marketing efficiency, but CIOs and IT leaders must focus on aligning social impact and business KPIs, developing a product and experience strategy based on them, and creating custom-tuned experiences. It is critical to keep in mind the need for inclusive and ethical technology, the importance of individualized experiences, and the power of generative AI when used strategically and with clear goals in mind. By doing so, CIOs can help their organizations create experiences that meet the needs of individual users and the broader goals of the business.

The programme continues with a panel discussion on keeping ahead of your data strategy, featuring Rashad Saab, Founder & CTO of rkbt.ai; Raj Jethwa, CTO, Digiterre; and Caroline Carruthers, Chief Executive, Carruthers & Jackson. The panel will focus on the challenges of reaching data strategy goals and on creating a data strategy that meets current business needs and practices while allowing for future possibilities.

A further panel discussion will focus on the human side of cybersecurity in the digital enterprise, examining human perception and social trust in digital counterparts, and how to ensure cybersecurity alongside the introduction of new emerging technologies. The moderator for the panel will be Michael Hill, Editor, CSO, and the panelists will include Jennifer Surujpaul, Head of IT & Digital, The Brit School; Mel Smith, CIO, Buckles Solicitors LLP; and Sue Khan, VP of Privacy and DPO, Flo Health.

In a keynote session, the Green Web Foundation, a Dutch non-profit, will discuss its efforts to increase the internet’s energy efficiency and speed its transition away from fossil fuels. The foundation stewards the largest open dataset that tracks websites running on renewables, with an open tool suite used over 3.5 billion times.

Closing the forum, Dr Jacqui Taylor will share her insight and expertise on the evolution of the digital enterprise, on finding the golden thread of resilience as technology continues to change, and on how to achieve the net-zero agenda.

In keeping with the themes of the forum, this event preview was written using ChatGPT, with insight shared by our keynote speaker Howard Pyle. Share your thoughts on using these emerging technologies by registering here to join; the forum is free for qualified attendees, and you can view the full programme here.

CIO