Now, more than ever, global businesses have an opportunity. With people and infrastructure touching every point on the planet — and new technology empowering us to radically change the way we consume resources — we can lead the world toward a better, more sustainable future. 

That optimism stems from three core beliefs: 

We can build our business ecosystems to promote environmental stewardship, achieving ambitious goals like net zero and zero waste.

We can foster well-being and growth, as well as diversity, equity, inclusion, and accessibility (DEIA), for all our people — those who work for us and those who live in the communities we touch.

We can grow our businesses through innovation and digitization, establishing long-term prosperity without social or environmental compromise.

Learn how TCS and Microsoft are powering sustainability through innovation.  

Green IT, Retail Industry


As the generative AI bandwagon gathers pace, Nvidia is promising tools to accelerate it still further.

On March 21, CEO Jensen Huang (pictured) told attendees at the company’s online-only developer conference, GTC 2023, about a string of new services Nvidia hopes enterprises will use to train and run their own generative AI models.

When they hit the market, they’ll open up more options along the build-vs-buy continuum for CIOs called upon to support AI training workloads.

This doesn’t mean CIOs can just hand off responsibility for AI infrastructure, said Shane Rau, a research VP covering data processing chips for IDC. 

“CIOs should already understand that AI is not one-size-fits-all,” he said. “The AI solution stack varies according to how it will be used, which implies one must have intimate understanding of the AI use case — your AI workers and the end domain in which they work — and how to map the needs of the use case to the silicon, software, system hardware, and services.”

Nvidia is offering as-a-service solutions to those problems at several levels. Nvidia AI Foundations is a family of cloud services with which enterprises will be able to build their own large language models (LLMs), the technologies at the heart of generative AI systems, and run them at scale, calling them from enterprise applications via Nvidia’s APIs.

There will be three Foundations services at launch, still in limited access or private preview for now: NeMo for generating text, Picasso for visuals, and BioNeMo for molecular structures. Each offering will include pre-trained models, data processing frameworks, personalization databases, inference engines, and APIs that enterprises can access from a browser, Nvidia said.
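
Nvidia has not published the API details, so any integration code is speculative, but the pattern it describes is familiar: an enterprise application posts a prompt to a hosted model endpoint and consumes the generated output. The sketch below is a hypothetical illustration only; the URL, payload fields, and response shape are invented placeholders, not Nvidia's actual AI Foundations API.

```python
import os
import requests

# Hypothetical call to a hosted LLM endpoint. The URL, payload fields,
# and response shape are placeholders, not Nvidia's actual API.
API_URL = "https://api.example.com/v1/text/generate"  # placeholder endpoint

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['API_TOKEN']}"},
    json={
        "model": "my-tuned-llm",  # a model customized on enterprise data
        "prompt": "Summarize today's movements in energy markets.",
        "max_tokens": 200,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["text"])  # field name assumed for illustration
```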

Generative AI in action

Financial data provider Morningstar is already studying how it can use the text-based NeMo to extract useful information about markets from raw data, drawing on the expertise of its staff to tune the models, according to Nvidia.

The Picasso service will enable enterprises to train models to generate custom images, videos, and even 3D models in the cloud. Nvidia is partnering with Adobe to deliver such generative capabilities inside Adobe’s tools for creative professionals such as Photoshop and After Effects.

Nvidia is seeking to clean up graphical generative AI’s reputation for playing fast and loose with the rights of the artists and photographers on whose works the models are trained. There are concerns that using such models to create derivative work could expose enterprises to lawsuits for breach of copyright. Nvidia hopes to allay those concerns by striking a licensing deal with stock image library Getty Images, which says it will pay royalties to artists on revenue generated by models trained on the works in its database.

Nvidia is working with another library, Shutterstock, to train Picasso to create 3D models in response to text prompts based on licensed images in its database. These 3D designs will be available for use in industrial digital twins running on Nvidia’s Omniverse platform.

The third AI Foundations service, BioNeMo, deals not in words and images but in molecular structures. Researchers can use it to design new molecules and predict their behavior. Nvidia is targeting it at pharmaceutical firms for drug discovery and testing, fine-tuning it with proprietary data. It named biotechnology company Amgen as one of the first users of the service.

AI infrastructure additions

Nvidia’s AI Foundations software services will run in DGX Cloud, a new infrastructure-as-a-service offering.

DGX is the name of Nvidia’s supercomputer in a box, one of the first of which was delivered to OpenAI, developer of ChatGPT, in 2016. Half of all Fortune 100 companies now have their own DGX supercomputers, according to Nvidia.

Cloud providers such as Oracle and Microsoft are beginning to offer access to the H100 processors on which the DGX is built, and Amazon Web Services will soon join them, Nvidia said.

Later this year, enterprises that don’t want to buy their own DGX supercomputer will have the option of renting clusters of those H100 processors by the month through the DGX Cloud service, which will be hosted by Nvidia’s hyperscaler partners.

CIOs are likely already using hardware, software, and perhaps services from Nvidia to support AI-enabled applications in the enterprise, but the company’s move deeper into the as-a-service market raises new questions, said IDC’s Rau.

“Having Nvidia as a service provider will likely mean a single source of responsibility for the service and a familiar underlying solution architecture,” he said. “But relying on Nvidia for service and the underlying solution architecture offers the prospect of costly lock-in should other service and solution architecture providers innovate faster on some measure, like performance or cost.”

Artificial Intelligence, Cloud Management, Infrastructure Management, IT Leadership

According to Infosys research, data and artificial intelligence (AI) could generate $467 billion in incremental profits worldwide and become the cornerstone of enterprises gaining a competitive edge.

But while opportunities to use AI are very real – and ChatGPT’s democratisation is accelerating generative AI test-and-learn faster than QR code adoption during the Covid pandemic – the utopia of substantial business wins through autonomous AI is a fair way off. Getting there requires process and operational transformation, new levels of data governance and accountability, business and IT collaboration, and customer and stakeholder trust.

The reality is many organisations still struggle with the data and analytics foundations required to progress down an advanced AI path. Infosys research found 63 per cent of AI models function at basic capability only, are driven by humans, and often fall short on data verification, data practices and data strategies. It’s not surprising, then, that only one in four practitioners is highly satisfied with their data and AI tools so far.

This status quo can be partly explained by the fact that eight in 10 organisations only began their AI journey in the last four years. Just 15 per cent of organisations have achieved what’s described as an ‘evolved’ AI state, where systems can find causes, act on recommendations and refine their own performances without oversight.

Then there are the trust and accuracy considerations around AI utilisation to contend with. Gartner forecast that through 2022, 85 per cent of all AI projects would deliver erroneous outcomes as a result of mistakes, errors and bias. One in three companies, according to Infosys, are using data processes that increase the risk of bias in AI right now.

Ethical use of AI is therefore an increasingly important movement being led by government, industry groups and thought leaders as this disruptive technology advances. It’s for these important reasons the Australian Government deployed the AI Ethics Principles framework, which followed an AI ethics trial in 2019 supported by brands such as National Australia Bank and Telstra.

Yet even with all these potential inhibitors, it’s clear the appetite for AI is growing and spend is increasing with it.

So what can IT leaders and their teams do now to take AI out of the data science realm, and into practical business applications and innovation pipelines? What data governance, operational and ethical considerations must we factor in? And what human oversight is required?

It’s these questions technology and transformation leaders from finance, education and retail sectors explored during a panel session at the Infosys APAC Confluence event. Here’s what we discovered.

Operational efficiency is the no-brainer use case for AI

While panellists agreed use cases for AI could well be endless and societally positive, the ones gaining most favour right now are oriented to operational efficiency.

“We are seeing AI drive a lot deeper into the organisation around how we can revolutionise our business processes, change how we run our organisation, and all add that secret sauce from a data and analytics perspective to improve customer outcomes,” said ANZ Bank CIO for Institutional Banking and Markets, Peter Barrass.

One example is meeting legislative requirements to monitor the communications traders generate across 23 countries, where AI is successfully used to analyse, interpret and monitor for fraudulent activity at global scale. Document crunching and digitisation, along with chatbots, are other examples.

Across retail and logistics sectors, nearly three in 10 retailers are actively adopting AI with strong business impact, said Infosys APAC regional head for Consumer, Retail and Logistics, Andal Alwan. While personalisation is often a headline item, AI is also increasing operational efficiencies and frictionless experiences across the end-to-end supply chain.

Cyber security is another favoured case for AI across multiple sectors, once again tying to risk mitigation and governance imperatives.

Advancing AI can’t be done without a policy and process rethink

But realising advanced AI isn’t only a technical or data actionability feat. It requires transformation at a systematic, operational and cultural level.

Just take the explosion of accessible AI to students from a learning perspective. With mass adoption comes the need for education institutions such as the Melbourne Archdiocese Catholic Schools (MACS) to actively build policies and positions around AI use. One consideration is how open accessibility of such tools can influence students. Another is protecting academic integrity.

Then it’s about making sure leadership guidance is clear at the education-system level, so there is consistency across MACS’ 300 schools in how AI is utilised in learning. “We need to educate our teachers to be able to think about how their students will use AI and how they can maximise the learning for individual students, taking on-board some of these types of tools available,” MACS chief technology and transformation officer, Vicki Russell, said.

Elevating data governance and sharing is critical

Simultaneously, data governance and practices need refinement. Alwan outlined two dimensions to the data strategy debate: intra-organisation and inter-organisation.

“Intra-organisation is about how I govern the data: What data I collect, why I’m collecting it and how am I protecting and using it,” she explained. “Then there’s inter-organisation, or between retailers, producers and logistic firms, for instance. Collaboration and sharing of data is very important. Unless there is visibility end-to-end of the supply chain, a retailer isn’t going to know what’s available and when it’s going to be arriving. All of this requires huge amounts of data, which means we’re going to need AI for scaling and to predict trends too.”

A further area of data collaboration is between retailers and consumers, which Alwan referred to as “autonomous supply chains”. “It’s about understanding demand signals from the point of consumption, be it online or physical, then translating that in real time to organisation systems to get more security of planning and supply chain. That’s another area of AI maturity we’re seeing evolving.”

Infosys Knowledge Institute’s Data + AI Radar found organisations wanting to realise business outcomes from advanced AI must develop data practices that encourage sharing and position data as currency.

But even as the financial sector works to pursue data sharing through the Open Banking regime, Barrass reflected on the need to protect the information and privacy of customers and be deliberate about the value data has to both organisation and customer.

“In the world of data, you have to remember you have multiple stakeholders,” he commented. “The customer and person who owns the data and who the data is associated with is really the curator of that information, and should have a right to decide where and how it’s shared. Corporates like banks have a responsibility to customers to enable that. That needs to be wrapped up in your data strategy.”

Internally, utilising the wealth of educational learnings and data points MACS has been capturing is a critical foundation for using AI successfully.

“The data and knowledge a business has about itself before it enters into an AI space is really important in that maturity curve,” Russell said. “Having great data and knowing what you have to a certain extent before you jump into your AI body of work or activities is important. But I also think AI can help us leapfrog. There’s knowing enough but also being open to what you might discover along that journey.”

Building trust with customers around AI still needs human oversight

What’s clear is the onus is on organisations to structurally address trust and bias issues, especially as they lean towards allowing AI to generate outcomes for customers autonomously. Ethical use of data and trust in what and how information is used must come into play. As a result, parallel human oversight of what the machine is doing to ensure outcomes are accurate and ethical remains critical.

“Trust in the source of information and really clear ownership of that information is really important, so there’s clear accountability in the organisational structure for who is responsible for maintaining a piece of information driving customer decision outcomes,” said Barrass.  “Then over time, as this matures, we potentially could have two sets of AI tools looking at the same problem sets and validating each other’s outcomes based on different data sets. So you at least get some validation of one set of information drivers.”

Transparency of AI outcomes is another critical element with customers if trust in AI is to evolve over time. This again comes back to stronger collaboration with data owners and stakeholders, an ability to detail the data points driving an AI-based outcome, and explaining why a customer got the result they did.

“It’s very important to be conscious of the bias and how you balance and provide vast sets of data that constantly work against the bias and correct it,” Alwan added. “That’s going to be key for the success of AI in the business world.” 

We all need to work with ChatGPT, not against it

Even as we strive for responsible AI use, ChatGPT is accelerating generative AI adoption at an unprecedented rate. Test cases are emerging in everything from architectural design to writing poetry, creating legal statements of claim and developing software code. Panellists agreed we’re only scratching the surface of the use cases this generative AI can tackle.

In banking, it’s about experimenting in a controlled way and understanding the ‘why’ so generative AI is applied to achieve solid business outcomes, Barrass said. In the world of retail and consumer-facing industries, conversational commerce is already front and centre, and ChatGPT is set to accelerate this further, Alwan said.

For Russell, the most important thing is ensuring the future generation learns how to harness openly accessible AI tools, prompting them appropriately to extract good information and then referencing it. In other words, education evolves and works with the technology.

It’s a good lesson for us all.

Infosys

Companies are capturing more data and deploying more compute capacity at the edge. At the same time, they are laying the groundwork for a distributed enterprise that can capitalize on a multiplier effect to maximize intended business outcomes.

The number of edge sites — factory floors, retail shops, hospitals, and countless other locations — is growing. This gives businesses more opportunity to gain insights and make better decisions across the distributed enterprise. Data follows the activities of customers, employees, patients, and processes. Pushing computing power to the distributed edge ensures that data can be analyzed in near real time — a model not possible with cloud computing. 

With centralized cloud computing, bandwidth constraints mean it takes too long to move large data sets and analyze them. This introduces unwanted decision latency, which, in turn, destroys the business value of the data. Edge computing addresses the need for immediate processing by leaving the data where it is created and instead moving compute resources next to those data streams. This strategy enables real-time analysis of data as it is being captured and eliminates decision delays. The next level of operational efficiency can now be realized through real-time decision-making and automation at the edge: where the activity takes place.
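
A rough back-of-the-envelope comparison shows why shipping data to a central cloud destroys responsiveness. Every number below is an illustrative assumption, not a measurement from any particular deployment.

```python
# Illustrative cloud-vs-edge decision latency; all numbers are assumptions.
DATASET_GB = 500      # sensor data captured at one site per day
UPLINK_GBPS = 1.0     # WAN bandwidth from the site to the cloud
CLOUD_RTT_S = 0.08    # round-trip network latency to the cloud
ANALYSIS_S = 5.0      # compute time for the analytics job itself

# Centralized model: move the data first, then analyze it.
transfer_s = DATASET_GB * 8 / UPLINK_GBPS  # gigabytes -> gigabits / (Gb/s)
cloud_total_s = transfer_s + CLOUD_RTT_S + ANALYSIS_S

# Edge model: the data stays where it is created; only compute time applies.
edge_total_s = ANALYSIS_S

print(f"cloud: {cloud_total_s / 60:.1f} minutes, edge: {edge_total_s:.0f} seconds")
# cloud: 66.8 minutes, edge: 5 seconds -- the transfer alone dominates.
```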

Industry experts are projecting that 50 billion devices will be connected worldwide this year, with the amount of data being generated at the edge slated to increase by over 500% between 2019 and 2025, amounting to a whopping 175 zettabytes worldwide. The tipping point comes in 2025, when, experts project, roughly half of all data will be generated and processed at the edge, soon overtaking the amount of data and applications addressed by centralized cloud and data center computing.

The deluge of edge data opens opportunities for all kinds of actionable insights, whether it’s correcting a factory floor glitch that impacts product quality or serving up a product recommendation based on customers’ past buying behavior. On its own, such an individual action can have genuine business impact. But when you multiply the possible effects across thousands of locations processing thousands of transactions, there is a huge opportunity to parlay insights into revenue growth, cost reduction, and even business risk mitigation.

“Compute and sensors are doing new things in real time that they couldn’t do before, which gives you new degrees of freedom in running businesses,” explains Denis Vilfort, director of Edge Marketing at HPE. “For every dollar increasing revenue or decreasing costs, you can multiply it by the number of times you’re taking that action at a factory or a retail store — you’re basically building a money-making machine … and improving operations.”

The multiplier effect at work

The rise of edge computing essentially transforms the conventional notion of a large, centralized data center into having more data centers that are much smaller and located everywhere, Vilfort says. “Today we can package compute power for the edge in less than 2% of the space the same firepower took up 25 years ago. So, you don’t want to centralize computing — that’s mainframe thinking,” he explains. “You want to democratize compute power and give everyone access to small — but powerful — distributed compute clusters. Compute needs to be where the data is: at the edge.”

Each location leverages its own insights and can share them with others. These small insights can optimize operation of one location. Spread across all sites, these seemingly small gains can add up quickly when new learnings are replicated and repeated. The following examples showcase the power of the multiplier effect in action:

Foxconn, a large global electronics manufacturer, moved from a cloud implementation to high-resolution cameras and artificial intelligence (AI) enabled at the edge for a quality assurance application. The shift reduced pass/fail time from 21 seconds down to one second; when this reduction is multiplied across a monthly production of thousands of servers, the company benefits from a 33% increase in unit capacity, representing millions more in revenue per month.

A supermarket chain tapped in-store AI and real-time video analytics to reduce shrinkage at self-checkout stations. That same edge-based application, implemented across hundreds of stores, prevents millions of dollars of theft per year.

Texmark, an oil refinery, was pouring more than $1 million a year into a manual inspection process, counting on workers to visually inspect 133 pumps and miles of pipeline on a regular basis. Having switched to an intelligent edge compute model, including the installation of networked sensors throughout the refinery, Texmark is now able to catch potential problems before anyone is endangered, not to mention benefit from doubled output while cutting maintenance costs in half.

A big box retailer implemented an AI-based recommendation engine to help customers find what they need without having to rely on in-store experts. Automating that process increased revenue per store. Multiplied across its thousands of sites, the edge-enabled recommendation process has the potential to translate into revenue upside of more than $350 million for every 1% revenue increase per store. 

The HPE GreenLake Advantage

The HPE GreenLake platform brings an optimized operating model, consistent and secure data governance practices, and a cloudlike platform experience to edge environments — creating a robust foundation upon which to execute the multiplier effect across sites. For many organizations, the preponderance of data needs to remain at the edge, for a variety of reasons, including data gravity issues or because there’s a need for autonomy and resilience in case a weather event or a power outage threatens to shut down operations.

HPE GreenLake’s consumption-based as-a-service model ensures that organizations can more effectively manage costs with pay-per-use predictability, also providing access to buffer capacity to ensure ease of scalability. This means that organizations don’t have to foot the bill to build out costly IT infrastructure at each edge location but can, rather, contract for capabilities according to specific business needs. HPE also manages the day-to-day responsibilities associated with each environment, ensuring robust security and systems performance while creating opportunity for internal IT organizations to focus on higher-value activities.

As the benefits of edge computing are multiplied across processes and locations, the advantages are clear. For example, an additional $2,000 in bottom-line profit per location per month is easily obtained with a per-location HPE GreenLake compute service priced at, say, $800 per location per month. The net profit, then, is $1,200 per location per month. Multiplied across 1,000 locations, that is an additional $1.2 million in aggregated profit per month, or $14.4 million per year. Small positive changes across a distributed enterprise quickly multiply, and tangible results are now within reach.
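
The arithmetic in that example is simple enough to express directly. A minimal sketch, using the same figures quoted above:

```python
# Multiplier-effect arithmetic using the figures from the example above.
profit_gain_per_site = 2_000   # added monthly bottom-line profit per location ($)
service_cost_per_site = 800    # monthly HPE GreenLake compute service cost ($)
locations = 1_000

net_per_site = profit_gain_per_site - service_cost_per_site  # $1,200
monthly_total = net_per_site * locations                     # $1,200,000
annual_total = monthly_total * 12                            # $14,400,000

print(f"${monthly_total:,} per month, ${annual_total:,} per year")
```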

As companies build out their edge capabilities and sow the seeds to benefit from a multiplier effect, they should remember to:

Evaluate what decisions can benefit from being made and acted upon in real time as well as what data is critical to delivering on those insights so the edge environments can be built out accordingly

Consider scalability — how many sites could benefit from a similar setup and how hard it will be to deploy and operate those distributed environments

Identify the success factors that lead to revenue gains or cost reductions in a specific edge site and replicate that setup and those workflows at other sites

In the end, the multiplier effect is all about maximizing the potential of edge computing to achieve more efficient operations and maximize overall business success. “We’re in the middle of shifting from an older way of doing things to a new and exciting way of doing things,” Vilfort says. “At HPE we are helping customers find a better way to use distributed technology in their distributed sites to enable their distributed enterprise to run more efficiently.”

For more information, visit https://www.hpe.com/us/en/solutions/edge.html

Edge Computing

If digital transformation was about driving fundamental change within the company, then its next chapter will be far more outward-looking. This is about being digital-first: to build digital businesses that are viable and sustainable in the long term. Rather than just leveraging digital technology to seize new opportunities, such organisations are poised to create operating models for meeting evolving customer needs. In fact, 95% of CEOs globally already see the need to adopt a digital-first strategy.

But what does it mean to be a digital business? Firstly, digital businesses embrace a digital-first enterprise strategy led by the CEO and other C-suite executives. They use technology to stay competitive, shifting their priorities from just driving efficiency. They are fixated on delivering business outcomes at scale with digital innovation programs. They create value through digital technologies.

This change can be seen in how most Asia/Pacific Japan (APJ) CIOs are taking up the role of strategic advisor and partner, collaborating with their business counterparts in operations and product development. And with revenue generation becoming an integral part of the CIO’s breadth of responsibilities, it’s clear that technology is taking on a leading role in value creation.

More and more businesses today have taken root in a digital business model to serve as their stepping stone towards becoming Future Enterprises. The Future Enterprise is the vision of International Data Corporation (IDC) on how enterprises should operate and invest not only to achieve measurable business goals and outcomes, but to participate in the new era of digital businesses. This is where forward-thinking organisations will thrive by attracting digital talent, improving enterprise intelligence, scaling digital innovation, and more.

To celebrate and recognise the APJ leaders and businesses who have challenged themselves to become a digital business, IDC has relaunched the APJ IDC Future Enterprise Awards. Last year’s standout winners included companies and individuals such as:

Midea Group (China), Future Enterprise of the Year, for deploying AI and advanced digital technologies to enhance its user experience with an end-to-end value chain while providing digital empowerment for all employees and partners to create a flexible and labour- and energy-efficient supply chain.

Maria Beatriz A. Adversalo of Malayan Insurance Co., Inc. (Philippines), recognized as CIO of the Year in Asia/Pacific, for leading the company’s digital transformation program strategy, which includes the deployment of web apps, portals, APIs, OCR, RPA, analytics, and cloud, resulting in increased digitalised policy issuance premiums and savings in manhours and software subscriptions.

James Chen of CTBC Bank (Taiwan), lauded as CEO of the Year in Asia/Pacific, for his forward-thinking leadership to strengthen the bank’s digital technology services by investing over TWD7.67 billion to modernise its information core and transform its technology to better serve digital customers.

Zuellig Pharma Holdings Pte. Ltd., Best in Future of Intelligence in Singapore, for leveraging data analytics to build a data superhighway that would connect all its current and future digital and data solutions. Anchored in the mission of making healthcare more accessible, it built three main pillars of service — commercial excellence, supply chain analytics, and business intelligence — to deliver actionable intelligence and insights. As a result of improved insights and services, the data analytics team has secured collaborative projects with over 30 principals and generated more than US$8 million in revenue in the last 18 months.

Entries will be judged against these critical capabilities of the Future Enterprise:

Future of Trust
Future of Industry Ecosystems
Future of Operations
Future of Work
Future of Intelligence
Future of Digital Infrastructure
Future of Connectedness
Future of Customer Experience
Future of Digital Innovation
Future Enterprise of the Year Award

To celebrate the innovative works of individuals and organisations, the Future Enterprise Awards also have these categories:

CEO of the Year
CIO/CDO of the Year
Special Award for Digital Resiliency
Special Award for Sustainability

This year, to recognise outstanding organisations born in the digital-native era and smart cities projects, IDC will also hand out Special Awards for:

Digital Native Business
Smart Cities – Best in Connected City
Smart Cities – Best in Digital Policies
Smart Cities – Best in Citizen Wellbeing

The Future Enterprise Awards will also serve as a forum for sharing smart cities’ best practices to aid and accelerate development in APJ. As smart cities catalyse the digital transformation of urban ecosystems towards systemic environmental, financial, and social outcomes, they tap into emerging technologies and innovation to make cities more liveable, while offering new services and economic opportunities.

Nominations are now open for the awards across different regions — APJ, North America, Europe, and the Middle East, Africa, and Turkey — with entries reviewed by a select panel of judges composed of IDC worldwide analysts, industry thought leaders, and members of academia. Each nomination is first evaluated by IDC’s country and regional analysts against a standard assessment framework based on IDC’s Future Enterprise taxonomy. Winners of each country will then qualify for the regional competition.

Visit IDC Future Enterprise Awards to learn more. To submit a nomination, complete this form by 16 June, 2023.

Digital Leader Award, Enterprise

Vince Kellen understands the well-documented limitations of ChatGPT, DALL-E and other generative AI technologies — that answers may not be truthful, generated images may lack compositional integrity, and outputs may be biased — but he’s moving ahead anyway. Kellen, CIO at the University of California San Diego (UCSD), says employees are already using ChatGPT to write code as well as job descriptions.

OpenAI’s text-generating ChatGPT, along with its image generation cousin DALL-E, are the most prominent among a series of large language models, also known as generative language models or generative AI, that have captured the public’s imagination over the last year. The models respond to written requests to generate a variety of responses ranging from text documents and images to programming code.

Kellen sees ChatGPT-generated code as a productivity-enhancing tool in much the same way that compilers were an improvement over assembly language. “Something that produces libraries and software is no different than searching GitHub,” he says. “We also use it to write job descriptions that are sensitive to our content and formatting. You can then move on to editing very quickly, looking for errors and confabulations.” While the technology is still in its early stages, for some enterprise applications, such as those that are content and workflow-intensive, its undeniable influence is here now — but proceed with caution.

Ready for the right applications

Generative AI is ready for use in coding, administrative workflows, data refinement, and simple use cases such as pre-filling forms, says Oliver Wittmaier, CIO and product owner at DB SYSTEL GmbH, the wholly owned subsidiary of DB AG and digital partner for all group companies. And in the transportation industry, he says, “AI can directly or indirectly impact the avoidance of transport, the steering of transport, and the management of transport.”

Oliver Wittmaier, CIO and product owner at DB SYSTEL GmbH

Content generation is also an area of particular interest to Michal Cenkl, director of innovation and experimentation at Mitre Corp. “I want contextual summarization and refinement via dialog, and that’s what these large-language models deliver,” he says. Currently his team is looking into two use cases in the knowledge and expertise domains. “The first is if I want to write an email to one of our sponsors that summarizes the work we’ve done that’s relevant to them—and write it in the context of communications we’ve already had with them. That’s incredibly powerful.”

The second is for project staffing. Normally Cenkl reviews résumés and searches by skills tags to find the right people for a project. Generative AI can facilitate that. “For example, I might want to ask, ‘What can Michael do on this project,’ based on what he’s doing now, and get a summary of what he could do without me having to construct that from a résumé.”

And over at used car retailer CarMax, the team has been using generative AI for over a year, leveraging OpenAI’s APIs to consolidate customer review text into summaries that are more manageable and readable. But CIO Shamim Mohammad says his team has expanded its use of the technology into other areas as well.
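
CarMax hasn’t detailed its implementation, but a minimal sketch of review summarization against OpenAI’s API (using the Python SDK as it stood in early 2023) might look like the following; the prompt wording and model choice are assumptions.

```python
import os
import openai  # OpenAI Python SDK, pre-1.0 interface

openai.api_key = os.environ["OPENAI_API_KEY"]

def summarize_reviews(reviews: list[str]) -> str:
    """Condense many customer reviews into one short, readable summary."""
    joined = "\n".join(f"- {r}" for r in reviews)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model choice is an assumption
        messages=[
            {"role": "system",
             "content": "Summarize these car reviews in three neutral sentences."},
            {"role": "user", "content": joined},
        ],
        temperature=0.2,  # keep the summary close to the source text
    )
    return response.choices[0].message.content

print(summarize_reviews([
    "Smooth ride and great mileage, but the infotainment system lags.",
    "Comfortable seats; road noise is noticeable at highway speeds.",
]))
```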

One application, in vehicle imaging, was conceived as a way to improve customer experience. AI optimizes images for every vehicle the company adds to its inventory, which at any given time includes between 50,000 and 60,000 vehicles, he says. “We make every image as realistic as possible without losing the validity of it,” he says. For example, its data scientists created a “digital sweeper” model that replaces a photo of a car sitting on a dirty floor with an image that shows the car sitting on a clean one. “It’s still the same car, but looks better and it’s a better experience for the customer,” he says.

Similarly, Nike has been using generative AI to generate product prototype images, says Forrester analyst Rowan Curran. “You can use a text-to-3D modeler, test in 3D space, and get a much more visceral feel for how it will look in the real world — all with very little effort,” he says.

Applications with the greatest potential payback

Creating code and improving customer experience are the main areas companies can take advantage of today using generative AI, and they have the greatest potential payback in terms of efficiency gains, Mohammad says.

Shamim Mohammad, CIO, CarMax

Gary Jeter, EVP and CIO at TruStone Financial Credit Union, says these are areas his developers have also pursued with GitHub’s implementation of OpenAI’s Codex. And, he says, using generative AI for coding has worked well. Cenkl adds that generative AI models work better on coding than on human language because programming languages are more structured. “It can tease out that structure, and that’s why it works,” says Cenkl.

CarMax is also experimenting with GitHub’s Copilot, which Mohammad says could in some cases let engineers generate up to 40% of their code. “This is evolving quickly,” Mohammad says. “But you have to make sure there’s no copyright infringement, fake content or malware embedded if you’re using it to create software.” You can’t just plug that code in without oversight.

Other areas ripe for enterprise applications, says Curran, include generating marketing copy, images, designs, and creating better summaries of existing data so people can consume it more effectively. “Some people are even using these large language models as a way to clean unstructured data,” he says. And in the coming year, generative AI capabilities may begin to appear in some enterprise software ranging from help desk software to Microsoft Office applications.

Don’t trust, verify

Aside from the benefits, CIOs deploying the technology need to be aware of potential intellectual property issues regarding generated outputs, CarMax’s Mohammad cautions. Generative models such as DALL-E, which train on data from the internet, have generated content that may infringe on copyrighted works, which is why Getty Images recently sued Stability AI over its AI-driven art generation tool Stable Diffusion.

Michal Cenkl, director of innovation and experimentation, Mitre Corp.

The technology also needs human oversight. “Systems like ChatGPT have no idea what they’re authoring, and they’re very good at convincing you that what they’re saying is accurate, even when it’s not,” says Cenkl. There’s no AI assurance — no attribution or reference information letting you know how it came up with its response, and no AI explainability, indicating why something was written the way it was. “You don’t know what the basis is or what parts of the training set are influencing the model,” he says. “What you get is purely an analysis based on an existing data set, so you have opportunities for not just bias but factual errors.”

Wittmaier is bullish on the technology, but still not sold on customer-facing deployment of what he sees as an early-stage technology. At this point, he says, there’s short-term potential in the office suite environment, customer contact chatbots, help desk features, and documentation in general, but in terms of safety-related areas in the transportation company’s business, he adds, the answer is a clear no. “We still have a lot to learn and improve to be able to include generative AI in such sensitive areas,” he says.

Jeter has similar concerns. His team used ChatGPT to identify a code fix and deploy it to a website within 30 minutes — “It would have taken much longer without ChatGPT” — and he thinks it’s useful for drafting terms and conditions in contracts. But the technology is not entirely proven. “We will not expose any generative AI to external members,” he says. “TruStone will not be bleeding edge in this space.”

Gary Jeter, EVP and CIO, TruStone Financial Credit Union

When TruStone eventually starts using the technology for the benefit of its members, he adds, it will monitor conversations through human and automated review to protect its members and the brand.

Today, the key to successful deployment is still having a human in the loop to review generated content for accuracy and compliance, says UCSD’s Kellen. “Making sure the machine makes the right decision becomes an important litigation point,” he says. “It’ll be quite a while before organizations [use it] for anything that’s high risk, such as medical diagnoses.” But generative AI works fine for generating something like review summaries, provided there’s a human overseeing them. “That slows us down a bit, but it’s the right thing to do,” he says. Eventually, he adds, “We’ll find automated ways to ensure that quality is good. But right now, you must have a review process to make sure the content generated is accurate.”

Vince Kellen, CIO, UCSD

Another well-documented risk, in addition to accuracy, is the potential for bias in the models introduced from the data used to train them. This is especially problematic when generative AI is using content from the Internet, as ChatGPT does, but that may be less of an issue when training the model against your own private corporate data that you can review for potential bias, Kellen says. “The more you get to the enterprise, where the class of data is more constrained and more mundane, the more generative AI shines,” he says.

The thing to understand about large-language models, says Cenkl, is that these machines are, to some degree, savants. “They don’t understand, but they’re very good at computing,” he says.

Changes in job responsibilities, roles

“Technology has made things better, but it’s also created a lot of extra work for us,” says Mohammad. However, he believes generative AI is different. “It’s exciting because it’s going to take away some of the stuff we don’t like to do and make us more intelligent,” he says. “It will augment humans.”

But Curran points out that there’s no expectation that generative AI will completely replace any role in the short term. “It may reduce the number of people needed to execute a role, such as in content development, product information management or software development,” he says. “But there will always be the need for a human in the loop.” And Mohammad adds that even if the technology can write and summarize, human intelligence will always be needed to ensure quality, and to control what’s been generated to make it better.

Steps to get started

Now is the time to get up to speed on generative AI technology and start experimenting, says Kellen. “CIOs have to get their heads inside this puzzle before they’re bamboozled by vendors who are embedding the technology into their enterprise software offerings,” he says. “If you spend the next year procrastinating, you’ll be behind the curve.”

It’s important to get educated and go deeper than the public discussion on ChatGPT in order to understand that this technology is much more complex than one application, says Curran. Then start considering use cases where generative AI might improve the efficiency or quality of existing processes. Finally, ask what types of capabilities you’ll need and whether you should acquire that from a vendor or build it yourself.

From there it’s a matter of testing the technology and considering potential use cases. “A lot of your systems, whether they use structured or unstructured data, will have at least some component of natural language and conversational interface,” says Cenkl. “Think about the data you have and what parts of that can be augmented by these technologies,” and then demonstrate the potential. For example, Jeter says he generated a template of terms and conditions and sent it to his compliance department to show how they could use it.

Generative AI models are large, and training them from scratch is expensive, so the best way to get started is to use one of the cloud services, says Curran. CarMax, for example, uses Microsoft’s Azure OpenAI Service with GPT 3.5. “The data we load is our own — it’s not shared with others,” Mohammad says. “We can have massive amounts of data and process it very quickly to run our models. If you have a small team or business problem that might take advantage of generative AI technology, give it a shot.”
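
Pointing the same SDK at a private Azure OpenAI deployment, as CarMax describes, is mostly a configuration change. Here is a minimal sketch with the pre-1.0 SDK; the resource URL, deployment name, and API version are placeholders.

```python
import os
import openai

# Route requests to a private Azure OpenAI resource instead of the public API.
# The resource URL, deployment name, and API version below are placeholders.
openai.api_type = "azure"
openai.api_base = "https://my-resource.openai.azure.com/"
openai.api_version = "2023-03-15-preview"
openai.api_key = os.environ["AZURE_OPENAI_KEY"]

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # Azure takes a deployment name, not a model name
    messages=[{"role": "user",
               "content": "Summarize this customer review: ..."}],
)
print(response.choices[0].message.content)
```

Because the deployment lives in the organization’s own Azure resource, prompts and loaded data stay within that tenancy, which is the isolation Mohammad describes.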

Application Management, Artificial Intelligence, CIO, Emerging Technology, IT Leadership

Huawei’s Enterprise Business Group (EBG) arrived at Mobile World Congress in Barcelona this year with a proposition fit for the times, emphasizing the value created by digital transformation across multiple industries and use case scenarios. Huawei has developed more than 100 scenario-based solutions covering over 10 industries. EBG’s strategy of ‘Weaving Technologies for Industry Scenarios’ has paid off — the business has been growing rapidly, with Huawei’s overall revenue reaching about 636 billion yuan in 2022.

On day two of the mobile conference, four of EBG’s senior executives took part in an hour-long panel discussion in front of journalists and partners. Their aim: to explain how EBG is playing its part in Huawei’s larger effort to help industries go digital as technology plays an increasingly important role in the economy, culture, society and the environment.

Huawei’s Enterprise Business Group is one of Huawei’s three major divisions, sitting alongside its carrier business and its consumer electronics unit. EBG’s core business is the infrastructure that enables digital transformation and the “scenario-based technologies” created in collaboration with a fast-growing partner ecosystem. To better meet customer needs, EBG has established business units (BUs) dedicated to certain industries, such as the Government Public Services Digitalization BU and the Digital Finance BU, which integrate resources to efficiently serve and create value for customers, helping industries digitally transform.

To date, Huawei’s enterprise group has worked with more than half of the Fortune Global 500. Moreover, 54.8% of Huawei employees are engaged in research and development (R&D), which over the last decade has been supported with US$132.5 billion in investment. Bob Chen, EBG Vice-President, also announced a new Small and Medium Enterprise business strategy at MWC, which will see Huawei step up investment in this market to support these businesses as they seek to transform. EBG is also transforming its organization, channels, and IT equipment to extend its reach in SME-dominated markets. Six distribution product R&D teams have been set up, and more than 200 new products and solutions will be released to the SME market this year. Huawei will continue to work with partners to help more SMEs achieve digital transformation and business success.

For all of these numbers though, this was a panel discussion frequently dominated by the qualitative impacts of digital transformation. Chen cited technology deployments that have revived regional economies, limited the devastation caused by forest fires and brought high-quality teaching resources to impoverished rural neighborhoods. According to Chen, digital technologies now play an essential role in “driving the development of the economy, culture, society and environment towards an intelligent world”.

Historically, Huawei has thrived on big visions and big projects. Jason Cao, CEO of Huawei Global Digital Finance, highlighted that mobile and intelligent financial services are increasingly popular and that the core fields are highly digitalized. Huawei strives to accelerate technology application in six fields: shifting from transactions to digital engagement, developing cloud-native applications and data, evolving infrastructure to MEGA, industrializing data and AI application, enhancing real-time data analysis, and moving towards a cutting-edge AI brain. In this way, Huawei helps financial customers accelerate change, improve productivity and make that productivity visible, and speed up their evolution towards the future.

Cao was one of two vertical sector specialists on the panel. The other was Hong-eng Koh, Huawei’s Global Chief Public Services Industry Scientist. With an MBA from Leeds University in the UK, Koh rose to play a leading role in Singapore’s e-Government program and spent 16 years in government-focused roles at Oracle before joining Huawei.

Instead of driving revenue and profit, Koh told the audience that the public sector has to use what he calls “people-centric services” to remove the friction from the relationship between citizen and state. “For example, a businessman wants to open a restaurant. . . Regulations require him to transact with numerous government agencies to get the necessary permits. Digital transformation can help make this reluctant businessman more willing [to accept digital channels].”

Koh took the audience through a four-minute summary of the way in which digital government can, and should, enable a “digital economy and digital society”. Stops along the way included government-owned broadband and cloud services providers in Nigeria and the UAE, an intelligent university campus in Macau, and e-government systems in Spain and Sweden.

Much of this work is achieved in collaboration with EBG’s 35,000-strong partner ecosystem, represented on stage by Haijun Xiao, President of Global Partner Development and Sales. Xiao’s worldwide brief is vast: 25,000 sales partners, 8,000 solution development and services partners, 2,400 training courses, and ICT academies working with 2,200 universities.

Here, too, discussions about value are noticeable. EBG has invested significantly in its partner ecosystem in recent years, signaling continued commercial momentum and increasing maturity. Xiao knows the value of his partners, and wants to keep them onside. “We adopt mutual benefits through open collaboration,” he says. “We adopt fair, just, transparent and simple partner policies.” This year, Huawei is building end-to-end capabilities from R&D, marketing, sales, supply, and service systematically, centering on “partner-centricity”.

In Huawei’s corporate calendar, the congress is one of the last big public events to take place before the privately-held company reveals its annual financial performance. Chen hinted at what we’re likely to hear in the near future: continuing rapid growth at EBG. EBG will be working with partners to help more SMEs go digital and succeed in 2023.

This year’s theme of the value generated by digital transformation is designed to perpetuate that track record of success in what seem likely to be more uncertain times.

Find out more about Huawei’s MWC program here.

Mobile World Congress

ChatGPT and other artificial intelligence tools have dominated the conversation lately. Their power to imitate human writing and art is raising concerns that machines could start replacing white-collar workers, the way they took over many blue-collar jobs in the 19th century. We at Digitate are thinking about machines’ role at work too, as we develop software tools to make the autonomous enterprise real.

In our vision of the “autonomous enterprise,” machines (or rather, AI algorithms) fulfill highly repetitive or defined tasks, while strategic, decision-making tasks are driven by humans.

You may think that rule means it’s easy to decide which tasks can be assigned to machines. But as AI and machine learning continue to become more sophisticated and powerful, the dividing line keeps moving. However, the distinguishing factor remains the same: Whether the task under consideration handles data in a defined or undefined way.

Defined: Activities in the defined cluster offer all the information (data) and instructions that you need to perform them. No information is hidden, and the specific instruction to use can be calculated using the data available. Defined data activities are ripe to be machine-managed.

Undefined: Activities in this cluster don’t offer all the necessary information to perform. Intuition, interpretation, analysis, deduction, and guessing are required. Undefined data activities do not adapt well to machine management.

Games can help us understand how to deploy AI

Games that are prime examples of these two clusters are chess and poker, respectively. These categories were first defined by pioneering mathematician and computer scientist John von Neumann (who created a whole field of study with his 1944 book, Theory of Games and Economic Behavior).

I was reminded about von Neumann’s distinction when I attended a speech last fall by scholar and poker champion Maria Konnikova that covered some of the points below.

First, think about a chess game. It has a defined set of pieces with specific roles, a clear set of rules, and a defined space (the chess board). All data is on display for both players, with no hidden information (and no ambiguity about whether a move is legal or not). The total number of all possible moves is very high, but not infinite. This means a machine equipped with a good set of algorithms and enough computing power can beat any chess champion. (In fact, this first happened a quarter-century ago.)

Now think about poker. It also has a defined set of pieces (a deck of cards), a set of rules, and a defined space (the card table). However, not all information is on display; in fact, the central mechanism of poker is to guess which cards your opponents secretly hold, and then successfully predict how they will bet. The game must be played by assumptions, clues, and intuitions about both the cards available and human behavior under specific emotional pressures.

Know when to fold ‘em? That does not compute

Here is the major difference: Machines don’t do well when not all the necessary information is available.

While I realize people might object that AI is progressing and it is mimicking human intelligence, there is no enterprise-grade application of such solutions yet. At least for the next few years, machines still can’t beat us at poker.

End-to-end enterprise operations are closer to poker than chess because often not all data are available. Decision-making is often driven by limited data, information interpretation, and intuition.

Machines are very effective and efficient at managing tasks with a clear set of data and a well-defined set of rules, also known as Standard Operating Procedures. In many enterprises, a wide range of operations from sales to HR can be described with SOPs, and therefore automated. (In IT operations, where I’ve spent my career, 80% of tasks can be machine-managed.)

The typical journey towards an autonomous enterprise usually moves through these phases:

Manual: There is no support by machines; all tasks are executed by humans.

Augmented: There are specific routines that alleviate repetitive tasks, but these routines are triggered by humans. (The most common phase nowadays.)

Automated: The machine reacts to a ticket (a human’s request), triggering a specific routine to solve the problem.

Autonomous: Machines suggest and execute actions to prevent incidents or improve overall performance. Usually there is a supervisory period where the human is “teaching” or modeling actions for the machine to take, which it will later execute without supervision.
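
As a concrete illustration of the last two phases, and of the defined/undefined distinction above, here is a minimal sketch of a remediation loop that executes a known runbook only when every input it needs is present, and otherwise escalates to a human. The incident types and runbooks are invented for the example; this is not Digitate’s ignio API.

```python
# Sketch of an automated/autonomous remediation loop. Incident types and
# runbooks are invented for illustration; this is not Digitate's ignio API.
from typing import Callable

RUNBOOKS: dict[str, Callable[[dict], None]] = {
    "disk_full": lambda ctx: print(f"purging temp files on {ctx['host']}"),
    "service_down": lambda ctx: print(f"restarting {ctx['service']}"),
}

# A task is machine-manageable only when it is fully "defined": a known
# procedure plus all of the data that procedure needs.
REQUIRED_INPUTS = {"disk_full": {"host"}, "service_down": {"host", "service"}}

def handle_incident(kind: str, context: dict) -> None:
    runbook = RUNBOOKS.get(kind)
    if runbook and REQUIRED_INPUTS[kind] <= context.keys():
        runbook(context)  # defined: execute without human intervention
    else:
        print(f"escalating {kind} to a human operator")  # undefined path

handle_incident("disk_full", {"host": "app-01"})   # automated
handle_incident("unknown_alert", {})               # needs human judgment
```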

At Digitate, we built “ignio™”, our flagship AIOps platform for IT and business operations, to become fully autonomous. After its “learning” period, ignio’s proprietary machine learning algorithm can filter out excess information generated by the production environment, focusing only on the activities needed to improve or rectify the situation.

Staying one move ahead with autonomous operations

Like any good chess computer program, ignio has a library of over 10,000 customizable moves (use cases) to apply when a situation occurs. Of course, at the beginning ignio will seek human approval before executing the use case. But when the machine learning period is over, ignio is ready to not just self-heal IT problems but optimize all kinds of business processes.

The bottom line: ignio is designed to be an autonomous enterprise solution for IT operations. ignio focuses on the whole landscape, not just single aspects such as data flow, ticket management, or monitoring. It is not merely a tool for a specific need, but rather a solution to make the autonomous enterprise a reality for IT.

And you can bet your whole stake on that deal.

To see ignio in action, click here to request a demo.

Artificial Intelligence, Machine Learning

The increasing amounts of data generated by today’s modern enterprise continue to challenge organizations as they look to extract valuable insights from that data. The inability to leverage all kinds of data, and the amount of expertise required from data scientists to prep that data, put a strain on many enterprises. IT groups need a new approach that allows for better solutions and gets data analysis tools and technologies into the hands of domain experts.

“It’s a very competitive marketplace for well trained, smart data scientists, and for the public and private sector, getting access to those resources and keeping them is very difficult,” says Andy MacIsaac, director of solutions marketing, public sector, at Alteryx. “But many times, organizations don’t always need to have that high-level resource for every data issue or need for answers. What we need to do is democratize the approach to data analytics and give access to the domain experts who can self-serve their data and analytics needs.”

Business leaders unable to access these data tools end up asking the data scientists for answers to their questions, taking them away from their larger, more mission-critical data projects. This creates friction as the two groups clash over what should be a cooperative effort.

“You want your data scientists to be working on bigger, more impactful projects,” says MacIsaac. “They don’t want to be data janitors, cleaning up data and running standard reports. They need to be tackling the big problems, because that’s what they were trained for. By lowering the barrier to analytics through new self-service and low-code data tools, you are putting a lot of capability into the hands of a larger group of people.”

Tools such as those offered by Alteryx can be quickly deployed by enterprises and public agencies alike, offering new insights for those groups. With automation and data cleanup tools freeing up time, data experts can look for new answers to new questions and take advantage of more data sources, all without worrying about the effort to code algorithms or figure out which data is good or bad.

For example, a government agency was able to use the tools to discover whether sailors on ships during the Vietnam War were eligible for benefits related to the deployment of Agent Orange, by determining what ship they were aboard on a given day.

“Typically a business or agency will start with a business problem or challenge, and there’s always a question that needs to be figured out,” says MacIsaac. “With these tools, you don’t have to be a data scientist to get the answers if you know the data you need, pull it down and create the analysis.”

To learn more about how Alteryx can help your organization expand the use of its data analytics tools, click here.

Data Science