Generative AI (GenAI) is taking the world by storm. During my career, I’ve seen many technologies disrupt the status quo, but none with the speed and magnitude of GenAI. Yet, we’ve only just begun to scratch the surface of what is possible. Now, GenAI is emerging from the consumer realm and moving into the enterprise landscape. And for good reason; GenAI is empowering big transformations.

My previous article covered how an enterprise’s unique needs are best met with a tailored approach to GenAI. Doing so on the front end will avoid re-engineering challenges later. But how can enterprises use GenAI and large language models today? From optimizing back-office tasks to accelerating manufacturing innovations, let’s explore the revolutionary potential of these powerful AI-driven technologies in action across various industries.

Enterprise Use Cases for GenAI

GenAI fuels product development and innovation

In product development, GenAI can play a crucial role in fueling the ideation and design of new products and services. By analyzing market trends, customer feedback and competitors’ offerings, AI-driven tools can generate potential product ideas and features, offering unique insights that help businesses accelerate innovation. For instance, automotive manufacturers can use GenAI to design lighter-weight components — via material science innovations and novel component designs — that help make vehicles more energy efficient.

GenAI crafts marketing campaigns

Large language models can produce highly personalized marketing campaigns based on customer data and preferences. By analyzing purchase history, browsing behavior and other factors, these models generate tailored messaging, offers and promotions for individual customers to increase engagement, conversion rates and customer loyalty. Gartner estimates that 30% of outbound marketing messages from enterprise organizations will be AI-driven by 2025, increasing from less than 2% in 2022.
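As a hedged illustration of the personalization step described above, the sketch below assembles customer purchase and browsing data into a prompt for a language model. The field names and the `build_campaign_prompt` helper are hypothetical, not part of any vendor API.

```python
# Hypothetical sketch: assembling a personalized-marketing prompt from
# customer data before sending it to a large language model.

def build_campaign_prompt(customer: dict) -> str:
    """Turn purchase history and browsing behavior into an LLM prompt."""
    recent = ", ".join(customer.get("recent_purchases", [])) or "no purchases yet"
    browsed = ", ".join(customer.get("browsed_categories", [])) or "no browsing data"
    return (
        f"Write a short promotional email for {customer['name']}. "
        f"Recent purchases: {recent}. "
        f"Recently browsed categories: {browsed}. "
        "Offer a relevant discount and keep the tone friendly."
    )

prompt = build_campaign_prompt({
    "name": "Dana",
    "recent_purchases": ["trail shoes"],
    "browsed_categories": ["hiking gear"],
})
print(prompt)
```

In a real pipeline, the returned prompt would be sent to a model endpoint and the draft reviewed before any message reaches a customer.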

GenAI enhances customer support

GenAI can provide instant, personalized responses to customer queries in an incredibly human-like manner. Large language models can offer relevant solutions, make product recommendations and engage in natural-sounding conversations. As a result, customers can gain faster response and resolution, and organizations can free up human agents to focus on more complex customer issues. For example, Amazon uses GenAI to power Alexa and its automated online chat assistant, both of which are available 24/7/365.

GenAI optimizes back-office tasks 

Generative AI models can automate and optimize various internal processes, such as drafting reports, creating standard operating procedures, and crafting personalized emails. Streamlining these tasks can reduce operational costs, minimize human error and increase overall efficiency.
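As a minimal sketch of this kind of back-office automation, the snippet below drafts a weekly report from structured data. In practice a generative model would produce or polish the prose; the template, field names, and `draft_report` helper are illustrative assumptions.

```python
# Illustrative sketch: automating a routine back-office task (a weekly
# status report) from structured operational data.

from string import Template

REPORT_TEMPLATE = Template(
    "Weekly Report for $team\n"
    "Tickets closed: $closed\n"
    "Tickets open: $open\n"
    "Summary: $summary\n"
)

def draft_report(team: str, closed: int, open_count: int, summary: str) -> str:
    """Fill the report template; a generative model could refine the summary."""
    return REPORT_TEMPLATE.substitute(
        team=team, closed=closed, open=open_count, summary=summary
    )

report = draft_report("Finance Ops", 42, 7, "Backlog trending down.")
print(report)
```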

GenAI writes software code

Through a technique known as neural code generation, GenAI enhances software development processes by automating code generation, refactoring and debugging. GenAI models can produce code snippets and suggest relevant libraries within the context and requirements of specific programming tasks. In this way, GenAI can help increase developer productivity, reduce errors and speed up development while providing more secure and reliable software. 
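One practical guardrail implied here is checking generated code before it enters a codebase. A minimal sketch, in which the snippet strings stand in for model output:

```python
# Minimal guardrail around neural code generation: syntax-check a
# model-suggested snippet before accepting it.

import ast

def is_valid_python(snippet: str) -> bool:
    """Return True if the generated snippet parses as Python."""
    try:
        ast.parse(snippet)
        return True
    except SyntaxError:
        return False

generated = "def add(a, b):\n    return a + b\n"
print(is_valid_python(generated))        # a parseable suggestion
print(is_valid_python("def broken(:"))   # a malformed one
```

A syntax check is only the first gate; the sections below on testing model output cover the behavioral checks that matter even more.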

GenAI’s Powerful Potential

These diverse use cases demonstrate the immense potential of Generative AI and large language models to revolutionize the way enterprises operate—and no industry is exempt. Harnessing these cutting-edge technologies will usher in transformative ways for organizations to enhance customer experiences, drive innovation throughout operations and gain new levels of competitive differentiation. 

Because its capabilities are so revolutionary, AI will create a widening gap between organizations that embrace its transformative power and those that do not. Our own research shows that AI leaders are already advantaged over late adopters. While the urgency to leverage AI varies by company and industry, IDC, in that same research study, posits that we have reached the point where every organization must have an AI approach in place to stay viable. Thus, exploring AI and GenAI today, before the yawning gap grows, is a crucial step for organizations that want to secure their future.


***

To help organizations move forward, Dell Technologies is powering the enterprise GenAI journey. With best-in-class IT infrastructure and solutions to run GenAI workloads and advisory and support services that roadmap GenAI initiatives, Dell is enabling organizations to boost their digital transformation and accelerate intelligent outcomes. 

The compute required for GenAI models has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today. Intel’s commitment to the democratization of AI and sustainability will enable broader access to the benefits of AI technology, including GenAI, via an open ecosystem. Intel’s AI hardware accelerators, including new built-in accelerators, provide performance and performance per watt gains to address the escalating performance, price and sustainability needs of GenAI.

Artificial Intelligence

While there’s an open letter calling for all AI labs to immediately pause training of AI systems more powerful than GPT-4 for six months, the reality is the genie is already out of the bottle. Here are ways to get a better grasp of what these systems are capable of, and to use that understanding to construct an effective corporate use policy for your organization.

Generative AI is the headline-grabbing form of AI that uses un- and semi-supervised algorithms to create new content from existing materials, such as text, audio, video, images, and code. Use cases for this branch of AI are exploding, and it’s being used by organizations to better serve customers, take more advantage of existing enterprise data, and improve operational efficiencies, among many other uses.

But just like other emerging technologies, it doesn’t come without significant risks and challenges. According to a recent Salesforce survey of senior IT leaders, 79% of respondents believe the technology has the potential to be a security risk, 73% are concerned it could be biased, and 59% believe its outputs are inaccurate. In addition, legal concerns need to be considered, especially around whether externally used, generative AI-created content is factual and accurate, whether it infringes copyright, or whether it draws on a competitor’s material.

As an example, and a reality check, ChatGPT itself tells us that, “my responses are generated based on patterns and associations learned from a large dataset of text, and I do not have the ability to verify the accuracy or credibility of every source referenced in the dataset.”

The legal risks alone are extensive, and according to non-profit Tech Policy Press they include risks revolving around contracts, cybersecurity, data privacy, deceptive trade practice, discrimination, disinformation, ethics, IP, and validation.

In fact, it’s likely your organization has a large number of employees currently experimenting with generative AI, and as this activity moves from experimentation to real-life deployment, it’s important to be proactive before unintended consequences happen.

“When AI-generated code works, it’s sublime,” says Cassie Kozyrkov, chief decision scientist at Google. “But it doesn’t always work, so don’t forget to test ChatGPT’s output before pasting it somewhere that matters.”
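Kozyrkov’s advice can be made concrete with a few quick assertions. In this sketch, the `slugify` function stands in for a snippet a model might return; the function and test cases are hypothetical examples.

```python
# Before pasting model-generated code anywhere that matters, exercise it
# with a few assertions. The slugify function below stands in for a
# snippet returned by ChatGPT.

def slugify(title: str) -> str:  # imagine this came from the model
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

# Quick checks a reviewer might run before accepting the snippet.
assert slugify("Hello World") == "hello-world"
assert slugify("  Spaces   everywhere ") == "spaces-everywhere"
print("all checks passed")
```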

A corporate use policy and associated training can help to educate employees on some of the risks and pitfalls of the technology, and provide rules and recommendations for how to get the most out of the tech, and, therefore, the most business value without putting the organization at risk.

With this in mind, here are six best practices to develop a corporate use policy for generative AI.

Determine your policy scope – The first step in crafting your corporate use policy is to consider its scope. For example, will this cover all forms of AI or just generative AI? Focusing on generative AI may be a useful approach since it addresses large language models (LLMs), including ChatGPT, without having to boil the ocean across the AI universe. How you establish AI governance for the broader topic is another matter, and there are hundreds of resources available online.

Involve all relevant stakeholders across your organization – This may include HR, legal, sales, marketing, business development, operations, and IT. Each group may see different use cases and different ramifications of how the content may be used or misused. Involving IT and innovation groups can help show that the policy isn’t just a clamp-down from a risk-management perspective, but a balanced set of recommendations that seeks to maximize productive use and business benefit while managing business risk.

Consider how generative AI is used now and may be used in the future – Working with all stakeholders, itemize all your internal and external use cases that are being applied today, and those envisioned for the future. Each of these can help inform policy development and ensure you’re covering the waterfront. For example, if you already see proposal teams, including contractors, experimenting with content drafting, or product teams experimenting with creative marketing copy, then you know there could be subsequent IP risk due to outputs potentially infringing on others’ IP rights.

Cover the whole content lifecycle – When developing the corporate use policy, it’s important to think holistically: cover the information that goes into the system, how the generative AI system is used, and how the information that comes out is subsequently utilized. Focus on both internal and external use cases and everything in between. Requiring all AI-generated content to be labelled as such, even for internal use, ensures transparency, avoids confusion with human-generated content, and can help prevent that content from being accidentally repurposed for external use or acted on as factual without verification.
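The labelling rule above can be sketched in code: every AI-generated output travels with provenance metadata so downstream users can’t mistake it for verified, human-written text. The `ContentRecord` shape and `label_ai_output` helper are illustrative assumptions, not a standard.

```python
# Hedged sketch of the labelling practice: wrap every piece of
# AI-generated content in a small record that carries its provenance.

from dataclasses import dataclass

@dataclass
class ContentRecord:
    body: str
    ai_generated: bool
    verified: bool = False  # flipped only after human fact-checking

def label_ai_output(text: str) -> ContentRecord:
    """Tag model output as AI-generated and unverified by default."""
    return ContentRecord(body=text, ai_generated=True)

draft = label_ai_output("Q3 outlook summary ...")
print(draft.ai_generated, draft.verified)
```

Downstream publishing steps could then refuse to release any record where `verified` is still False.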

Share broadly across the organization – Since policies often get quickly forgotten or not even read, it’s important to accompany the policy with suitable training and education. This may include developing training videos and hosting live sessions. For example, a live Q&A with representatives from your IT, innovation, legal, marketing, and proposal teams, or other suitable groups, can help educate employees on the opportunities and challenges ahead. Be sure to give plenty of examples to make it real for the audience, such as citing major legal cases as they arise.

Make it a living document – As with all policy documents, you’ll want to make this a living document and update it at a suitable cadence as your emerging use cases, external market conditions, and developments dictate. Having all your stakeholders “sign” the policy or incorporate it into an existing policy manual signed by your CEO will show it has their approval and is important to the organization. Your policy should be just one of many parts of your broader governance approach, whether that’s for generative AI, or even AI or technology governance in general.

This is not intended to be legal advice, and your legal and HR departments should play a lead role in approving and disseminating the policy. But hopefully it provides some pointers for consideration. Much like the corporate social media policies of a decade or more ago, spending time on this now will help mitigate the surprises and evolving risks in the years ahead.

Artificial Intelligence, CIO, IT Leadership, IT Training 

The CIO Digital Enterprise Forum will be held in London on Thursday 11th May at Prospero House, London Bridge. Amit Sen from the United Nations Refugee Agency and Howard Pyle from Experience Futures will host the opening keynote. They will focus on the importance of organizations linking analytics with social impact goals and standards of inclusion.

Only a third of companies currently treat social impact as a core strategy, despite many being active in social responsibility. Understanding the organization’s target audience and defining ethical principles that reflect their needs is crucial when applying generative AI. Organizations need to plan for generalized standards for reporting on which users they’re serving through AI-driven experiences and what the impact is. Howard Pyle shares that “CIOs will need to play a leading role in guiding their organizations toward creating personalized and inclusive experiences that align with their overall KPIs and social impact goals. By developing inclusive product and experience strategies that are tailored to each user’s needs and abilities, organizations can ensure that all stakeholders receive the maximum value.”

It is essential to consider the impact of generative AI on the business and individual audiences while planning for generalized reporting standards. Personalization can reduce acquisition costs and increase revenues and marketing efficiency, but CIOs and IT leaders must focus on aligning social impact and business KPIs, developing a product and experience strategy based on them, and creating custom-tuned experiences. It is critical to keep in mind the need for inclusive and ethical technology, the importance of individualized experiences, and the power of generative AI when used strategically and with clear goals in mind. By doing so, CIOs can help their organizations create experiences that meet the needs of individual users and the broader goals of the business.

The programme continues to include a panel discussion on keeping ahead of your data strategy, featuring Rashad Saab, Founder & CTO of rkbt.ai; Raj Jethwa, CTO, Digiterre; and Caroline Carruthers, Chief Executive, Carruthers & Jackson. The panel discussion will focus on the challenges of reaching data strategy goals and creating a data strategy that meets business needs and practices while allowing for future possibilities.

Another discussion will focus on the human side of cybersecurity in the digital enterprise. The panel will look at human perception of, and social trust in, digital counterparts, and how to ensure cybersecurity alongside the introduction of new emerging tech. The moderator for the panel will be Michael Hill, Editor, CSO, and the panelists will include Jennifer Surujpaul, Head of IT & Digital, The Brit School; Mel Smith, CIO, Buckles Solicitors LLP; and Sue Khan, VP of Privacy and DPO, Flo Health.

In a keynote session, the Green Web Foundation, a Dutch non-profit, will discuss its efforts to increase the internet’s energy efficiency and speed its transition away from fossil fuels. The foundation stewards the largest open dataset that tracks websites running on renewables, with an open tool suite used over 3.5 billion times.

Closing the Forum, Jacqui Taylor will share her insight and expertise on the evolution of the digital enterprise, on finding the golden thread of resilience as tech continues to change, and on how to achieve the NetZero agenda.

Along with the themes of the forum, this event preview was written using ChatGPT, with insight shared by our keynote speaker Howard Pyle. Share your thoughts on using these emerging technologies by registering here to join; the forum is free for qualified attendees, and you can view the full programme here.

CIO


Chatbots have been maturing steadily for years. In 2022, however, they showed that they’re ready to take a giant leap forward.

When ChatGPT was unveiled a few short weeks ago, the tech world was abuzz about it. The New York Times tech columnist Kevin Roose called it “quite simply, the best artificial intelligence chatbot ever released to the general public,” and social media was flooded with examples of its ability to crank out convincingly human-like prose.[1] Some venture capitalists even went so far as to say that its launch may be as earth shattering as the introduction of the iPhone in 2007.[2]

ChatGPT does indeed look like it represents a major step forward for artificial intelligence (AI) technology. But, as many users were quick to discover, it’s still marked by many flaws — some of them serious. Its advent signals not just a watershed moment for AI development, but an urgent call to reckon with a future that’s arriving more quickly than many expected.

Fundamentally, ChatGPT brings a new sense of urgency to the question: How can we develop and use this technology responsibly? Contact centers can’t answer this question on their own, but they do have a specific part to play.

ChatGPT: what’s all the hype about?

Answering that question first requires an understanding of just what ChatGPT is and what it represents. The technology is the brainchild of OpenAI, the San Francisco-based AI company that also released innovative image generator DALL-E 2 earlier this year. It was released to the public on Nov. 30, 2022, and quickly gained steam, reaching 1 million users within five days.

The bot’s capabilities stunned even Elon Musk, who originally co-founded OpenAI with Sam Altman. He echoed the sentiment of many people when he called ChatGPT’s language processing “scary good.”[3]

So, why all the hype? Is ChatGPT really that much better than any chatbot that’s come before? In many ways, it seems the answer is yes.

The bot’s knowledge base and language processing capabilities far outpace other technology on the market. It can churn out quick, essay-length answers to seemingly innumerable queries, covering a vast range of subjects and even answering in varied styles of prose based on user inputs. You can ask it to write a resignation letter in a formal style or craft a quick poem about your pet. It churns out academic essays with ease, and its prose is convincing and, in many cases, accurate. In the weeks after its launch, Twitter was flooded with examples of ChatGPT answering every type of question users could conceive of.

The technology is, as Roose points out, “Smarter. Weirder. More flexible.” It may truly usher in a sea change in conversational AI.[1]

A wolf in sheep’s clothing: the dangers of veiled misinformation 

For all its impressive features, though, ChatGPT still showcases many of the same flaws that have become familiar in AI technology. In such a powerful package, however, these flaws seem more ominous.

Early users reported a host of concerning issues with the technology. For instance, like other chatbots, it quickly learned the biases of its users. Before long, ChatGPT was spouting offensive comments that women in lab coats were probably just janitors, or that only Asian or white men make good scientists. Despite the system’s reported guardrails, users were able to train it to make these types of biased responses fairly quickly.[4]

More concerning, however, are ChatGPT’s human-like qualities, which make its answers all the more convincing. Samantha Delouya, a journalist for Business Insider, asked it to write a story she’d already written — and was shocked by the results.

On the one hand, the resulting piece of “journalism” was remarkably on point and accurate, albeit somewhat predictable. In less than 10 seconds, it produced a 200-word article fairly similar to something Delouya may have written, so much so that she called it “alarmingly convincing.” The catch, however, was that the article contained fake quotes made up by ChatGPT. Delouya spotted them easily, but an unsuspecting reader may not have.[3]

Therein lies the rub with this type of technology. Its mission is to produce content and conversation that’s convincingly human, not necessarily to tell the truth. And that opens up frightening new possibilities for misinformation and — in the hands of nefarious users — more effective disinformation campaigns.

What are the implications, political and otherwise, of a chatbot this powerful? It’s hard to say — and that’s what’s scary. In recent years, we’ve already seen how easily misinformation can spread, not to mention the damage it can do. What happens if a chatbot can mislead more efficiently and convincingly?

AI can’t be left to its own devices: the testing solution

Like many reading the headlines about ChatGPT, contact center executives may be wide-eyed about the possibilities of deploying this advanced level of AI for their chatbot solutions. But they first must grapple with these questions and craft a plan for using this technology responsibly.

Careful use of ChatGPT — or whatever technology comes after it — is not a one-dimensional problem. No single actor can solve it alone, and it ultimately comes down to an array of questions involving not only developers and users but also public policy and governance. Still, all players should seek to do their part, and for contact centers, that means focusing on testing.

The surest pathway to chaos is to simply leave chatbots alone to work out every user question on their own without any human guidance. As we’ve already seen with even the most advanced form of this technology, that doesn’t always end well.

Instead, contact centers deploying increasingly advanced chatbot solutions must commit to regular, automated testing to expose any flaws and issues as they arise and before they snowball into bigger problems. Whether they’re simple customer experience (CX) defects or more dramatic information errors, you need to discover them early in order to correct the problem and retrain your bot.
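That commitment to regular, automated testing can be sketched generically: encode expected conversational behaviour as test cases and run them against the bot on every change. The `answer_query` stand-in and the keyword checks below are assumptions for illustration, not any vendor’s API.

```python
# Generic sketch of automated chatbot regression testing: run a table of
# expected behaviours against the bot and surface any failures early.

def answer_query(query: str) -> str:
    """Stand-in bot: returns canned answers for known intents."""
    if "refund" in query.lower():
        return "You can request a refund within 30 days of purchase."
    return "Sorry, I didn't understand that. Could you rephrase?"

TEST_CASES = [
    ("How do I get a refund?", "30 days"),  # expected keyword in the answer
    ("asdf qwerty", "rephrase"),            # graceful fallback for gibberish
]

failures = [(q, kw) for q, kw in TEST_CASES if kw not in answer_query(q)]
print("failures:", failures)
```

A dedicated testing platform automates the same idea at scale, adding NLP scoring, conversation-flow coverage, and security checks on top of simple keyword assertions.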

Cyara Botium is designed to help contact centers keep chatbots in check. As a comprehensive chatbot testing solution, Botium can perform automated tests for natural language processing (NLP) scores, conversation flows, security issues, and overall performance. It’s not the only component in a complete plan for responsible chatbot use, but it’s a critical one that no contact center can afford to ignore.

Learn more about how Botium’s powerful chatbot testing solutions can help you keep your chatbots in check and reach out today to set up a demo.

[1] Kevin Roose, “The Brilliance and Weirdness of ChatGPT,” The New York Times, 12/5/2022.

[2] CNBC, “Why tech insiders are so excited about ChatGPT, a chatbot that answers questions and writes essays.”

[3] Business Insider, “I asked ChatGPT to do my work and write an Insider article for me. It quickly generated an alarmingly convincing article filled with misinformation.”

[4] Bloomberg, “OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails.”

Artificial Intelligence, Machine Learning

The digital transformation bandwagon is a crowded one, with enterprises of all kinds heeding the call to modernize. The pace has only quickened in a post-pandemic age of enhanced digital collaboration and remote work. Nonetheless, 70% of digital transformation projects fall short of their goals, as organizations struggle to implement complex new technologies across the enterprise.

Fortunately, businesses can leverage AI and automation to better manage the speed, scale, and complexity of the changes that come with digital transformation. In particular, artificial intelligence for IT operations (or AIOps) platforms can be a game changer. AIOps solutions use machine learning to connect and contextualize operational data for decision support or even auto-resolution of issues. This simplifies and streamlines the transformation journey, especially as the enterprise scales up to larger and larger operations.

The benefits of automation and AIOps can only be realized, however, if companies choose solutions that put the power within reach – ones that package up the complexities and make AIOps accessible to users. And even then, teams must decide which business challenges to target with these solutions.  Let’s take a closer look at how to navigate these decisions about the solutions and use cases that can best leverage AI for maximum impact in the digital transformation journey.

Finding the right automation approach

Thousands of organizations in every part of the world see the advantages of AI-driven applications to streamline their IT and business operations. A “machine-first” approach frees staff from large portions of tedious, manual tasks while reducing risk and boosting output.

AIOps for decision support and automated issue resolution in the IT department can further add to the value derived from AI in an organization’s digital transformation.

Yet conversations with customers and prospects invariably touch on a shared complaint: Enterprise leaders know AI is a powerful ally in the digital transformation journey, but the technology can seem overwhelming and takes too long to scope and shop for all the components. They’re looking for vendors to offer easier “on-ramps” to digital transformation. They want SaaS options and quick-install packages that feature just the functions that address a specific need or use case, so they can leap into their intelligent automation journey.

Ultimately, a highly effective approach for leveraging AI in digital transformation involves so-called Out of the Box (OOTB) solutions that package up the complexity as pre-built knowledge that’s tailored for specific kinds of use cases that matter most to the organization.

Choosing the right use cases

Digital transformations are paradoxical in that you’re modernizing the whole organization over the course of time, but it’s impossible to “boil the ocean” and do it all at once. That’s why it’s so important to choose highly strategic and impactful use cases to get the ball rolling, demonstrate early wins, and then expand more broadly across the enterprise over time. 

OOTB solutions can help pare down the complexity. But it is just as important to choose the right use cases to apply such solutions. Even companies that know automation and AIOps are necessary to optimize and scale their systems can struggle with exactly where to apply them in the enterprise to reap the most value.

By way of a cheat sheet, here are four key areas that are ripe for transformation with AI, and where the value of AIOps solutions will shine through most clearly in the form of operational and revenue gains:

IT incident and event management – A robust AIOps solution can prevent outages and enhance event governance via predictive intelligence and autonomous event management. Once implemented, such a solution can render a 360° view of all alerts across all enterprise technology stacks, leveraging machine learning to remove unwanted event noise and autonomously resolve business-critical issues.

Business health monitoring – A proactive AI-driven monitoring solution can manage the health of critical processes and business transactions, such as for the retail industry, for enhanced business continuity and revenue assurance. AI-powered diagnosis techniques can continually check the health of retail stores and e-commerce sites and automatically diagnose and resolve unhealthy components.

Business SLA predictions – AI can be used to predict delays in business processes, give ahead-of-time notifications, and provide recommendations to prevent outages and Service Level Agreement (SLA) violations. Such a platform can be configured for automated monitoring, with timely anomaly detection and alerts across the entire workload ecosystem.

IDoc management for SAP – Intermediate Document (IDoc) management breakdowns can slow progress in transferring data from SAP to other systems and vice versa. An AI platform with intelligent automation techniques can identify, prioritize, and then autonomously resolve issues across the entire IDoc landscape, thereby minimizing risk, optimizing supply chain performance, and enhancing business continuity.
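The anomaly detection behind these monitoring scenarios can be illustrated with a simple baseline check. Real AIOps platforms use learned models, but a z-score over recent readings shows the basic idea; the metric values below are made up for the example.

```python
# Illustrative sketch of AIOps-style anomaly detection: flag a metric
# reading that deviates sharply from its recent baseline.

from statistics import mean, stdev

def is_anomalous(history: list, reading: float, threshold: float = 3.0) -> bool:
    """Flag the reading if it lies more than `threshold` std devs from the mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(reading - mu) / sigma > threshold

latency_ms = [102, 98, 101, 99, 103, 100, 97, 101]  # recent baseline window
print(is_anomalous(latency_ms, 250))  # sudden spike
print(is_anomalous(latency_ms, 104))  # within normal range
```

In a production platform, a flag like this would feed event correlation and automated remediation rather than a simple print.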

Conclusion

Organizations pursuing digital transformation are increasingly benefiting from enhanced AI-driven capabilities like AIOps that bring new levels of IT and business operations agility to advanced, multi-cloud environments.  As these options become more widespread, enterprises at all stages of the digital journey are learning the basic formula for maximizing the return on these technology investments: They’re solving the complexity problem with SaaS-based, pre-packaged solutions; and they’re becoming more strategic in selecting use cases ideally suited for AIOps and the power of machine learning.

To get up and running fast at any stage of your digital journey, visit Digitate to learn more.


As transformational IT has increasingly become a business imperative, implementation partners have been looking to strengthen their value proposition for their customers. To differentiate themselves from transactional service providers, the more proactive partners are evolving their offerings and approaches, thereby becoming more strategic than they had been in the past.

While IT leaders can maximize the opportunity arising out of this shift by leveraging the partners’ strategies and advanced capabilities, it’s important for them to maintain focus on the risks. Here’s a look at how implementation providers are evolving and how CIOs should approach partnering with them for mutual success.

Shifting to a transformation approach

There is a perceptible change in the way implementation partners now approach their clients compared to the past, and it is all about becoming a strategic partner for transformational change.

“A partner now enters an account with a broader area of engagement in mind. The discussions may be around a specific project with a CIO, such as implementing a typical solution like Oracle or SAP ERP, but the partner’s core agenda is to bring about an extensive and comprehensive transformation of the client’s IT infrastructure,” says Harnath Babu, CIO at KPMG.

“As the project progresses, the partner discusses the CIO’s pain points and what could alleviate them. This could invariably lead to the partner’s scope getting expanded into, but not limited to, managing emerging technologies, enhancing cost and operational efficiencies, bringing about automation, application development, or improving the system of records,” he says. “Implementation partners are clearly moving from the earlier point approach to a transformation approach.”

Sharing an example of how this unfolded at KPMG, Babu says, "We engaged with a system integrator to help us with L1/L2 support. In a short time, we scaled it to L3. We found that we could also leverage the partner for managing our infrastructure. Next, we asked the partner to help us with POD development, as it was a big challenge to find skilled resources. So, what started as an L1/L2 service engagement eventually led to infrastructure management and resource augmentation."

POD, or product-oriented delivery, is a software development model that entails building small, self-sufficient, cross-functional teams that take care of specific requirements or tasks for a project.

Takeaways for CIOs from this trend: Leveraging one partner instead of many frees up CIOs and their teams from more boilerplate deployments, allowing them to focus on what is core to the business. “An implementation partner looks at the total value generated from an account. Therefore, if a CIO gives value to the partner, the latter will reciprocate. This will give CIOs the confidence of having a strong partner behind them. There can then be a project director to manage the project on a day-to-day basis and the CIO can intervene only when there is budget or strategy involved,” says Babu.

 

Building Centers of Excellence 

With the aim of adding value to their customers, implementation partners are increasingly realizing the importance of building technological expertise.

“To keep pace with the market and stay relevant, implementation partners are building on human capital and expertise. For instance, most partners lacked competency in cloud as there wasn’t much requirement related to it in the past. However, as cloud is gaining a strong traction, they have also upped the ante,” says Subramanya C, global CTO at business process management company Sagility (formerly HGS Healthcare). 

So, when Subramanya decided to move the company’s SAP, SharePoint portal, intranet, and other applications to the cloud, he roped in a partner that had a cloud Center of Excellence and 12 to 15 subject matter experts (SMEs) on the technology.

“Partners with such capabilities were not seen in the past,” he says. “More than 100 servers had to be migrated in a few weeks. Immense planning, resources, and mitigation of risk were involved in the project. However, the partner’s strong technical expertise, which formed the basis of the center of excellence, made sure that the project was completed smoothly and as per the scheduled plan.”

Takeaways for CIOs from this trend: Although implementation partners can provide deeper expertise than they could in the past, IT leaders should not be complacent when enlisting it. “For complex projects, like ours, strong governance is required from the enterprise technology leader’s end,” Subramanya says. “IT leaders can outsource a task or an activity to a partner and their SME, but they can’t outsource their responsibilities. Therefore, we ensured a strong governance framework was in place while implementing this project. We also had our own SME working in close collaboration with the partner’s experts.”

 

Collaborating with other partners

The evolution of technology, driven by modernization of applications and services, is catalyzing collaboration among system integrators.

As Archie Jackson, head of special initiatives, IT, and security at digital transformation company Incedo says, “I have seen system integrators coming together to offer solutions, a trend that wasn’t visible in the past. Today, products don’t work in silos. One product has multiple linkages with other products, and it orchestrates and expands into other areas. For instance, a security solution today is not limited only to the network. It is connected to endpoints and applications, too. Therefore, one project could spill over to another. A partner, however, may not have the expertise or the bandwidth to execute everything, which leads to collaboration with other partners.”

Incedo was in talks with a partner some time back about implementing managed links for connectivity. The end-to-end managed service would have offered remote connectivity to access the corporate network from anywhere in the world.

“During the conversations, the partner suggested he could bring another implementation partner to enhance the cybersecurity of the links. It came across as a logical fit because the links had to be secure, but I had not seen a partner collaborating with another one like this in the past,” says Jackson.

Takeaways for CIOs from this trend: One implementation partner bringing in another may help a CIO, but it could also increase the cost of the project. “This is a good option only if a CIO wants to build capability. The primary partner will build his margin into the project for which he is getting the second partner, thereby increasing the cost for the CIO. If CIOs have the capacity to architect a solution more efficiently, they should do so in-house,” says Jackson.


Developing and deploying artificial intelligence (AI) solutions efficiently and successfully in businesses requires a new set of skills, for both individuals and organizations.  In a recent study, over half of companies that have successfully deployed AI applications have embraced an enterprise-wide strategy that is inclusive, open, and pragmatic, using homegrown AI models 90% of the time. They have spent time understanding and documenting consistent and effective ways of rolling out projects and processes to drive efficiency. 

AI is Booming. Wanted: More People & Best Practices.

AI in business is advancing at a brisk pace. The market is forecast to grow at a Compound Annual Growth Rate (CAGR) of 36.2% between 2022 and 2027, when it will reach $407 billion, according to a recent study by MarketsandMarkets. But the report cautioned: “The limited number of AI technology experts is the key restraint to the market.” The same shortage of skilled personnel, along with a lack of established processes for deploying AI, was also cited in a recent global study of 2,000 businesses by IDC.
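As a quick arithmetic check on those figures, compounding backward from the 2027 projection gives the market size the 36.2% CAGR implies for 2022:

```python
# Implied 2022 base for a market that reaches $407B in 2027
# while growing at a 36.2% CAGR (five compounding years).
cagr = 0.362
years = 5  # 2022 -> 2027
size_2027 = 407.0  # $ billions

size_2022 = size_2027 / (1 + cagr) ** years
print(f"Implied 2022 market size: ${size_2022:.1f}B")  # → Implied 2022 market size: $86.8B
```

A base of roughly $87 billion in 2022 is consistent with the cited growth rate and the 2027 endpoint.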

Thirty-one percent of companies surveyed were actively using AI while the others were still in prototyping, experimentation, or evaluation stages. Significantly, companies using AI – considered early adopters – have integrated their AI platforms with the rest of their data center and cloud environments instead of running AI in silos used by separate groups. They have defined holistic, organization-wide AI strategies or visions along with clearly defined policies, guidelines, and processes. 

Another characteristic of these early AI adopters is that they use internal staff instead of external vendors to deploy AI applications. They also prioritize training line of business managers to use outcomes from algorithms and to tap these stakeholders to help guide new projects. This connection between IT and business leaders results in a high degree of support from C-level executives on down. 

AI Environments are Complex

To provide the massive compute power and data storage resources required for AI applications, businesses typically use systems with graphical processing units (GPUs) that accelerate applications running on the CPU by offloading some of the compute-intensive and time-consuming portions of the code. High-speed storage, parallel processing, in-memory computing, and containerized applications running in clusters are other techniques that are part of AI solution environments. 
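The offload pattern described above can be sketched generically. The toy example below splits compute-heavy work across parallel workers; it uses Python threads only to show the structure, since real speedups come from GPU kernels or process pools, and all names here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_kernel(chunk):
    # Stand-in for a compute-intensive, time-consuming portion of the
    # code -- the part a GPU would accelerate.
    return sum(x * x for x in chunk)

def offloaded_sum_of_squares(data, workers=4):
    # Split the data into chunks and offload each chunk to a worker,
    # mirroring how a host CPU dispatches parallel work to a device.
    chunks = [data[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(heavy_kernel, chunks))

print(offloaded_sum_of_squares(list(range(1000))))  # → 332833500
```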

Working with such complex technology requires the right training and experience. According to Datamation, there are 55,000 jobs currently listed under “artificial intelligence” on LinkedIn. Many, if not most, of these jobs (e.g., AI engineer, data scientist, AI/ML architect, AIOps/MLOps engineer) require years of education and advanced degrees. Yet the IDC study makes clear how much more effective AI projects are when these personnel design models and collaborate with stakeholders in-house.

Scaling an AI Environment for Critical Healthcare Diagnoses

A leading pathology diagnostics firm in the U.S. that works with top biopharmaceutical and medical organizations around the world has developed its own best practices for designing and deploying AI applications. Project teams at the firm include IT professionals, machine learning engineers, and data scientists who specialize in the biomedical industry. Line-of-business managers also help guide the development of algorithms, 90% of which are built on in-house models.

Many team members work primarily alone, then collaborate to deliver complex projects. With fluid, continually evolving project requirements, the company uses the Agile software development process that anticipates the need for flexibility in a finished product. To ensure that the technology they use (including GPU-based compute with high-speed and object-based storage and file-based access to Kubernetes clusters) is kept up-to-date and future proofed, the firm relies on close partnerships with vendors to review product roadmaps and anticipate and incorporate new features.

Agile development requires a pragmatic approach. IT managers at the firm insist that developers evaluate their work critically in the design phase and be willing to start from scratch if an approach isn’t working. In IDC’s survey, the companies actively using AI take an average of three months to build machine learning and deep learning models, whereas AI laggards commit a fraction of that time. Deployment in AI early-adopter companies like the pathology diagnostics firm, however, is accelerated because developers have already done their homework, obtaining buy-in on models and validation from data scientists on technology purchases.

Summary of Best Practices for Effective Use of AI 

As more C-level and line of business executives recognize and prioritize the use of AI as an effective tool to enhance competitiveness and drive efficiencies, the barriers to adoption have also become clear. Companies achieving success with AI have invested in people with skills and expertise. They have established vendor partnerships to future-proof solutions by staying up-to-date on evolving product roadmaps. They have fostered collaborative and highly flexible development environments that can alter course based on changing business dynamics. Using mostly homegrown models, they are committed to taking the time required to get the design of algorithms right before moving to well-defined established deployment processes.  Finally, AI development teams mentor business stakeholders, working with them to uncover and apply actionable insights from data analytics. 

Download the new IDC report to learn more about what is separating AI leaders and laggards. 

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.


The role of the CMO is more invested in technology than ever, and CMOs have no choice but to engage with the CIO and align business and tech objectives. Key to the success of the CMO–CIO partnership is how both roles collaborate around data.

Related reading: How the CMO can leverage the data of retail networks to deliver better outcomes for their organisations.

On the surface, there is a perceived tension between CMOs, CIOs, the rest of the executive team, and data. CMOs need to look for ways to leverage customer data to deliver superior and highly tailored experiences to customers. CIOs need to ensure that the business’ use of data is compliant, secure, and done according to best practices. They need to assure the board that the risk from data is minimised.

“Understanding that global data policies and regulation are ever-evolving, CIOs must plan around regulation in effect today, and also what could be adopted in the future,” Melanie Hoptman, Chief Operating Officer, APAC, at LiveRamp said. “By taking a forward-thinking approach to privacy and security, CIOs will set a sustainable and durable foundation for data ethics practices at their organization.”

In Europe, for example – often considered the leader in global compliance trends – achieving full compliance with GDPR alone costs more than US$1 million on average, and in terms of penalties, companies were fined more than €1 billion in 2021 alone.

However, as data enablement platform LiveRamp has noted, CIOs are well across these requirements and are now increasingly in a position to focus on enablement for people like the CMO. “The good news for many CIOs is that they’ve already laid the groundwork through investments in data governance and migration to the cloud,” LiveRamp noted in a recent report.

“While the passage and enforcement of GDPR, CCPA in California, and other data regulations may have once been seen as seismic events affecting brands and publishers alike, they’ve actually been a forcing function for companies to organize their data, remove data silos, and clearly document what they have access to and how it can be used.”

Gaining Executive Buy-In

Successfully capitalising on the data opportunity requires a whole-of-business approach. However, LiveRamp notes that there are three particular executives that CIOs and CMOs should collaborate most closely with so they can drive buy-in across the organisation.

CEO & CFO – “Bring your stakeholders along your journey, proving your strategy’s value by being transparent on the metrics you’re tracking and how you’re faring. In doing this, you’ll soon find partners within the organization who are willing to lean in and help.”

Chief Data Ethics Officer or General Counsel – “Working directly with these executives will also give you a sense of the types of leading-edge technologies that they are willing to explore.”

Chief Analytics Officer – “The right technical data management tools can reduce that time significantly for marketing, data, and analytics teams, accelerating insights that can spark innovation.”

The goal – at least in the initial instance – will be to reduce the siloing effect across organisations. As TechTarget has noted, data silos create a number of headaches for organisations and often make maintaining compliance more difficult:

Incomplete data sets, which hinder efforts to build data warehouses and data lakes for business intelligence and analytics applications.

Inconsistent data, which can result in inaccuracies in interacting with customers, and affect the internal operational use of data.

Less collaboration, because when different teams have access to different data sets, the opportunities to work together and share data between departments are reduced.

Data security risks, because the decentralised nature of where siloed data is stored can expose the organisation to increased security and privacy risks.
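The first two problems show up the moment two silos are joined, as this toy merge of invented departmental records illustrates:

```python
# Minimal sketch of de-siloing: merge customer records from two
# departmental "silos" keyed on a shared ID, surfacing the incomplete
# entries that siloed data tends to produce. All data is invented.
crm_silo = {
    "C001": {"name": "Ada Li", "email": "ada@example.com"},
    "C002": {"name": "Sam Roy", "email": "sam@example.com"},
}
support_silo = {
    "C002": {"name": "Sam Roy", "open_tickets": 3},
    "C003": {"name": "Kim Wu", "open_tickets": 1},
}

merged, incomplete = {}, []
for cid in sorted(set(crm_silo) | set(support_silo)):
    record = {**crm_silo.get(cid, {}), **support_silo.get(cid, {})}
    merged[cid] = record
    if cid not in crm_silo or cid not in support_silo:
        incomplete.append(cid)  # present in only one silo

print(incomplete)  # → ['C001', 'C003']
```

In practice the same join, run across real CRM and support systems, is what surfaces the incomplete and inconsistent records a de-siloing effort has to reconcile.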

In this context, there is a natural alignment across the organisation to address the challenges of siloing. The CMO wants to free the data up for better collaboration and customer interactions, while recognising the need for the CIO and others to ensure the organisation adheres to best practices for the increasingly strict compliance environment.

However, the challenge is that one line of business will not always want data accessible to another line of business – and indeed that in itself can become a compliance risk. Marketing should not have access to elements of the finance team’s data, for example. The CIO should work with their counterparts like the CMO and others to ensure teams have access only to the data necessary to drive their specific business outcomes.
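That least-privilege rule reduces to a simple policy check; the team and dataset names below are hypothetical:

```python
# Hypothetical policy table: each line of business sees only the data
# needed to drive its own outcomes.
ACCESS_POLICY = {
    "marketing": {"customer_profiles", "campaign_metrics"},
    "finance": {"invoices", "payroll"},
}

def can_access(team, dataset):
    """Least-privilege check: grant access only if the dataset appears
    in the requesting team's allowed set; unknown teams get nothing."""
    return dataset in ACCESS_POLICY.get(team, set())

print(can_access("marketing", "campaign_metrics"))  # → True
print(can_access("marketing", "payroll"))           # → False
```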

“Businesses must think of the CIO and CMO as equal champions whose partnership makes innovation possible,” Hoptman said. “When the CIO unites siloed customer service data with CRM data, marketers can create new opportunities for upsells, data monetization and better personalization, or even leverage purchase data to send targeted offers to customers in-store or at the register. Either use case shifts the perception of marketing from cost-center to revenue-driver, while increasing ROI for tech investments. This is a win-win for CIOs and CMOs.”

Rather than allow that to undermine efforts to embrace cross-business collaboration and de-siloing, LiveRamp instead recommends privacy-enhancing technologies (PETs). “PETs represent an ever-growing group of cryptographic and encryption protocols—math, basically—that offer businesses the ability to accelerate safe data collaboration, build customer intelligence, and maximize the value of data without relinquishing control or compromising consumer privacy,” Hoptman said.
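A deliberately simplified illustration of the PET idea (not LiveRamp’s actual protocol): two parties can match records on salted hashes of identifiers instead of exchanging raw emails. Production systems use stronger cryptography, such as private set intersection, but the shape is similar:

```python
import hashlib

def pseudonymize(identifier, shared_salt):
    # Normalize, then hash with a salt both parties agreed on, so equal
    # identifiers match without either side seeing the other's raw list.
    return hashlib.sha256((shared_salt + identifier.lower()).encode()).hexdigest()

SALT = "2024-campaign"  # hypothetical shared secret

ours = {pseudonymize(e, SALT) for e in ["ada@example.com", "sam@example.com"]}
theirs = {pseudonymize(e, SALT) for e in ["SAM@example.com", "kim@example.com"]}

overlap = ours & theirs  # matched customers; non-matches stay opaque
print(len(overlap))  # → 1
```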

The LiveRamp platform provides that to organisations, giving them the ability to collect first-party data as a single source, leverage third-party data in conjunction with first- and second-party data securely, and collaborate both internally and externally by building secure data partnerships with sources (silos) that would have been otherwise inaccessible.

In delivering this capability to their organizations, CIOs can position themselves at the centre of enablement, giving CMOs access to the critical data that they need for marketing efforts, and articulating the value of doing so to more risk-averse executives, all while maintaining data best practices.

“With additional data regulation undoubtedly in our future, customer intelligence will only become more challenging, increasing the need for enterprises to unite their internal data and build the infrastructure to support safe, secure collaboration with trusted external partners,” Hoptman said. “The CIOs who plan for this future now will be the ones poised to reap greater returns on their current investments.”

Read the full report here.


Many people associate high-performance computing (HPC), also known as supercomputing, with far-reaching government-funded research or consortia-led efforts to map the human genome or to pursue the latest cancer cure.

But HPC can also be tapped to advance more traditional business outcomes — from fraud detection and intelligent operations to helping advance digital transformation. The challenge: making complex compute-intensive technology accessible for mainstream use.

As companies digitally transform and steer toward becoming data-driven businesses, there is a need for increased computing horsepower to manage and extract business intelligence and drive data-intensive workloads at scale. The rise of artificial intelligence (AI), machine learning (ML), and real-time analytics applications, often deployed at the edge, can utilize HPC resources to unlock insights from data and efficiently run increasingly large and more complex models and simulations.

The convergence of HPC with AI-based analytics is impacting nearly every industry and across a wide range of applications, including space exploration, drug discovery, financial modeling, automotive design, and systems engineering.

“HPC is becoming a utility in our lives — people aren’t thinking about what it takes to design this tire, validate a chip design, parse and analyze customer preferences, do risk management, or build a 3D structure of the COVID-19 virus,” notes Max Alt, distinguished technologist and director of Hybrid HPC at HPE. “HPC is everywhere, but you don’t think about it, because it’s hidden at the core.”

HPC’s scalable architecture is particularly well suited for AI applications, given the nature of computation required and the unpredictable growth of data associated with these workflows. HPC’s use of graphics-processing-unit (GPU) parallel processing power — coupled with its simultaneous processing of compute, storage, interconnects, and software — raises the bar on AI efficiencies. At the same time, such applications and workflows can operate and scale more readily.

Even with widespread usage, there is more opportunity to leverage HPC for better and faster outcomes and insights. HPC architecture – typically clusters of CPUs and GPUs working in parallel, connected to a high-speed network and data storage system – is expensive, requiring a significant capital investment. HPC workloads are typically associated with vast data sets, which means that public cloud can be an expensive option given latency and performance requirements. In addition, data security and data gravity concerns often rule out public cloud.

Another major barrier to more widespread deployment: a lack of in-house specialized expertise and talent. HPC infrastructure is far more complex than traditional IT infrastructure, requiring specialized skills for managing, scheduling, and monitoring workloads. “You have tightly coupled computing with HPC, so all of the servers need to be well synchronized and performing operations in parallel together,” Alt explains. “With HPC, everything needs to be in sync, and if one node goes down, it can fail a large, expensive job. So you need to make sure there is support for fault tolerance.”
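That fault-tolerance requirement can be illustrated with a minimal retry wrapper; the simulated node failures below are contrived for the example:

```python
def run_with_retries(task, max_attempts=3):
    """Re-run a failed unit of work instead of failing the whole job,
    the kind of fault tolerance a tightly synchronized HPC run needs
    when one node drops out."""
    for attempt in range(1, max_attempts + 1):
        try:
            return task()
        except RuntimeError:
            if attempt == max_attempts:
                raise  # out of retries: surface the failure

# Simulate a node that fails twice, then succeeds.
attempts = {"count": 0}
def flaky_node_task():
    attempts["count"] += 1
    if attempts["count"] < 3:
        raise RuntimeError("node dropped out")
    return "partial result"

print(run_with_retries(flaky_node_task))  # → partial result
```

Real schedulers go further, checkpointing state and migrating work to healthy nodes, but the retry loop is the core of the idea.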

HPE GreenLake for HPC Is a Game Changer

An as-a-service approach can address many of these challenges and unlock the power of HPC for digital transformation. HPE GreenLake for HPC enables companies to unleash the power of HPC without having to make big up-front investments on their own. This as-a-service-based delivery model enables enterprises to pay for HPC resources based on the capacity they use. At the same time, it provides access to third-party experts who can manage and maintain the environment in a company-owned data center or colocation facility while freeing up internal IT departments.

“The trend of consuming what used to be a boutique computing environment now as-a-service is growing exponentially,” Alt says.

HPE GreenLake for HPC bundles the core components of an HPC solution (high-speed storage, parallel file systems, low-latency interconnect, and high-bandwidth networking) in an integrated software stack that can be assembled to meet an organization’s specific workload needs.

As part of the HPE GreenLake edge-to-cloud platform, HPE GreenLake for HPC gives organizations access to turnkey and easily scalable HPC capabilities through a cloud service consumption model that’s available on-premises. The HPE GreenLake platform experience provides transparency for HPC usage and costs and delivers self-service capabilities; users pay only for the HPC resources they consume, and built-in buffer capacity allows for scalability, including unexpected spikes in demand. HPE experts also manage the HPC environment, freeing up IT resources and delivering access to specialized performance tuning, capacity planning, and life cycle management skills.
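The consumption model can be sketched with a toy billing function; the rates and hours below are illustrative, not HPE pricing:

```python
def monthly_hpc_cost(committed_hours, used_hours, rate_per_hour):
    # Pay for the committed baseline, plus any buffer capacity actually
    # consumed above it -- a simplified pay-per-use billing sketch.
    billable = max(committed_hours, used_hours)
    return billable * rate_per_hour

# A quiet month bills only the committed baseline...
print(monthly_hpc_cost(1000, 800, 2.50))   # → 2500.0
# ...while a demand spike also bills the extra buffer hours consumed.
print(monthly_hpc_cost(1000, 1400, 2.50))  # → 3500.0
```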

To meet the needs of the most demanding compute and data-intensive workloads, including AI and ML initiatives, HPE has turbocharged HPE GreenLake for HPC with purpose-built HPC capabilities. Among the more notable features are expanded GPU capabilities, including NVIDIA Tensor Core models; support for high-performance HPE Parallel File System Storage; multicloud connector APIs; and HPE Slingshot, a high-performance Ethernet fabric designed to meet the needs of data-intensive AI workloads. HPE also released lower entry points to HPC to make the capabilities more accessible for customers looking to test and scale workloads.

As organizations pursue HPC capabilities, they should consider the following:

Stop thinking of HPC in terms of a specialized boutique technology; think of it more as a common utility used to drive business outcomes.

Look for HPC options that are supported by a rich ecosystem of complementary tools and services to drive better results and deliver customer excellence.

Evaluate the HPE GreenLake for HPC model. Organizations can dial capabilities up and down, depending on need, while simplifying access and lowering costs.

HPC horsepower is critical, as data-intensive workloads, including AI, take center stage. An as-a-service model democratizes what’s traditionally been out of reach for most, delivering an accessible path to HPC while accelerating data-first business.

For more information, visit https://www.hpe.com/us/en/greenlake/high-performance-compute.html
