With the AI hype cycle and subsequent backlash both in full swing, IT leaders find themselves at a tenuous inflection point regarding use of artificial intelligence in the enterprise.

Following stern warnings from Elon Musk and from revered AI pioneer Geoffrey Hinton, who recently left Google to speak more freely about AI’s risks and to call for a pause, IT leaders are reaching out to institutions, consulting firms, and attorneys across the globe for advice on the path forward.

“The recent cautionary remarks of tech CEOs such as Elon Musk about the potential dangers of artificial intelligence demonstrate that we are not doing enough to mitigate the consequences of our innovation,” says Atti Riazi, SVP and CIO of Hearst. “It is our duty as innovators to innovate responsibly and to understand the implications of technology on human life, society, and culture.”

That sentiment is echoed by many IT leaders, who believe innovation in a free market society is inevitable and should be encouraged, especially in this era of digital transformation — but only with the right rules and regulations in place to prevent corporate catastrophe or worse.

“I agree a pause may be appropriate for some industries or certain high-stake use cases but in many other situations we should be pushing ahead and exploring at speed what opportunities these tools provide,” says Bob McCowan, CIO at Regeneron Pharmaceuticals.

“Many board members are questioning if these technologies should be adopted or are they going to create too many risks?” McCowan adds. “I see it as both. Ignore it or shut it down and you will be missing out on significant opportunity, but giving unfettered access [to employees] without controls in place could also put your organization at risk.”

While AI tools have been in use for years, the recent release of ChatGPT to the masses has stirred up considerably more controversy, giving many CIOs — and their boards — pause on how to proceed. Some CIOs take the risks to industry — and humanity — very seriously.

“Every day, I worry about this more,” says Steve Randich, CIO of the Financial Industry Regulatory Authority (FINRA), the self-regulatory organization that oversees US broker-dealers under SEC supervision.

Randich points to a chart he saw recently suggesting that the ‘mental’ capacity of an AI program has just exceeded that of a mouse and in 10 years will exceed that of all humankind. “Consider me concerned, especially if the AI programs can be influenced by bad actors and are able to hack, such as at nuclear codes,” he says.

George Westerman, a senior lecturer at MIT Sloan School of Management, says executives at enterprises across the globe are reaching out to MIT Sloan and other institutions for advice on the ethics, risks, and potential liabilities of using generative AI. Still, Westerman believes most CIOs have already engaged their top executives and boards of directors, and that generative AI itself imposes no new legal liabilities beyond those corporations and their executives already face today.

“I would expect that just like all other officers of companies that there’s [legal] coverage there for your official duties,” Westerman says of CIOs’ personal legal exposure to AI fallout, noting the exception of using the technology inappropriately for personal gain.

Playing catchup on generative AI

Meanwhile, the release of ChatGPT has rattled regulatory oversight efforts. The EU had planned to enact its AI Act last month but opted to stall after ChatGPT’s release, amid concerns the policies would be outdated before going into effect. And as the European Commission and its related governing bodies work to sort out the implications of generative AI, company executives in Europe and the US are taking the warning bells seriously.

“As AI becomes a key part of our landscape and narrow AI turns into general AI — who becomes liable? The heads of technology, the inanimate machine models? The human interveners ratifying/changing training models? The technology is moving fast, but the controls and ethics around it are not,” says Adriana Karaboutis, group chief information and digital officer at National Grid, which is based in the UK but operates in the northeast US as well.

“There is a catchup game here. To this end and in the meantime managing AI in the enterprise lies with CxOs that oversee corporate and organizational risk. CTO/CIO/CDO/CISOs are no longer the owners of information risk” given the rise of AI, the CIDO maintains. “IT relies on the CEO and all CxOs, which means corporate culture and awareness to the huge benefits of AI as well as the risks must be owned.”

Stockholm-based telecom Ericsson sees huge upside in generative AI and is investing in creating multiple generative AI models, including large language models, says Rickard Wieselfors, vice president and head of enterprise automation and AI at Ericsson.

“There is a sound self-criticism within the AI industry and we are taking responsible AI very seriously,” he says. “There are multiple questions without answer in terms of intellectual property rights to text or source code used in the training. Furthermore, data leakage in querying the models, bias, factual mistakes, lack of completeness, granularity or lack of model accuracy certainly limits what you can use the models for.

“With great capability comes great responsibility and we support and participate in the current spirit of self-criticism and philosophical reflections on what AI could bring to the world,” Wieselfors says.

Some CIOs, such as Choice Hotels’ Brian Kirkland, are monitoring the technology but do not think generative AI is fully ready for commercial use.

“I do believe it is important for industry to make sure that they are aware of the risk, reward, and impact of using generative AI technologies, like ChatGPT. There are risks to data ownership and generated content that must be understood and managed to avoid negative impacts to the company,” Kirkland says. “At the same time, there is a lot of upside and opportunity to consider. The upside will be significant when there is an ability to safely and securely merge a private data set with the public data in those systems.

“There is going to be a dramatic change in how AI and machine learning enable business value through everything from generated AI content to complex and meaningful business analytics and decision making,” the Choice Hotels CIO says.

No one is suggesting a total hold on such a powerful and life-changing technology.

In a recent Gartner poll of more than 2,500 executives, 45% indicated that attention around ChatGPT has caused them to increase their AI investments. More than 70% maintain their enterprise is currently exploring generative AI and 19% have pilots or production use under way, with projects from companies such as Unilever and CarMax already showing promise.

At the MIT Sloan CIO conference starting May 15, Irving Wladawsky-Berger will host a panel on the potential risks and rewards of entering generative AI waters. Recently, he hosted a pre-conference discussion on the technology.

“We’re all excited about generative AI today,” said the former longtime IBM researcher and current affiliate researcher at MIT Sloan, citing major advances in genomics expected due to AI.

But Wladawsky-Berger noted that the due diligence required of those who adopt the technology will not be a simple task. “It just takes so much work,” he said. “[We must] figure out what works, what is safe, and what trials to do. That’s the part that takes time.”

Another CIO on the panel, Wafaa Mamilli, chief digital and technology officer at Zoetis, said generative AI is giving pharmaceutical companies increased confidence in their ability to cure chronic human illnesses.

“Because of the advances of generative AI technologies and computing power on genetic research, there are now trials in the US and outside of the US, Japan, and Europe that are targeting to cure diabetes,” she said.

Guardrails and guidelines: Generative AI essentials

Wall Street has more than taken notice of the industry’s swift embrace of generative AI. According to IDC, 2022 was a record-breaking year for investments in generative AI startups, with equity funding exceeding $2.6 billion.

“Whether it is content creation with Jasper.ai, image creation with Midjourney, or text processing with Azure OpenAI services, there is a generative AI foundation model to boost various aspects of your business,” according to one of several recent IDC reports on generative AI.

And CIOs already have the means of putting guardrails in place to securely move forward with generative AI pilots, Regeneron’s McCowan notes.

“It’s of critical importance that you have policy and guidelines to manage access and behaviors of those that plan to use the technologies and to remind your staff to protect intellectual property, PII [Personally Identifiable Information], as well as reiterating that what gets shared may become public,” McCowan says.

“Get your innovators and your lawyers together to find a risk-based model of using these tools and be clear what data you may expose, and what rights you have to the output from these solutions,” he says. “Start using the technologies with less risky use cases and learn from each iteration. Get started or you will lose out.”

Forrester Research analyst David Truog says AI leaders are right to put a warning label on generative AI before enterprises begin pilots and move the technology into production. But he too is confident it can be done.

“I don’t think stopping or pausing AI is the right path,” Truog says. “The more pragmatic and constructive path is to be judicious in selecting use cases where specialized AIs can help, embed thoughtful guardrails, and have an intentional air-gapping strategy. That would be a starting point.”

One DevOps IT chief at a consulting firm points to several ways CIOs may mitigate risk when using generative AI, including thinking like a venture capitalist; clearly understanding the technology’s value; determining ethical and legal considerations in advance of testing; experimenting, but not rushing into investments; and considering the implications from the customer point of view.

“Smart CIOs will form oversight committees or partner with outside consultants who can guide the organization through the implementation and help set up guidelines to promote responsible use,” says Rod Cope, CTO at Minneapolis-based Perforce.  “While investing in AI provides tremendous value for the enterprise, implementing it into your tech stack requires thoughtful consideration to protect you, your organization, and your customers.”

While the rise of generative AI will certainly impact human jobs, some IT leaders, such as Ed Fox, CTO at managed services provider MetTel, believe the fallout may be exaggerated, although everyone will likely have to adapt or fall behind.

“Some people will lose jobs during this awakening of generative AI but not to the extent some are forecasting,” Fox says. “Those of us that don’t embrace the real-time encyclopedia will be passed by.”

Still, if one theme is certain, it’s that for most CIOs the best path forward is to proceed with caution, and to get involved.

CIOs must strike a balance between “strict regulations that stifle innovation and guidelines to ensure that AI is developed and used responsibly,” says Tom Richer, general manager of Wipro’s Google Business Group, noting he is collaborating with his alma mater, Cornell, and its AI Initiative, to proceed prudently.

“It’s vital for CIOs and IT executives to be aware of the potential risks and benefits of generative AI and to work with experts in the field to develop responsible strategies for its use,” Richer says. “This collaboration needs to involve universities, big tech, think tanks, and government research centers to develop best practices and guidelines for the development and deployment of AI technologies.”


As IT organizations attempt wide-scale cloud adoption, the importance of common best practices across applications and products is growing, sparking an exciting new conversation about platform teams and related disciplines like platform engineering.

The problem statement driving the investment in platform teams is clear: developing, operating, and optimizing a modern application is becoming too complex for many product delivery teams to solve independently. In response to this friction, leading organizations are taking a new approach, allowing workstreams following a given application pattern—perhaps a Java microservice or Kubeflow data pipeline—to use a repeatable, secure set of starters, UX, and automation—a “golden path.”

The game-changing insight of golden paths is an application-centric and workstream-focused approach. From IDE to production, golden paths align development teams with an organization’s cross-functional best practices. The correct way to work becomes the easy way. As a result, many platform teams deliver a more than 50% improvement in developer onboarding speed, eliminating friction and uncertainty for all the cloud applications they enable.

Repeatability is fundamental to realizing value from the cloud at scale. Before adopting a golden path culture, autonomous application teams might share version control systems or continuous integration tools but deploy and update similar applications in needlessly variable ways. Golden paths propose a new level of standardization, implementing internal developer platforms (IDPs) and shared workflows, which guide an application from its first commit to Day 2 operations.

Effectively diffusing cloud best practices requires both a technological and cultural evolution. Many cloud application teams are entirely disconnected; other groups may have no repeatable access to their hard-won cloud and application expertise. The shared patterns and workflows on internal developer platforms become a nexus of repeatable standards, continuously improved by updates from across the organization.

Spotify experienced a lack of repeatability and responded with a significant update to its cloud strategy. They shifted their culture to center on a collaborative set of best practices and workflows as a baseline for every cloud application: golden paths enabled by their platform team. The results proved so decisive they wondered how they ever operated without them, recalling: “There was a time when engineers at Spotify couldn’t imagine life with golden paths; now we can’t imagine life without them.”

Building a Cross-Functional Golden Path

Platform teams building golden paths codify and automate application patterns across architectural, operational, and security domains. Composed of experts in cloud architecture, DevOps, security, and automation, platform teams work closely with application development teams to enable an end-to-end experience.

Many application teams are building similar cloud applications but need a cultural prompt and shared platform capabilities to collaborate on architectural best practices. A platform team’s first task is to enable simplified creation and deployment of an organization’s most common application types, preferably with a simplified application deployment manifest.

Common and simplified deployment manifests unlock a powerful new way of thinking of cloud applications as fleets vs. one-off projects. Applications with a tolerance for horizontal scaling employ an easy-to-automate and consistent approach to secure application networking and autoscaling. New security affordances and controls may also be cumbersome to implement without a shared and developer-friendly platform but become the easy-to-consume defaults with one.
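As a sketch of what such a manifest-driven approach could look like, the hypothetical Python example below merges a developer’s deliberately minimal manifest over platform-owned defaults. All field names and default values here are illustrative assumptions, not the schema of any particular internal developer platform.

```python
# Hypothetical golden-path defaults owned by the platform team.
# Field names and values are illustrative only.
GOLDEN_PATH_DEFAULTS = {
    "replicas": 2,
    "autoscale": {"min": 2, "max": 10, "cpu_target": 0.7},
    "mtls": True,                  # secure-by-default service traffic
    "build": "platform-managed",   # platform owns container builds
}

def expand_manifest(manifest):
    """Merge a developer's minimal manifest over platform defaults."""
    required = {"name", "pattern"}
    missing = required - manifest.keys()
    if missing:
        raise ValueError(f"manifest missing required fields: {sorted(missing)}")
    # Developer-specified fields win; everything else comes from the platform.
    return {**GOLDEN_PATH_DEFAULTS, **manifest}

# A developer declares only what is unique to their workstream:
app = expand_manifest({"name": "orders-api", "pattern": "java-microservice"})
print(app["mtls"])  # → True
```

Treating every application this way is what makes fleet-wide changes cheap: updating a single platform default propagates to every app on the path at its next deployment.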

Security by default often becomes a top focus for golden paths. Platform teams should use common automation for container builds, removing developer toil and enabling ongoing Day 2 updates. Security posture and vulnerabilities should be tracked holistically and exposed through an intuitive UX by the platform team. More secure infrastructure configurations, such as enabling mTLS for application traffic by default, can also be automated by the platform approach.

If this new approach sounds right for your cloud strategy, here are some aspirational outcomes and metrics proven to help guide and motivate a platform team investment:

1: Ease developer onboarding: When a new engineer joins the team, it’s a living test of your cloud productivity experience. Spotify recorded a rough halving of total time to value for new hires with golden paths. What’s easier to learn the first time is often far easier to repeat reliably, making this a great first metric for a platform team to focus on.

2: Reduce inefficient manual tickets: While the promise of cloud-based delivery is on-demand automation through declarative APIs, the reality is often many layers of approval tickets and forms. Golden paths bake in policies and best practices by default. Writing code, not tickets, is a rallying cry for successful golden paths in enterprises. Reducing the number of tickets to initiate and deploy an app by more than 50 percent is a common starting goal.

3: Automate Day 2 operations: Day 2 operations on applications and the platform infrastructure become more automated and frequent. Successful platform teams often rebuild their entire estate, including security updates, weekly. Golden paths also leverage more secure-by-default automation whenever possible, relieving the burden on developers and improving security posture. 
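The weekly-rebuild practice in point 3 can be made measurable. As a small illustrative sketch (the fleet data and function here are invented, not from any real platform), a platform team might track when each app’s container image was last rebuilt and flag any that fall outside the rebuild window:

```python
from datetime import date, timedelta

def stale_images(fleet, today, max_age_days=7):
    """Return apps whose container image was last rebuilt before the window.

    Supports the weekly-rebuild goal: anything older than max_age_days
    has missed at least one scheduled security rebuild.
    """
    cutoff = today - timedelta(days=max_age_days)
    return sorted(app for app, last_built in fleet.items() if last_built < cutoff)

# Invented example fleet: app name -> date of last automated image rebuild.
fleet = {
    "orders-api": date(2023, 5, 1),
    "billing": date(2023, 4, 20),
}

print(stale_images(fleet, today=date(2023, 5, 5)))  # → ['billing']
```

Surfacing a list like this in the platform’s UX turns the rebuild cadence into something the team can see and act on, rather than an aspiration.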

This unique combination of outcomes across developer velocity, Day 2 operations, and security is powering an exciting industry shift in cloud consumption. As Gartner analyst Mark O’Neill recently observed: “Looking at our inquiry trends on platform engineering… What a rocket ship of a topic.”

Expect golden paths and platform engineering to remain one of the most important trends as our industry grows from early cloud platform explorations to repeatable execution at scale.


Andy Callow was appointed Group CDIO at the University Hospitals of Northamptonshire in December 2020, and has spent the last three years unifying the Kettering and Northampton hospitals through one digital strategy, taking strides to adopt cloud, build an RPA Centre of Excellence, and roll out AI proofs of concept.

Then came the call that CEO Simon Weldon was going on sick leave, and the trust was looking in-house for his replacement.

“It wasn’t part of my trajectory but I agreed to do it out of loyalty to him,” says Callow. And although helming an unfamiliar leadership role was a departure for him, unique opportunities presented themselves, such as fresh intellectual stimulation, addressing white privilege, and plans to stabilise the hospitals through winter.

Gaining a new perspective

Having started the interim CEO role in September, and appointed an interim successor to his CDIO role, Callow admits he’s still getting to grips with the new structure. In his first few weeks, he spent time preparing the organisation for a challenging winter, opening internal conferences, addressing Black History Month, and hearing from staff around the wards. Knowing his role is temporary, his focus is on not letting anything slip through the cracks as he adjusts to working at a system level, with less hands-on, day-to-day involvement and more emphasis on being a facilitator for outcomes.

It’s still early days and Callow is unsure if he’d pursue a CEO role in future, but he’s enthused about a new perspective.

“The technical challenge of my substantive role as CDIO provides a lot of intellectual stimulation, but I’ve been pleasantly surprised to find similar stimulation in the new challenges I find on my plate now,” he says. “What I didn’t appreciate is that I’d also get that buzz from some really tricky problems you’re trying to deal with, which are wider organisational issues. I’ve been involved in conversations about the money for a while, but now I’ve got accountability for that to happen, rather than being part of the solution.”

This leap into the unknown can be unsettling, even for the most experienced leaders. Callow casts his mind back to earlier in his career when a series of promotions pushed him further into leadership roles and away from his love of coding. A “grieving process” ensued, as he moved away from a skillset he had built his reputation on, but he believes it won’t happen this time around.

“I’ve not felt that I’m losing all the techie stuff,” says Callow, formerly the head of technology delivery at NHS UK and programme director at NHS Digital. “I’ve thought that this is actually helping people do their best work in a different guise.”

A CIO’s leadership principles

Callow attributes his transparent and reflective leadership style to workplace experience and his own development, and cites Daniel Pink’s Drive as an influencing factor in letting teams become autonomous and take ownership, continuously improve, and buy into the mission of the NHS.

Callow also believes in the value of reflecting on past achievements in order to tackle future obstacles and land key messages in meetings. The weekly notes he writes have become a routine that helps crystallise successes and challenges, and also prompts new conversations with colleagues and third parties, helping to make sense of the more troubling weeks.

“I look back [at my notes] and say, ‘There was that situation’ or, ‘That conversation was fantastic’. Or, actually, ‘There’s a situation I need to put more effort into progressing’, or, ‘There’s a person I need to give more time to.’ If no one else read them, I’d still do them because it’s a discipline to look back on and think about what you’re doing.”

Callow keeps what he calls shadow notes of circumstances he’d rather not make public, and attributes this activity to the importance of being open, a key NHS principle that’s pinned to the wall of his office in Kettering, in North Northamptonshire. He takes a similar approach to Twitter, saying the social media platform doesn’t have to be about mudslinging, but can be an opportunity to forge connections. He recalls a time he tweeted about the possibility of machine learning being used to improve bed management, an idea that would eventually spark online conversations, NHSX funding, and a proof-of-concept on bed scheduling with AI start-up Faculty.

“That has now gone into a product that’s available, and that code is open-sourced on GitHub,” he says.

A CIO’s guide to addressing white privilege

Ranked in the top five of this year’s CIO UK 100, Callow drew high praise from the judges for a proactive approach to tackling diversity, equity and inclusion.

Last autumn, he bought 10 copies of White Fragility by Robin DiAngelo and invited the 300 staff across the digital directorate of both hospitals to borrow them. He also bought each member of the board a copy of the book. Later that year, Callow hosted discussions about tackling diversity and discrimination with the directorate and trust board, leading to a joint board development session on how to address racism.

The University Hospitals of Northamptonshire went on to launch a new leadership programme for Black and Asian staff in the spring, while Callow has since recruited up to 25 board members to volunteer their time for career coaching sessions with these same professionals. Callow himself offers two hours a month.

“A lot of colleagues don’t have access to somebody who can have those kind of conversations, particularly if you’ve come from overseas and you haven’t built up a network,” says Callow, who is executive sponsor of both Trust REACH (Race, Equality and Cultural Heritage) staff networks. But he admits that addressing such issues can only begin with leaders getting uncomfortable, and tackling subjects that may be beyond their expertise.

“Reading White Fragility was a pivotal moment,” he says. “It made me feel more equipped to have some of these conversations.”

2023 is about stability and the next job

Callow says he is most proud of a project to automate the clinical coding of endoscopy patient episodes, whereby the Trust has used AI to automatically code 87% of monthly endoscopy activity, with an average accuracy of primary diagnosis and procedure assignment of 94%, approximately the same as a human coder.

He acknowledges there are challenges ahead for Dan Howard, his successor in the CDIO post, from integrating digital strategies to rolling out electronic patient records, but as interim CEO, Callow is looking at the bigger picture of improving clinical collaboration, managing rising costs, and supporting staff through a difficult winter.

“We need to strip out some of those things that are no longer needed [from Covid],” he says. “And that’s hard when you’ve still got your emergency department full, ambulances queuing, and wards where people wait a long time to be discharged.”

Callow believes CIOs are as well equipped to take on the CEO role as other board members, and admits he would be more interested in a deputy CEO position than he was six months ago. Yet a return to familiar territory beckons.

From mid-January, Callow will become CDIO at the University Hospitals of Nottingham, a move influenced in part by a new challenge as well as a shorter commute. “There’s a lot I can contribute to their digital progression and I like the established links with the university that I can be part of,” he said. “The focus for the new year will be on getting up to speed with the NUH CDIO role and strong delivery.”


Modernization journeys are complex and typically highly custom, dependent on an enterprise’s core business challenges and overall competitive goals. Yet one way to simplify transformation and accelerate the process is using an industry-specific approach. Any vertical modernization approach should balance in-depth, vertical sector expertise with a solutions-based methodology that caters to specific business needs.

As part of their partnership, IBM and Amazon Web Services (AWS) are pursuing a variety of industry-specific blueprints and solutions designed to help customers modernize apps for a hybrid IT environment, which includes AWS Cloud.

The solutions, some in pilot stage and others in early development, span a variety of core industries, including manufacturing, financial services, healthcare, and transportation.

These industry solutions bring to bear both IBM’s and AWS’s deep-seated expertise in the specific security, interoperability, and data governance requirements affecting vertical sectors. Such an approach ensures that app modernization efforts meet relevant certification requirements and solve business-specific problems.

“A general modernization path brings the technical assets together whereas an industry-focused initiative is more of a problem-solving, solutions-oriented design,” says Praveena Varadarajan, modernization offering leader and strategist for IBM’s Hybrid Cloud Migration Group.

With the right industry solution and implementation partner in place, organizations can steer towards effective modernization. Along with the proper technologies and tools, the right consulting partners can help accelerate transformation, specifically if they can together demonstrate deep and diverse expertise, modernization patterns, and industry-specific blueprints.

Consider the critical area of security controls, for example. Companies across industries have core requirements related to data security and governance controls, yet different industries have uniquely focused considerations.

In healthcare, securing personal health data is key, governed by national standards laid out in the Health Insurance Portability and Accountability Act (HIPAA). The financial services industry must adhere to a different set of security requirements, from protecting Personally Identifiable Information (PII) to safeguards that meet Payment Card Industry (PCI) compliance, meant to protect cardholders’ information.

“Industry verticals have different compliance and regulatory issues that have to be taken into consideration when doing any type of refactoring or app modernization,” notes Hilton Howard, global migration and modernization lead at AWS. “Healthcare and life sciences companies have different governance and compliance concerns along with issues on how data is managed compared to technology companies or those in energy and financial services.”

AWS/IBM’s Industry Edge

IBM and AWS have put several mechanisms and programs in place to codify their rich vertical industry expertise and make it easily accessible to customers in critical sectors. IBM and AWS experts collaborate to identify potential joint offerings and solution blueprints designed to provide a modernization roadmap that is a level up from a general technical guide. Much of the guidance and many of the deliverables are codified from joint initiatives conducted with large customers, to provide an accelerated problem-solving path to a wider audience. The deliverables could be reference architectures or an industry-specific proof of concept; the goal is to offer institutional knowledge and near-turnkey solutions meant to streamline modernization and accelerate time-to-value.

“Sometimes it’s best practices or a solution design or some combination,” Varadarajan says. “It’s about bringing internal or external tools to bear to solve specific business issues.”

In addition, AWS and IBM are working on a program aimed at large-scale transformation and modernization efforts. It will help enterprise customers adopt new digital operating models structurally and prescriptively, and transform with AWS to deliver strategic business outcomes. The program builds a meaningful partnership between AWS, IBM, and the client, and delivers an integrated engagement underpinned by a tailored playbook that addresses the client’s prioritized initiatives, enabled by AWS, while developing sustainable organizational capabilities for continuous transformation.

“Applying an industry lens keeps solutions grounded to the guiding principles of the business,” Varadarajan says. “The goal of transformation is not just to become more modern, but to change the way companies adapt to the new norms of running a business in the digital world.”

United’s Revenue Management Modernization Takes Flight

United Airlines took to the cloud to modernize its Revenue Management system to reduce costs, but also to land on a platform that didn’t limit its ability to apply modern revenue management processes. The airline also sought to provide analysts with finer data access controls so they could be more analytical and creative when driving revenue management decisions.

Working with AWS and IBM, United created and scaled a data warehouse using Amazon Redshift, a managed service that handles terabytes of data with ease. Critical success factors included embracing DevOps practices, an emphasis on disaster recovery and system stability, and continuous review of design and migration decisions. Next stop: migrating a complex forecasting module, planned for later in 2022.

To learn more, visit https://www.ibm.com/consulting/aws


The idea of having “the right tool for the job” applies across domains. More than two thousand years ago, the Greek mathematician Archimedes is reported to have said, “Give me a place to stand, and the right lever, and I could lift the earth.”

Fast forward to today’s cloud-centric environment, and application developers are nodding in enthusiastic agreement with Archimedes. And while things may be abundantly more complicated than in 250 BC, Google Cloud partner Aiven has made it its job to streamline some of the complications that can inhibit cloud-centric application development.

“Our mission here at Aiven is quite simple,” says Troy Sellers, presales Solution Architect at Aiven. “It’s to make the developer’s life easier. And when you’re a company that is looking at driving innovations or transformations into the cloud, for example, they need the right tools to support that activity.”

Aiven provides open source solutions that stand up cloud-based data infrastructure, freeing developers to focus on high-value projects and, in the process, accelerating cloud migration and modernization.

“Having the right tool is just as important as having the ideas, because it allows the people with the ideas to get on and focus on the things that are important,” says Sellers in Google Cloud’s podcast series “The Principles of a Cloud Data Strategy.”

Dealing with complexity

As digital transformation evolves into broader modernization efforts, organizations face a common milestone — they need to expand their cloud-based services, but they lack the staff and skills to do so at scale.

It’s not just a resource question, though. Sellers says, “The challenges today, they’re worlds apart from the days gone by where I used to be building applications myself. I remember, we used to go and talk to customers, when big data was like a gigabyte.”

Today’s modern data and application development stacks contain many moving parts and different tiers of logic — not to mention the sheer volume of data in play, the need to be aware of regulatory compliance and security issues, and the pressure to keep up with today’s expectations of continuous integration and continuous delivery (CI/CD) for applications.

“There’s this expectation on developers that releases go from, rather than once every three months, to once every month, to every lunchtime at 11 o’clock,” Sellers says. “Time to market is just getting faster and faster and faster. And you definitely are in a race with your competitors to do that.”

“This is probably one of the main reasons that developers turn to companies like Google Cloud and Aiven for fully managed services, because it just takes a lot of that headache out of managing that. And they can get to market really, really fast.”

The Open Source Advantage

Aiven has leaned into open source for cloud data infrastructure since its inception in 2016. The advantages: cost savings, agility, no vendor lock-in, productivity, and efficiency.

“We manage database services for our customers, database services in the cloud, open source technologies such as Postgres, MySQL, and Apache Kafka,” says Sellers. “We help customers adopt those services so they can focus on what they do best, which is building technology for their customers.”
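Managed services like these are typically reached through an ordinary connection URI, so adopting them rarely requires new tooling. The sketch below parses a hypothetical Aiven-style PostgreSQL service URI; the host, port, credentials, and database name are made up for illustration.

```python
from urllib.parse import urlparse

# A hypothetical Aiven-style PostgreSQL service URI. Every component
# here (host, port, user, database) is illustrative only.
service_uri = (
    "postgres://avnadmin:secret@pg-demo.example.aivencloud.com:12691"
    "/defaultdb?sslmode=require"
)

parts = urlparse(service_uri)
print(parts.hostname)          # pg-demo.example.aivencloud.com
print(parts.port)              # 12691
print(parts.path.lstrip("/"))  # defaultdb

# With a PostgreSQL driver such as psycopg2 installed, connecting
# is a single call against the same URI:
#   conn = psycopg2.connect(service_uri)
```

Because the service is just standard Postgres behind a URI, existing drivers, ORMs, and migration scripts work unchanged.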

Check out “The Principles of a Cloud Data Strategy” podcast series from Google Cloud on Google Podcasts, Apple Podcasts, Spotify, or wherever you get your podcasts.

Google Cloud Platform

One type of infrastructure that has gained popularity is hyperconverged infrastructure (HCI). Interest in HCI and other hybrid technologies such as Azure Arc is growing as enterprise organizations embrace hybrid and multi-cloud environments as part of their digital transformation initiatives. Survey data from IDC shows broad HCI adoption among enterprises of all sizes, with more than 80% of the organizations surveyed planning to move toward HCI for their core infrastructure going forward.

“Hyperconverged infrastructure has matured considerably in the past decade, giving enterprises a chance to simplify the way they deploy, manage, and maintain IT infrastructure,” Carol Sliwa, Research Director with IDC’s Infrastructure Platforms and Technologies Group, said on a recent webinar sponsored by Microsoft and Intel.

“Enterprises need to simplify deployment and management to stay agile to gain greater business benefit from the data they’re collecting,” Sliwa said. “They also need infrastructure that can deploy flexibly and unify management across hybrid cloud environments. Software-defined HCI is well suited to meet their hybrid cloud needs.”

IDC research shows that most enterprises currently use HCI in core data centers and co-location sites, often for mission-critical workloads. Sliwa also expects usage to grow in edge locations as enterprises modernize their IT infrastructure to simplify deployment, management, and maintenance of new IoT, analytics, and business applications.

Sliwa was joined on the webinar by speakers from Microsoft and Intel, who discussed the benefits of HCI for managing and optimizing both hybrid/multi-cloud and edge computing environments.

Jeff Woolsey, Principal Program Manager for Azure Edge & Platform at Microsoft, explained how Microsoft’s Azure Stack HCI and Azure Arc enable consistent cloud management across cloud and on-premises environments.

“Azure Stack HCI provides central monitoring and comprehensive configuration management, built into the box, so that your cloud and on-premises HCI infrastructure are the same,” Woolsey said. “That ultimately means lower OPEX because instead of training and retraining on bespoke solutions, you’re using and managing the same solution across cloud and on-prem.”

Azure Arc provides a bridge for the Azure ecosystem of services and applications to run on a variety of hardware and IoT devices across Azure, multi-cloud, data centers, and edge environments, Woolsey said. The service provides a consistent and flexible development, operations, and security model for both new and existing applications, allowing customers “to innovate anywhere,” he added.

Christine McMonigal, Director of Hyperconverged Marketing at Intel, explained how the Intel-Microsoft partnership has resulted in consistent, secure, end-to-end infrastructure that delivers a number of price/performance benefits to customers.

“We see how customers are demanding a more scalable and flexible compute infrastructure to support their increasing and changing workload demands,” said McMonigal. “Our Intel Select Solutions for Microsoft Azure Stack HCI have optimized configurations for the edge and for the data center. These reduce your time to evaluate, select, and purchase, streamlining the time to deploy new infrastructure.”

Watch the full webinar here: 

For more information on how HCI use is growing for mission-critical workloads, read the IDC Spotlight paper.

Edge Computing, Hybrid Cloud

Since its beginnings as a German engineering company founded in the wake of the Second World War, Sick AG has evolved into an enterprise that lives by the motto of “sensor intelligence,” specializing in factory, logistics, and process automation.

But in recent years, the global manufacturer of sensors and sensor solutions has had to adjust its supply chain system to match its goals of blurring the lines between the physical and digital worlds.

The lack of integration between communication systems – not all of them digital – was slowing decision-making and incurring unnecessary costs.

“If a big company ordered some material, you’d have every person looking underneath his or her table to see if that material was there,” recalled Roland Avar, Sick AG’s Head of Product Management | Localization (RTLS).

What Sick AG needed was a real-time, global locating system – an integrated solution that would close the information gap.

“You waste a lot of time searching for material or an asset. Localization data enables agile planning of production and logistical processes for better delivery quality and reliability,” Avar pointed out. “With a digitized system, we would know where everything was at any time.”

Not a lottery game

Sick AG’s product portfolio includes RFID (radio frequency identification) readers, light grids, vision sensors, opto-electronic protective devices, bar code scanners, and analyzers for gas and liquid, along with gas flow measuring tools.

With more than 10,000 employees worldwide, the company serves as a guidepost for the Fourth Industrial Revolution, or Industry 4.0, a time of rapid change to industries, technology, and societal patterns due to smart automation.

As technology expands, so have the company’s processes in logistics and production. But as Avar noted, “Complexity definitely increases when you have multiple systems talking to each other. That’s why we needed to have the systems connected. Without that interconnectivity, it’s like playing the lottery, where you’re guessing numbers and hope that you are right.”

No one involved in the implementation of the new solution expected the process to be easy. “Let me say it like this,” Avar remembered. “We had some really challenging discussions.”

The name says it all

Even the dialogue about the platform’s name was spirited. In the end, developers settled on smaRTLog – for smart, real-time logistics.

The effort was a collaboration between Sick AG and multinational enterprise resource planning (ERP) software leader SAP.

smaRTLog connects real-time, location-based information from the Tag-LOC System with SAP business transactions. The Tag-LOC System comprises the hardware, antennas, and tags for indoor tracking, along with Asset Analytics, a technology-independent intelligent hub that interacts with SAP technologies for further analysis of all inventory and logistics: capturing, processing, and storing the information needed to provide complete transparency and eliminate the gray areas.

When items were delayed, all affected workers would receive alerts, along with updates on arrival times. Built-in mechanisms would spot defective material and eliminate it from the production process. Incidents previously characterized as “unexpected” were now telegraphed in advance, increasing predictability, along with the quality of warehouse management.
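The alerting behavior described above can be reduced to a simple rule over tag sightings. The sketch below is a hypothetical illustration of that idea, not Sick's or SAP's implementation: it keeps the latest sighting per asset and flags anything not seen in the expected zone by a deadline.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical RTLS event: a tag reader reports an asset in a zone
# at a given time. Names and zones are illustrative only.
@dataclass
class Sighting:
    asset_id: str
    zone: str
    seen_at: datetime

def delayed_assets(sightings, expected_zone, deadline):
    """Return IDs of assets whose latest sighting is not in the
    expected zone by the deadline -- candidates for a delay alert."""
    latest = {}
    for s in sightings:
        if s.asset_id not in latest or s.seen_at > latest[s.asset_id].seen_at:
            latest[s.asset_id] = s
    return sorted(
        a for a, s in latest.items()
        if s.zone != expected_zone or s.seen_at > deadline
    )

now = datetime(2022, 3, 1, 12, 0)
events = [
    Sighting("pallet-1", "assembly", now - timedelta(minutes=5)),
    Sighting("pallet-2", "receiving", now - timedelta(minutes=2)),
]
print(delayed_assets(events, "assembly", now))  # ['pallet-2']
```

In a production system, the flagged IDs would feed the worker notifications and arrival-time updates the article describes.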

Growing within the system

In less than a year after its July 2021 deployment, the smaRTLog solution had transformed manufacturing, automation, and logistics for Sick AG and its clients.

The average search time for missing materials decreased from 45 to 15 minutes, while the period between a requisition and delivery was reduced by 20%.

Companies that have adopted the platform report a more engaged workforce. “People aren’t worrying about, ‘Did I scan this? Did I not scan this? Did I do a good job? Did I do a bad job?’” Avar said. “And they have more time to concentrate on more crucial jobs, more satisfying jobs. Their confidence level grew like the system itself.”

Earlier this year, Sick AG was recognized for the creation of smaRTLog by being named a 2022 finalist at the SAP Innovation Awards, a yearly event honoring organizations that have used SAP technologies to improve both business and society.

Avar believes that, with infrastructure considerations now diminished, the solution gives companies the flexibility to grow in ways that hadn’t existed before. The platform “can change or support their digital strategies,” he said, “and it’s a blueprint that other companies can follow.”

To learn more about Sick’s creation of smaRTLog, read their Innovation Awards pitch deck.

Collaboration Software

In the coming years, NASA’s James Webb telescope will peer toward the edge of the observable universe, allowing astronomers to search for the very earliest stars and galaxies, formed more than 13 billion years ago.

That’s quite a contrast to today’s network operations visibility, which can sometimes feel like the lens cap has been left on the telescope. Explosive growth in new technology adoption, mounting complexity, and heavy reliance on internet and cloud networks have created unprecedented blind spots in how we monitor network delivery.

These visibility gaps obscure knowledge about critical applications and service performance. They can also hide security threats, making them more difficult to detect. Ultimately, these gaps can impact customer experience, revenue growth, and brand perception.

A global survey by Dimensional Research finds that 81% of organizations have network blind spots. More than 60% of larger companies state they have 50,000 or more network devices, and 73% indicate it is growing increasingly difficult to manage their networks. According to the study, removing network blind spots and increasing monitoring coverage would improve security, reliability, and performance.

Dimensional Research also reports that current monitoring and operations solutions are ill-equipped for the tasks at hand and unable to support a massive influx of new technology over the next two years, leading to slower adoption and deployment with increased business risk.

Without solutions that deliver expanded visibility into remote locations, un-managed networks, and traffic patterns, IT can become overly dependent on end-users to report service issues after these problems have impacted performance. And no organization wants that to happen.

Performance insights across the edge infrastructure and beyond 

The massive adoption of SaaS and cloud apps has made the job of IT even harder when it comes to understanding the performance of business functions. With no visibility into the internet that delivers these apps to users, IT is forced to resort to status pages and support tickets to determine whether an outage affects users.

Now is the time to rethink network operations and evolve traditional NetOps into Experience-Driven NetOps. That means extending visibility beyond the edge of the enterprise network to internet monitoring and bringing modern capabilities like end-user experience monitoring, active testing of network delivery, and network path tracing into the network operations center. Only by being equipped with such capabilities can organizations ensure networks are experience-proven and network operations teams are experience-driven. As a result, they gain credibility and build confidence among business users while delivering hybrid work and cloud transformations.
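One building block of active testing is simple to state: send synthetic probes along a path, collect round-trip times, and flag the path when tail latency drifts past a baseline. The sketch below illustrates that idea; the percentile method, tolerance factor, and sample data are assumptions for illustration, not any vendor's actual algorithm.

```python
# Hypothetical sketch of an active-testing check: compute p95
# round-trip time from probe samples and compare it to a baseline.

def p95(samples_ms):
    """Approximate 95th-percentile latency via nearest-rank."""
    ordered = sorted(samples_ms)
    idx = max(0, round(0.95 * len(ordered)) - 1)
    return ordered[idx]

def degraded(samples_ms, baseline_ms, tolerance=1.5):
    """Flag a path whose p95 latency exceeds baseline by the tolerance factor."""
    return p95(samples_ms) > baseline_ms * tolerance

# Illustrative probe samples (milliseconds) for two network paths.
healthy = [20, 22, 21, 23, 25, 24, 22, 21, 20, 26]
slow    = [20, 22, 21, 90, 95, 24, 22, 88, 20, 92]

print(degraded(healthy, baseline_ms=25))  # False
print(degraded(slow, baseline_ms=25))     # True
```

The point is that the check runs proactively, before a user files a ticket, which is exactly the dependence on end-user reports the article warns against.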

Take the real-world example of a major oil and gas services company. With most employees set to work from home at the outset of the pandemic, the organization needed to scale up its WAN infrastructure from 10,000 to 60,000 users in just a few weeks. The challenge was to see into VPN gateways, ISP links, and internet router performance to manage this increase in use. By standardizing on a modern network monitoring platform, the company gained unified performance and capacity analytics that enabled the right upgrade decisions to increase the number of remote workers sixfold.

You can learn more about how to tackle the challenges of network visibility in this new eBook, Guide To Visibility Anywhere. Read now and discover how organizations can create network visibility anywhere.