Organizations have been transitioning away from legacy, monolithic platforms as these decades-old IT systems hamper manageability, flexibility, and agility with their tightly coupled components. CIOs have shifted toward building their own web application platforms from a set of best-in-class tools for more flexibility, customization, and agile DevOps. This choice, however, isn’t right in all circumstances. In fact, it could be locking you into rigid choices, just like a monolithic platform.

Gartner warns that building your own platform is complex, time consuming, and may not save you money. Independently developing, testing, deploying, and scaling your infrastructure requires expertise, agility, and a shift in team responsibilities. One proven way to ensure a robust, flexible, and streamlined solution is to invest in a standardized front-end platform you can build on. Here’s why.

Building distracts from your core business

Companies (e.g., ecommerce businesses) opting to build their own platform will ultimately find themselves focused on the platform instead of their core business—selling their product. Platform development includes design, coding, testing, securing, and deploying. No platform is a fire-and-forget type of affair.

What’s also overlooked is managing the platform’s non-functional requirements (NFRs), such as maintainability, reliability, and visibility. Developing a custom platform requires top engineering talent, and that talent typically prefers to create rather than maintain, which makes it difficult to retain. While you can foster employee loyalty by investing in your people, it’s never as predictable as paying a fee for an always-available, all-in-one solution.

Platforms offer predictable total cost of ownership

Large IT projects are hard to execute, particularly when in-house staff is pulled in multiple directions and distracted by other priorities. This can be costly for organizations: A recent study found that 25% to 40% of IT projects exceed their budgets or schedules by more than 50%.

Modern platforms, like Edgio’s, are built to unify application tools to lower the total cost of ownership, increase efficiency, and reduce errors. A comprehensive, streamlined solution spares your team from overwork while still shipping updates on time and under budget.

In-house innovation can lead to lock-in and employee frustration

A Freshworks survey revealed that nine out of 10 employees are frustrated with their workplace technology, and the majority will consider finding a new employer if they are not provided the tools, technology, and information they need to do their jobs.

Custom platforms are usually cobbled together with different tools from multiple vendors, making them difficult to use. The more customized the in-house platform, the more entrenched the company becomes in it. This limits the ability to adopt new tools, techniques, and technologies to innovate. It’s much like a vendor lock-in with a monolithic platform, but one that was built inside the company.

This, in turn, can cause slower workflows and growing frustration. Over 5,000 DevOps professionals shared details about their processes, and 69% reported wanting more consolidation due to hidden costs, insufficient agility, and the time maintenance takes away from managing security and compliance.

Don’t lose your employees and operational efficiency to ineffective and inefficient tools and workflows.

The multi-billion dollar aggregate investment

Custom platforms are often poorly documented and maintained, and increasingly difficult to use, which lengthens time to market. That is unforgiving in today’s economy. In fact, McKinsey found organizations with higher developer velocity outperform competitors in the market by up to five times.

A standardized front-end platform that facilitates continuous integration and continuous deployment (CI/CD), for example through serverless functions, benefits from the aggregate investment of every company using it. That combined investment will always exceed what you could put into your own platform, and in-house tooling will never scale the same way.

To build or not to build?

In today’s rapidly evolving software development landscape, the investment in a robust platform provides a more cost-effective and streamlined solution. It enables companies to focus on their core business objectives and reduce the burden of developing and maintaining customized platforms that limit their ability to innovate.

Companies need to be strategic in their tool choices and recognize the importance of investing in a reliable front-end platform for their web applications, one that facilitates CI/CD and allows them to build with flexibility.

Edgio operates a globally scaled edge CDN network with a vertically integrated frontend platform for web apps and APIs. Click here to learn more.


By Bryan Kirschner, Vice President, Strategy at DataStax

Imagine getting a recommendation for the perfect “rainy Sunday playlist” midway through your third Zoom meeting on Monday.

Or receiving a text about a like-for-like substitute for a product that was out of stock at your preferred e-commerce site, 10 minutes after you’d already paid a premium for it on another.

Or arriving late for lunch with a long-time friend and being notified that “to have arrived earlier, you should have avoided the freeway.”

We all expect apps to be both “smart” and “fast.” We can probably all call to mind some that do both so well that they delight us. We can also probably agree that failures like those above are a recipe for brand damage and customer frustration—if not white-hot rage.

We’re at a critical juncture for how every organization calibrates its definition of “fast” and “smart” when it comes to apps—which brings significant implications for its technology architecture.

It’s now critical to ensure that all of an enterprise’s real-time apps will be artificial-intelligence capable, while every AI app is capable of real-time learning.

“Fast enough” isn’t fast enough anymore

First: Meeting customer expectations for what “fast enough” means has already become table stakes. By 2018, for example, the BBC knew that for every additional second a web page took to load, 10% of users would leave, and the media company was already building its technical strategy and implementation accordingly. Today, Google considers load time such an important part of a positive experience that it factors page speed into search rankings, making “the speed you need” a moving target set as much by your competitors as by you.

The bar will keep rising, and your organization needs to embrace that.

Dumb apps = broken apps

Second: AI has gotten real, and we’re in the thick of competition to deploy use cases that create leverage or drive growth. Today’s winning chatbots satisfy customers. Today’s winning recommendation systems deliver revenue uplift. The steady march toward every app doing some data-driven work on behalf of the customer in the very moment that it matters most—whether that’s a spot-on “next best action” recommendation or a delivery time guarantee—isn’t going to stop.

Your organization needs to embrace the idea that a “dumb app” is synonymous with a “broken app.”

We can already see this pattern emerging: In a 2022 survey of more than 500 US organizations, 96% of those who currently have AI or ML in wide deployment expect all or most of their applications to be real-time within three years.

Beyond the batch job

The third point is less obvious—but no less important. There’s a key difference between applications that serve “smarts” in real time and those capable of “getting smarter” in real time. The former rely on batch processing to train machine learning models and generate features (measurable properties of a phenomenon). These apps accept some temporal gap between what’s happening in the moment and the data driving an app’s AI.

If you’re predicting the future position of tectonic plates or glaciers, a gap of even a few months might not matter. But what if you are predicting “time to curb?”

Uber doesn’t rely solely on what old data predicts traffic “ought to be” when you order a ride: it processes real-time traffic data to deliver bang-on promises you can count on. Netflix uses session data to customize the artwork you see in real time.

When the bits and atoms that drive your business are moving quickly, going beyond the batch job to make applications smarter becomes critical. And this is why yesterday’s AI and ML architectures won’t be fit for purpose tomorrow: The inevitable trend is for more things to move more quickly.

Instacart offers an example: the scope and scale of e-commerce and the digital interconnectedness of supply chains are creating a world in which predictions about item availability based on historical data can be unreliable. Today, Instacart apps can get smarter about real-time availability using a unique data asset: the previous 15 minutes of shopper activity.
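To make the batch-versus-real-time distinction concrete, here is a minimal sketch of the kind of feature the Instacart example describes: a per-item “found rate” computed over a sliding 15-minute window of shopper events. The event shape, names, and window logic are illustrative assumptions, not Instacart’s actual implementation.

```python
from collections import deque
from dataclasses import dataclass
from time import time

# Illustrative sketch only: a sliding 15-minute window over shopper events.
# The event shape and names are hypothetical, not Instacart's implementation.

WINDOW_SECONDS = 15 * 60


@dataclass
class ShopperEvent:
    item_id: str
    found: bool       # did the shopper find the item on the shelf?
    timestamp: float  # Unix seconds


class RealTimeAvailability:
    """Tracks a per-item 'found rate' over the last 15 minutes of events."""

    def __init__(self):
        self._events = {}  # item_id -> deque of ShopperEvent

    def record(self, event):
        self._events.setdefault(event.item_id, deque()).append(event)

    def found_rate(self, item_id, now=None):
        """Fraction of shoppers who found the item in the last 15 minutes."""
        now = now if now is not None else time()
        window = self._events.get(item_id)
        if not window:
            return None
        # Evict events that have aged out of the window before computing.
        while window and window[0].timestamp < now - WINDOW_SECONDS:
            window.popleft()
        return sum(e.found for e in window) / len(window) if window else None
```

A batch pipeline would instead recompute availability from historical data on a schedule, so its predictions could lag reality by hours or days; the sliding window keeps the feature as fresh as the last quarter-hour of activity.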

‘I just wish this AI was a little dumber,’ said no one

Your organization needs to embrace the opportunity to bring true real-time AI to real-time applications.

Amazon founder Jeff Bezos famously said, “I very frequently get the question: ‘What’s going to change in the next 10 years?’ … I almost never get the question: ‘What’s not going to change in the next 10 years?’ And I submit to you that that second question is actually the more important of the two—because you can build a business strategy around the things that are stable in time.”

This sounds like a simple principle, but many companies fail to execute on it.

He articulated a clear north star: “It’s impossible to imagine a future 10 years from now where a customer comes up and says, ‘Jeff, I love Amazon; I just wish the prices were a little higher.’ ‘I love Amazon; I just wish you’d deliver a little more slowly.’ Impossible.”

What we know today is that it’s impossible to imagine a future a decade from now where any customer says, “I just wish the app was a little slower,” “I just wish the AI was a little dumber,” or “I just wish its data was a little staler.”

The tools to build for that future are ready and waiting for those with the conviction to act on this.

Learn how DataStax enables real-time AI.

About Bryan Kirschner:

Bryan is Vice President, Strategy at DataStax. For more than 20 years he has helped large organizations build and execute strategy when they are seeking new ways forward and a future materially different from their past. He specializes in removing fear, uncertainty, and doubt from strategic decision-making through empirical data and market sensing.


In the first of this two-part CIO webinar series, ‘Driving business success with true enterprise applications’, a group of senior tech leaders heard from DXC Technology, customer Ventia, and analysts from Ecosystm about the challenges and benefits of overcoming barriers to application modernisation with SAP.

As we all know, enterprise applications were only really put on the C-level agenda when organisations had outgrown their legacy systems.

But as the hyper-competitive digital landscape continues to evolve, and with it ever more powerful and innovative capabilities in the cloud, businesses really need to make deployment of enterprise applications a strategic priority.

For many organisations, legacy technologies are actually impeding their efforts to modernise, while they face increasing threats from new-entrant competitors unburdened by the past.

That said, not all legacy is bad, with the onus on CIOs and other technology leaders to derive value from existing investments where possible.

In fact, Alan Hesketh, principal analyst with Ecosystm, defines ‘legacy’ as anything you turned on yesterday.

“Because once in production, those things just increase the legacy that you have in place and that you need to be able to manage – and every organisation really wants to focus on new activities, not the things that they’ve actually done previously,” he says.

“And there are now so many alternative sources of application services that each component you implement – and shadow IT is a particular challenge here – increases the complexity of your environment. And as your complexity increases, so do dependencies.”

The upshot, Hesketh stresses, is unless organisations figure out how to address this complexity and develop more effective application frameworks, they will see their lead times for delivering products and delivering value balloon.

Merging app ecosystems

The challenges of managing sprawling application ecosystems are especially acute during major M&A projects, something Karen O’Driscoll, group executive for digital services with Ventia, and Michelle Sly, business development leader with DXC Technology, can certainly attest to.

Back in late 2019, the already formidable Australian infrastructure services company agreed to merge with rival Broadspectrum to form a true powerhouse generating more than $5 billion in annual revenues, providing operational and maintenance services to a wide range of private sector and government clients and their customers. Ventia itself was formed back in 2015 through the merger of Leighton Contractors Services, Thiess Services, and Visionstream, further underscoring the integration challenge.

“[With] the historical acquisitions and mergers of companies, and the way in which the business was structured, there was quite a lot of work to do to be able to bring the platforms and the systems together, and also to standardise those across multiple divisions and operating entities,” explains Karen O’Driscoll, digital services executive with Ventia.

And deciding that this would happen within 12 to 18 months introduced a whole new degree of difficulty, which led to an “awkward silence” followed by questions like “you want to get it done by when?”.

“Whilst we were excited about the opportunity, [we were] pretty daunted … around the timeline that we wanted to get this done in.”

O’Driscoll and her team opted for the tighter deadline in a bid to reduce costs and ultimately deliver value faster. But the board took some convincing given the task was much more than a ‘lift and shift’.

“You know, there’s a lot of change management required there as well. And a lot of things that we knew that we could break, if we went so fast that we weren’t careful about what we were doing.”

One plus one

The project was run according to the mantra ‘one plus one equals one’.

“So we wanted to run the combined organisation at the same cost as we ran one organisation from an IT overhead perspective,” O’Driscoll adds.

“There was a big objective to be able to quickly deliver the value of the integration of the two companies.”

Ventia had also listed on the stock exchange part way through the program, adding further pressure on the team to succeed.

The strength of the partnership and natural cultural fit with DXC Technology was evident at the start and became even more apparent as the project progressed. It required increasingly intense “storming sessions” in which frank discussions, and more than a few disagreements, occurred along the way.

Michelle Sly, business development lead at DXC Technology, recalls a degree of discomfort at the level of risk Ventia appeared to be taking on.

“From our perspective it was very complex, and the aggressive timeframes were quite scary initially.”

“But Ventia knows their business far better than another supplier does and they probably looked at DXC thinking ‘you’re a little bit risk averse’.”

With so much at stake it was agreed that DXC would commission an independent review.

“That independent review gave us other options, and the ability to have very open and transparent conversations with Ventia, which then meant they could see where we were coming from,” Sly notes.

No two projects are the same, and large undertakings like this underscore the importance of having a genuine partnership to properly navigate the many moving parts, O’Driscoll notes. “You can’t force it – the partnership approach enabled us to pivot and drive to a successful outcome”.

In addition to bringing a strong sense of collaboration to the table, she adds that DXC brought a highly experienced, disciplined team able to quickly come to grips with the Ventia and Broadspectrum businesses. M&As are also in DXC’s DNA, informing its extensive suite of tools, templates, and overall knowledge base developed over many years.

For Ventia, while DXC did seem to bring a more conservative approach to the table, choosing the company over one of the big accounting firms was nevertheless somewhat unorthodox.

Working together, the two companies were able to develop more agile working teams and processes that delivered real value incrementally throughout the project. And this was key to maintaining support from the executive.

“What we wanted to do was to not call something that we couldn’t make until we really couldn’t make it,” O’Driscoll explains.

“DXC would tell us a couple of months before, ‘we’re not sure we’re going to make it’ and we’re like ‘we don’t have to make that decision yet’.”

“And so we pushed DXC to not make those decisions too early in the programme and to actually go further along with us making decisions on the way until we got to a point in which we could go with that phase or wherever we were. And actually every phase, we were able to achieve on time.”


Enterprise organizations have faced a compendium of challenges, but today it seems like the focus is on three things: speed, speed, and more speed. It is all about time to value and application velocity — getting applications delivered and then staying agile to evolve the application as needs arise.

To get maximum speed, the first requirement is to make developers maximally productive. They can’t be if they don’t have the tools they need, are waiting for someone else to set up their environment, or have to get up to speed on an unfamiliar environment. It’s irritating as well. For many, cloud services are the antidote to these inefficiencies.

Getting the technology you want with less hassle

Cloud services — functionality that is hosted and managed in the cloud — provide a clean separation between a service’s features and the effort that goes into administering it. Viewed through the lens of a development team under pressure, they offer the best of both worlds: the technology you want with none of the hassle of acquiring hardware, managing uptime, or updating software.

Another big win is that cloud services are available almost immediately — no waiting around for installation and configuration. The icing on the cake is that cloud services may be cheaper in the long run because you only pay for what you use. No more shelfware!

There are a lot of cloud services out there — some come from the cloud providers themselves, and some come from vendors like us. Far from being competitive, it is a very complementary situation. We provide a different experience.

Red Hat is all about ensuring a consistent and curated user experience across hybrid-cloud environments for development and DevOps teams, which is all good with the hyperscalers. At the end of the day, they just want to sell clouds, and more options for users mean more cloud consumption.

Which cloud service is right for your environment?

Another dimension of choice is flexibility versus velocity. Some teams want to have access to every knob and dial to address edge cases and use every ounce of knowledge they have about the internal workings of the services.

At the other end of the spectrum are teams that want no part in the details, and want someone else (someone experienced) to just make those decisions so they can focus on developing business applications. At Red Hat, we target our self-managed products at the first group, and our cloud services at the second group.

Let me give you some specifics:

Consider container platforms — Kubernetes has won the war as the underlying technology of choice, but it is anything but easy to build out the stack and manage. Kubernetes is powerful, but it can be like trying to fly a rocket ship if you have to administer it. We offer cloud services that come in “curated” configurations, where we make certain decisions about settings and the ecosystem. Our goal with these cloud services is to make using technology (like Kubernetes) more like driving an automatic automobile.

Or consider our API management service — Our users, for example, do not get to select the underlying database. In most situations, users don’t want to, and they are happy to have someone else take care of it.

Or consider our streaming data Kafka service — Those who have used Apache Kafka know that you need more than just the broker to build applications. You need interfaces, metrics, monitoring, discovery, connectors, and more. We have made (informed) decisions about which projects to include and how. We use our experience to deliver a curated Kafka experience that makes Kafka much easier and more efficient to use (see the sketch below).

And also, consider the hosted and managed AI/ML service — Businesses strive to inject intelligence into enterprise apps to eke out additional competitive advantage, but not every organization is prepared to build its own AI/ML engine.
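As a concrete illustration of what the application-side code can look like once someone else runs the cluster, here is a minimal sketch using the open-source kafka-python client. The broker address, topic name, and consumer group are placeholders, and this is a generic Kafka example rather than documentation of Red Hat’s managed service.

```python
# Minimal, generic Kafka sketch using the open-source kafka-python client.
# The broker address, topic, and group are placeholders, not a real service.
from kafka import KafkaConsumer, KafkaProducer

BOOTSTRAP = "my-managed-kafka.example.com:9092"  # supplied by the managed service

# Produce a message: the application only needs connection details, because the
# brokers, monitoring, and connector infrastructure are run by the service.
producer = KafkaProducer(bootstrap_servers=BOOTSTRAP)
producer.send("orders", b'{"order_id": 42, "status": "created"}')
producer.flush()

# Consume from the same topic.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers=BOOTSTRAP,
    group_id="order-processors",
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)
    break  # stop after one message in this sketch
```

Everything beyond these few lines (metrics, monitoring, connectors, broker upgrades) is the part a curated service takes off the team’s plate.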

Benefits for the whole team

While the developer is an important user of cloud services, other members of the organization benefit as well. IT ops professionals benefit because much of the complexity of standing up these technologies is removed. Line-of-business leaders, who care about achieving business outcomes quickly and cutting costs, recognize that keeping developers and IT ops happy and productive is the fastest means to that end. When using Red Hat cloud services, DevOps teams also benefit from being able to create CI/CD pipelines once and have them run across all clouds, public and private.

With so many teams looking to build new applications, or modernize existing ones, the only question left is how to get started. Another beauty of cloud services is that they are already there just waiting for you to connect and try them out. There is no need to install, host or configure. And which cloud service should you start with? I would suggest a foundational service such as Red Hat OpenShift API Management, Red Hat OpenShift Streams for Apache Kafka, or Red Hat OpenShift Data Science.
