Technical debt is no longer just a “technical” problem. As recent, widely publicized events have shown, it is a business problem that can have serious consequences for organizations. Regulators and Congress are taking notice of unfair consumer experiences, and it is crucial for businesses to address their technical debt to minimize the risk of negative press, government fines, and reputational damage.

What is technical debt?

Technical debt can be defined as the accumulation of legacy systems and applications that are difficult to maintain and support, as well as poorly written or hastily implemented code that increases risk over time. These technical challenges can significantly impact the performance and stability of critical operations, and it is essential that these be addressed before they cause damage to your organization. By listening to the voice of customers, employees, and other users, businesses can identify potential technical debt early and prioritize their modernization efforts.

Addressing technical debt can be challenging, especially for overworked and understaffed IT teams tasked with maintaining aging systems while also learning new development frameworks, languages, and techniques. Band-aid fixes may be easy to implement, but they are difficult to maintain over the long term and often do not adhere to industry best practices. To a team that is already stretched thin and overwhelmed with its current workload, revisiting old fixes can feel like a waste of time when things appear to be working now, and the need to keep aging systems running smoothly while adopting new ones only adds to the staffing strain.

The warning signs of technical debt are clear. Employees may complain that the technology they use is cumbersome and time-consuming, ultimately hindering their job performance. Customers may describe applications as clunky, buggy, and outdated. If these complaints sound familiar, then it is time to act and reduce technical debt.

How to break free

There are several options that companies can consider before getting started with reducing their technical debt:

Perform a short code review to provide a comprehensive overview of the level of risk and identify critical issues that need to be addressed.

One of the key components of reducing technical debt is to have a clear understanding of the underlying issues and challenges within one or many applications. This can involve a comprehensive analysis of the current technology infrastructure, identifying systems and processes that are causing the most pain and need to be addressed first. A code review process can provide valuable insights into technical debt, including identifying code that is outdated, poorly written, or difficult to maintain. This information can help prioritize the modernization efforts, ensuring that the most critical issues are addressed first.
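
As one hedged illustration of how such a review might start, the short Python sketch below scans a source tree for debt markers such as TODO and FIXME and flags unusually large files as candidates for closer inspection. The paths, markers, and thresholds are hypothetical placeholders, and only the standard library is used; a real review would pair something like this with human judgment and proper static analysis tooling.

```python
from pathlib import Path

# Markers and thresholds are illustrative; adjust them to your codebase.
DEBT_MARKERS = ("TODO", "FIXME", "HACK", "XXX")
LARGE_FILE_LINES = 1000  # very large files often hide maintainability risk

def scan_repo(root: str) -> list[dict]:
    """Walk a source tree and report debt markers and oversized files."""
    findings = []
    for path in Path(root).rglob("*.py"):  # limited to Python sources in this sketch
        lines = path.read_text(errors="ignore").splitlines()
        marker_hits = sum(1 for line in lines for m in DEBT_MARKERS if m in line)
        if marker_hits or len(lines) > LARGE_FILE_LINES:
            findings.append({
                "file": str(path),
                "lines": len(lines),
                "debt_markers": marker_hits,
            })
    # Sort so the riskiest-looking files surface first.
    return sorted(findings, key=lambda f: (f["debt_markers"], f["lines"]), reverse=True)

if __name__ == "__main__":
    for finding in scan_repo("./src")[:20]:  # "./src" is a placeholder path
        print(finding)
```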

Conduct an application modernization quick start workshop to develop a roadmap of modernization efforts, outlining the steps needed to improve the technology infrastructure.

An application modernization quick start workshop can help organizations take the first steps towards reducing their technical debt. A workshop can provide a roadmap for modernization efforts, including the development of a detailed plan outlining the steps required to improve the technology infrastructure. The workshop can also provide valuable insights into the modernization process, including best practices for modernizing legacy systems, optimizing application performance, and improving the customer and employee experience.

Develop an application modernization program to manage the intake process, governance, technical architecture, DevOps, and end-to-end development, reducing risk, accommodating change, and delivering better customer and employee experiences.

An application modernization program can provide a comprehensive solution to reducing technical debt. Such a program manages the intake process, governance, technical architecture, DevOps, and end-to-end development, reducing risk, accommodating change, and delivering better customer and employee experiences.

At Protiviti, we are dedicated to helping organizations navigate their application modernization journeys and achieve success in improving user experiences and reducing technical debt and business risks. Our team of experts understands each organization’s unique needs and provides tailored solutions to ensure the success of modernization efforts. We help companies take the first step towards reducing technical debt and improving both technology infrastructure and brand with modern applications that are intelligent, engaging, and easy to use.

Learn more about Protiviti’s Innovation vs. Technical Debt Tug of War survey results.

Connect with the Authors

Amanda Downs
Managing Director, Technology Consulting

Alina Zamorskaya
Senior Manager, Technology Consulting

Digital Transformation

Six out of ten organizations today are using a mix of infrastructures, including private cloud, public cloud, multi-cloud, on-premises, and hosted data centers, according to the 5th Annual Nutanix Enterprise Cloud Index. Managing applications and data, especially when they’re moving across these environments, is extremely challenging. Only 40% of IT decision-makers said that they have complete visibility into where their data resides, and 85% have issues managing cloud costs. Addressing these challenges will require simplification, so it’s no surprise that essentially everyone (94%) wants a single, unified place to manage data and applications in mixed environments.

In particular, there are three big challenges that rise to the top when it comes to managing data across multiple environments. The first is data protection.

“Because we can’t go faster than the speed of light, if you want to recover data, unless you already have the snapshots and copies where that recovered data is needed, it’ll take some time,” said Induprakas Keri, SVP of Engineering for Nutanix Cloud Infrastructure. “It’s much faster to spin up a backup where the data is rather than moving it, but that requires moving backups or snapshots ahead of time to where they will be spun up, and developers don’t want to think about things like that. IT needs an automated solution.”
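
To make the idea of staging snapshots ahead of time a little more concrete, here is a deliberately simplified Python sketch of the kind of policy such automation would enforce. SnapshotPolicy, replicate_snapshot, and the site names are hypothetical placeholders for illustration only, not a Nutanix API.

```python
from dataclasses import dataclass

@dataclass
class SnapshotPolicy:
    source_site: str       # where the workload currently runs
    recovery_site: str     # where a backup would be spun up
    interval_minutes: int  # how often to copy snapshots ahead of need

def replicate_snapshot(policy: SnapshotPolicy, snapshot_id: str) -> None:
    """Hypothetical placeholder: push a snapshot to the recovery site in advance,
    so recovery does not have to wait on a bulk data move at the worst moment."""
    print(f"copy {snapshot_id}: {policy.source_site} -> {policy.recovery_site}")

# Illustrative policy: keep recent snapshots of an on-prem workload staged in a cloud region.
policy = SnapshotPolicy(source_site="onprem-dc1", recovery_site="cloud-west", interval_minutes=60)
replicate_snapshot(policy, "snap-0001")
```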

Another huge problem is managing cost—so much so that 46% of organizations are thinking about repatriating cloud applications to on-premises, which would have been unthinkable just a few years ago.

“I’m familiar with a young company whose R&D spend was $18 million and the cloud spend was $23 million, with utilization of just 11%,” Keri said. “This wasn’t as much of a concern when money was free, but those days are over, and increasingly, organizations are looking to get their cloud spend under control.”

Cloud data management is complex, and without keeping an eye on it, costs can quickly get out of control.

The final big problem is moving workloads between infrastructures. It’s especially hard moving legacy applications to the cloud because of all the refactoring, and it’s easy for that effort to get far out of scope. Keri has experienced this issue firsthand many times in his career. 

“What we often see with customers at Nutanix is that the journey of moving applications to the cloud, especially legacy applications, is one that many had underestimated,” Keri said. “For example, while at Intuit as CISO, I was part of the team that moved TurboTax onto AWS, which took us several years to complete and involved several hundred developers.”

Nutanix provides a unified infrastructure layer that enables IT to seamlessly run applications on a single underlying platform, whether it’s on-premises, in the cloud, or even a hybrid environment. And data protection and security are integral parts of the platform, so IT doesn’t have to worry about whether data will be local for recovery or whether data is secure—the platform takes care of it.

“Whether you’re moving apps which need to be run on a platform or whether you’re building net-new applications, Nutanix provides an easy way to move them back and forth,” Keri said. “If you start with a legacy application on prem, we provide the tools to move it into the public cloud. If you want to start in the cloud with containerized apps and then want to move them on-prem or to another cloud service provider, we provide the tools to do that. Plus, our underlying platform offers data protection and security, so you don’t have to worry about mundane things like where your data needs to be. We can take the pain away from developers.”

For more information on how Nutanix can help your organization control costs, gain agility, and simplify management of apps and data across multiple environments, visit Nutanix here.

Data Management

KPN, the largest infrastructure provider in the Netherlands, offers a high-performance fixed-line and mobile network in addition to enterprise-class IT infrastructure and a wide range of cloud offerings, including Infrastructure-as-a-Service (IaaS) and Security-as-a-Service. Drawing on its extensive track record of success providing VMware Cloud Verified services and solutions, KPN is now one of a distinguished group of providers to have earned the VMware Sovereign Cloud distinction.

“With the exceptionally strong, high-performance network we offer, this is truly a sovereign cloud. Government agencies, healthcare companies, and organizations with highly sensitive and confidential data can confidently comply with industry-specific regulations such as GDPR, Government Information Security Baseline, Royal Netherlands Standardization Institute, and the Network and Information Security Directive,” said Babak Fouladi, Chief Technology & Digital Officer and Member of the Board of Management at KPN. “KPN places data and applications in a virtual private cloud that is controlled, tested, managed, and secured in the Netherlands, without third-party interference.”

KPN’s sovereign cloud, CloudNL, reflects a rapidly changing landscape in which many companies need to move data to a sovereign cloud. Reasons why include a dramatic increase in remote or hybrid work, evolving geopolitical events and threats, and fast-changing international regulations.

“The more you digitize an enterprise, the greater the variety of data and applications you must manage,” says Fouladi. “Each requires the right cloud environment based on the required security level, efficiency, and ease of use. On the one hand, this might include confidential customer information that requires extra protection, and which must remain within the nation’s boundaries. Just as importantly, the information must never be exposed to any foreign nationals at any time. On the other hand, you have workloads that are entirely appropriate for the public cloud and benefit from the economy and scale the cloud offers.”

Fouladi stresses that this is why so many organizations are embracing a multi-cloud strategy. It’s a strategy he believes is fundamentally enriched by a sovereign cloud.

Based on VMware technologies, CloudNL is designed to satisfy the highest security requirements and features stringent guarantees verified through independent audits. All data and applications are stored in KPN’s data centers within the Netherlands – all of which are operated and maintained by fully-vetted citizens of the Netherlands.

ValidSign, a KPN CloudNL customer, is a rapidly growing provider of cloud-based solutions that automate document signings. ValidSign’s CEO John Lageman notes that the company’s use of a fully sovereign cloud in Holland is particularly important for the security-minded organizations the company serves, among them notaries, law firms, and government institutions.

“The documents, permits, and contracts that we sign must remain guaranteed in the Netherlands,” says Lageman. “Digitally and legally signing and using certificates used to be very expensive. Moving to the cloud was the solution, but not with an American cloud provider – our customers would no longer be sure where the data would be stored or who could have access to it. With CloudNL they have that control.”

The Bottom Line

There are many reasons to move data to a sovereign cloud, among them an increase in remote or hybrid work, changing geopolitical events, or fast-changing international regulations. KPN CloudNL empowers enterprises to handle these challenges with ease by incorporating sovereign cloud into their multi-cloud strategy.

Learn more about KPN CloudNL here and its partnership with VMware here.

Cloud Computing

High performance computing (HPC) is becoming mainstream for organizations, spurred on by their increasing use of artificial intelligence (AI) and data analytics. A 2021 study by Intersect360 Research found that 81% of organizations that use HPC reported they are running AI and machine learning or are planning to implement them soon. It’s happening globally and contributing to worldwide spending on HPC that is poised to exceed $59.65 billion in 2025, according to Grand View Research.

Simultaneously, the intersection of HPC, AI, and analytics workflows is putting pressure on systems administrators to support ever more complex environments. Admins are being asked to complete time-consuming manual configurations and reconfigurations of servers, storage, and networking as they move nodes between clusters to provide the resources required for different workload demands. The resulting cluster sprawl consumes inordinate amounts of information technology (IT) resources.

The answer? For many organizations, it’s a greater reliance on open-source software.

Reaping the Benefits of Open-Source Software & Communities

Developers at some organizations have found that open-source software is an effective way to advance the HPC software stack beyond the limitations of any one vendor. Examples of open-source software used for HPC include Apache Ignite, Open MPI, OpenSFS, OpenFOAM, and OpenStack. Almost all major original equipment manufacturers (OEMs) participate in the OpenHPC community, along with key HPC independent software vendors (ISVs) and top HPC sites.

Organizations like Arizona State University Research Computing have turned to open-source software like Omnia, a set of tools for automating the deployment of open source or publicly available Slurm and Kubernetes workload management along with libraries, frameworks, operators, services, platforms and applications.

The Omnia software stack was created to simplify and speed the process of building, deploying, and managing environments for mixed workloads, including simulation, high throughput computing, machine learning, deep learning, and data analytics, by abstracting away the manual steps that can slow provisioning and lead to configuration errors.

Members of the open-source software community contribute code and documentation updates, feature requests, and bug reports. They also provide open forums for conversations about feature ideas and potential implementation solutions. As the open-source project grows and expands, so does the technical governance committee, with representation from top contributors and stakeholders.

“We have ASU engineers on my team working directly with the Dell engineers on the Omnia team,” said Douglas Jennewein, senior director of Arizona State University (ASU) Research Computing. “We’re working on code and providing feedback and direction on what we should look at next. It’s been a very rewarding effort… We’re paving not just the path for ASU but the path for advanced computing.”

ASU teams also use Open OnDemand, an open source HPC portal that allows users to log in to an HPC cluster via a traditional Secure Shell Protocol (SSH) terminal or via Open OnDemand’s web-based interface. Once connected, they can upload and download files; create, edit, submit, and monitor jobs; run applications; and more via a web browser in a cloud-like experience with no client software to install and configure.
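
As a minimal sketch of the traditional SSH path, the Python example below uses the Paramiko library to submit and check a Slurm batch job on a cluster login node. The hostname, username, and job script name are placeholders, and nothing here is specific to ASU’s environment or to Open OnDemand itself.

```python
import paramiko  # third-party SSH library: pip install paramiko

# Placeholder connection details for an HPC login node.
HOST = "hpc-login.example.edu"
USER = "researcher"

client = paramiko.SSHClient()
client.load_system_host_keys()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER)  # assumes key-based authentication is set up

# Submit a Slurm batch job and capture sbatch's confirmation message.
_, stdout, _ = client.exec_command("sbatch my_job.slurm")
print(stdout.read().decode().strip())  # e.g. "Submitted batch job 123456"

# Check the queue for this user's jobs.
_, stdout, _ = client.exec_command(f"squeue -u {USER}")
print(stdout.read().decode())

client.close()
```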

Some Hot New Features of Open-Source Software for HPC  

Here is a sampling of some of the latest features in open-source software available to HPC application developers.

- Dynamically change a user’s environment by adding or removing directories in the PATH environment variable (see the sketch after this list). This makes it easier to run specific software in specific folders without permanently editing the PATH variable and rebooting. It’s especially useful when third-party applications point to conflicting versions of the same libraries or objects.
- Choice of host operating system (OS) provisioned on bare metal. The speed and accuracy of applications are inherently affected by the host OS installed on the compute node. This provides bare metal options of different operating systems in the lab, so teams can choose the one that works optimally at any given time and is best suited for an HPC application.
- Low-cost block storage that natively uses Network File System (NFS). This adds flexible scalability and is ideal for persistent, long-term storage.
- Telemetry and visualization on Red Hat Enterprise Linux. Users of Red Hat Enterprise Linux can take advantage of telemetry and visualization features to view power consumption, temperatures, and other operational metrics.
- BOSS RAID controller support. Redundant array of independent disks (RAID) arrays use multiple drives to split the I/O load and are often preferred by HPC developers.
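
As a small illustration of the first item above, the Python sketch below prepends a directory to PATH for the current process and its children only. The directory path is a placeholder, and real HPC environments typically handle this with environment-module tooling rather than hand-rolled scripts.

```python
import os
import subprocess

# Placeholder directory containing a specific build of an application.
APP_BIN = "/opt/apps/myapp/2.1/bin"

def prepend_to_path(directory: str) -> None:
    """Put a directory at the front of PATH for this process and its children."""
    current = os.environ.get("PATH", "")
    if directory not in current.split(os.pathsep):
        os.environ["PATH"] = directory + os.pathsep + current

prepend_to_path(APP_BIN)

# Child processes inherit the modified environment, so the intended binary
# version is found without editing the user's shell profile or rebooting.
subprocess.run(["printenv", "PATH"], check=True)
```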

The benefits of open-source software for HPC are significant. They include the ability to deploy faster, leverage fluid pools of resources, and integrate complete lifecycle management for unified data analytics, AI and HPC clusters.

For more information about the Omnia community, which includes Dell, Intel, university research environments, and many others, or to contribute, visit the Omnia GitHub.

***

Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

IT Leadership

By Aaron Ploetz, Developer Advocate

There are many statistics that link business success to application speed and responsiveness. Google tells us that a one-second delay in mobile load times can impact mobile conversions by up to 20%. And a 0.1 second improvement in load times improved retail customer engagement by 5.2%, according to a study by Deloitte.

It’s not only the whims and expectations of consumers that drive the need for real-time or near real-time responsiveness. Think of a bank’s requirement to detect and flag suspicious activity in the fleeting moments before real financial damage can happen. Or an e-tailer providing locally relevant product promotions to drive sales in a store. Real-time data is what makes all of this possible.

Let’s face it – latency is a buzz kill. The time it takes for a database to receive a request, process the transaction, and return a response can be a real detriment to an application’s success. Keeping it at acceptable levels requires an underlying data architecture that can handle the demands of globally deployed real-time applications. The open source NoSQL database Apache Cassandra® has two defining characteristics that make it perfectly suited to meet these needs: it’s geographically distributed, and it can respond to spikes in traffic without sacrificing its high throughput and low latency.

Let’s explore what both of these mean to real-time applications and the businesses that build them.

Real-time data around the world

Even as the world has gotten smaller, exactly where your data lives still makes a difference in terms of speed and latency. When users reside in disparate geographies, supporting responsive, fast applications for all of them can be a challenge.

Say your data center is in Ireland, and you have data workloads and end users in India. Your data might pass through several routers to get to the database, and this can introduce significant latency into the time between when an application or user makes a request and the time it takes for the response to be sent back.

To reduce latency and deliver the best user experience, the data needs to be as close to the end user as possible. If your users are global, this means replicating data in the geographies where they reside.

Cassandra, built by Facebook in 2007, is designed as a distributed system for deployment of large numbers of nodes across multiple data centers. Key features of Cassandra’s distributed architecture are specifically tailored for deployment across multiple data centers. These features are robust and flexible enough that you can configure clusters (collections of Cassandra nodes, which are visualized as a ring) for optimal geographical distribution, for redundancy, for failover and disaster recovery, or even for creating a dedicated analytics center that’s replicated from your main data storage centers.
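
As a sketch of what geographic replication looks like in practice, the statement below tells Cassandra to keep three replicas of each row in each of two data centers. It is executed here with the DataStax Python driver; the contact point, keyspace name, and data center names are illustrative placeholders that would need to match your own cluster topology.

```python
from cassandra.cluster import Cluster  # DataStax Python driver: pip install cassandra-driver

# Placeholder contact point; in production, list nodes from your own cluster.
cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

# NetworkTopologyStrategy sets a replica count per data center,
# e.g. three copies in a European DC and three in an Indian DC.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS user_activity
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'dc_europe': 3,
        'dc_india': 3
    }
""")

cluster.shutdown()
```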

But even if your data is geographically distributed, you still need a database that’s designed for speed at scale.

The power of a fast, transactional database

NoSQL databases primarily evolved over the last decade as an alternative to single-instance relational database management systems (RDBMS) which had trouble keeping up with the throughput demands and sheer volume of web-scale internet traffic.

They solve scalability problems through a process known as horizontal scaling, where multiple server instances of the database are linked to each other to form a cluster.

Some NoSQL database products were also engineered with data center awareness, meaning the database is configured to logically group together certain instances to optimize the distribution of user data and workloads. Cassandra is both horizontally scalable and data-center aware. 
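
Data center awareness extends to the client side as well. The sketch below, again using the DataStax Python driver with placeholder addresses and a placeholder data center name, routes queries to the application’s local data center first and uses token awareness to contact replica nodes directly.

```python
from cassandra.cluster import Cluster, ExecutionProfile, EXEC_PROFILE_DEFAULT
from cassandra.policies import DCAwareRoundRobinPolicy, TokenAwarePolicy

# Prefer replicas in the data center closest to this application instance.
profile = ExecutionProfile(
    load_balancing_policy=TokenAwarePolicy(
        DCAwareRoundRobinPolicy(local_dc="dc_europe")  # placeholder DC name
    )
)

cluster = Cluster(
    ["10.0.0.1", "10.0.0.2"],  # placeholder contact points
    execution_profiles={EXEC_PROFILE_DEFAULT: profile},
)
session = cluster.connect("user_activity")

# Reads and writes now go to local, token-owning replicas first,
# which keeps latency low for users in this region.
row = session.execute("SELECT release_version FROM system.local").one()
print(row.release_version)

cluster.shutdown()
```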

Cassandra’s seamless and consistent ability to scale to hundreds of terabytes, along with its exceptional performance under heavy loads, has made it a key part of the data infrastructures of companies that operate real-time applications – the kind that are expected to be extremely responsive, regardless of the scale at which they’re operating. Think of the modern applications and workloads that have to be reliable, like online banking services, or those that operate at huge, distributed scale, such as airline booking systems or popular retail apps.

Logate, an enterprise software solution provider, chose Cassandra as the data store for the applications it builds for clients, including user authentication, authorization, and accounting platforms for the telecom industry.

“From a performance point of view, with Cassandra we can now achieve tens of thousands of transactions per second with a geo-redundant set-up, which was just not possible with our previous application technology stack,” said Logate CEO and CTO Predrag Biskupovic.

Or what about Netflix? When it launched its streaming service in 2007, it used an Oracle database in a single data center. As the number of users and devices (and data) grew rapidly, the limitations on scalability and the potential for failures became a serious threat to Netflix’s success. Cassandra, with its distributed architecture, was a natural choice, and by 2013, most of Netflix’s data was housed there. Netflix still uses Cassandra today, but not only for its scalability and rock-solid reliability. Its performance is key to the streaming media company –  Cassandra runs 30 million operations per second on its most active single cluster, and 98% of the company’s streaming data is stored on Cassandra.

Cassandra has been shown to perform exceptionally well under heavy load. It can sustain very fast write throughput even on basic commodity hardware, and its desirable properties are maintained as more servers are added, without sacrificing performance.

Business decisions that need to be made in real time require high-performing data storage, wherever the principal users may be. Cassandra enables enterprises to ingest and act on that data in real time, at scale, around the world. If acting quickly on business data is where an organization needs to be, then Cassandra can help you get there.

Learn more about DataStax here.

About Aaron Ploetz:

DataStax

Aaron has been a professional software developer since 1997 and has several years of experience working on and leading DevOps teams for startups and Fortune 50 enterprises.

IT Leadership, NoSQL Databases