Cloud-based platforms, the “work from anywhere” culture, and other trends are upending traditional network monitoring. Some or all of the infrastructure is no longer owned by the IT organization; instead, it relies on home network infrastructure, the internet, and SaaS/public cloud networks.

A study by Dimensional Research reveals that current monitoring solutions are inadequate for this growing scale and complexity, as well as for new technologies, devices, and network architectures. Some 97% of network and operations professionals report network challenges. The primary consequence is the impact on employee productivity (reported by 52%), followed by executives being brought into the loop because network issues are affecting the business (39%).

Network delivery of the user experience no longer exists within the four walls of the data center. With more employees working remotely and more workloads running on cloud platforms, it is harder to gain visibility into the end-to-end user experience. Network monitoring must reach services beyond the edge of the corporate infrastructure; it must embed user-experience metrics into standard operating procedures and workflows to ensure not only reliable network delivery but also an exceptional customer experience.

One large contact center outsourcer, for example, at one time managed 14 sites. Owing to the pandemic and call center agents working from home, that number has risen to 8,000 sites, and every connection is different. The challenge for the outsourcer is to keep operations running smoothly and to maintain the same quality as when call center services were centralized.

How network professionals can reimagine the digital experience

Network teams need a modern, innovative approach to managing digital experience in this new, complex ecosystem. Teams that transition will align themselves better with core business metrics and provide more value to their organization. Those that don’t will quickly be marginalized, becoming yet another IT organization about which the CEO says, “They just don’t get it.”

Understanding the digital experience can be a moving target in a highly decentralized and hybrid enterprise world. As a result, network teams can’t choose between network performance monitoring and digital experience monitoring. They really need both. To solve this dilemma, IT leaders must rethink their network operations and evolve traditional NetOps strategies into modern Experience-Driven NetOps.

With Experience-Driven NetOps, organizations benefit from unified network visibility of digital services running on traditional and modern software-defined network architectures. This single pane of glass insight enables network professionals to understand, manage and optimize the performance of every digital service – through their standard troubleshooting procedures – from the core network to the edge, to the end-user.

Now is the time for action. To stay in front of change, organizations need to deliver experience-proven connections and ensure network operations teams are experience-driven. This modern monitoring approach is closely aligned with key business outcomes, improving customer experience and making the IT organization a better partner driving accelerated digital transformation.

You can learn more about how to tackle the challenges of modern network monitoring in this eBook, 4 Imperatives for Monitoring Modern Networks. Read now and discover how organizations can plan their monitoring strategy for next-generation network technologies.

Networking

‘Mind the gap’ is an automated announcement used by the London Underground for more than 50 years to warn passengers about the gap between the train and the platform edge.

It’s a message that would resonate well in IT operations. Enterprises increasingly rely on “work from anywhere” (WFA) infrastructure, software as a service (SaaS), and public cloud networks. In this complex platform mix, visibility gaps can quickly surface in the performance of ISP and cloud networks, along with remote work environments.

Gaps are also inherent in today’s IT standard operating procedures. Network teams follow a certain set of rules to begin troubleshooting and ultimately isolate and fix issues. If these standardized workflows are missing core features, or teams need multiple tools to run these troubleshooting procedures, this can quickly result in delayed remediation and potential business disruption.

A Dimensional Research study, for example, reveals that 97% of network and operations professionals report network challenges and 81% confirm network blind spots. Complete outages (cited by 37%) are the worst problem, and network issues have also delayed new projects (36%).

So how can IT operations close the gap? The enterprise needs network monitoring software that reaches beyond the data center infrastructure, providing end-to-end network delivery insights that correspond with users’ digital experience.

It’s time to re-think network monitoring. These are four key capabilities network professionals should consider for a modern network monitoring platform.

User experience: Moving business applications to multi-cloud platforms and co-located data centers makes third-party networks a performance dependency. Digital experience monitoring along the network path between the end user and the cloud deployment becomes a necessity to ensure seamless user experiences.

Scale: Demand for SaaS, unified communications as a service (UCaaS), and contact center as a service (CCaaS), together with the WFA culture, is rapidly expanding the network edge. Network professionals need to harness the complexity and dynamic nature of these deployments.

Security: The modern WAN infrastructure involves technologies such as software-defined WAN (SD-WAN), next-generation firewalls (NGFW), and much more. Misconfigurations can easily be missed, resulting in performance issues or security breaches.

Visibility: The remotely connected workplace introduces a new, uncharted network ecosystem. Visibility into these remote networks, such as home Wi-Fi/LAN, is patchy at best, making issue resolution a guessing game.

The bottom line? IT teams need a complete, efficient view of their network infrastructure, including all applications, users, and locations. Without it, IT risks losing control of operations, ultimately eroding confidence in IT, and potentially forcing decision-makers to reallocate or reduce IT budgets.

Now is the time to rethink network operations and evolve traditional NetOps into Experience-Driven NetOps. With Experience-Driven NetOps, network teams can proactively identify the root cause of problems and isolate issues within a single tool that enables one-click access to all their standard operating procedures through out-of-the-box workflows and user-experience metrics. This industry-first approach delivers digital experience and network performance insights across the edge infrastructure, internet connections, and cloud services, allowing teams to plan for network support where it matters most.

Maybe it’s time for that “mind the gap” announcement to be broadcast in IT departments, with a slight change to “mind the growing void,” to ensure networks are experience-proven and network operations teams are experience-driven.

Tackle the new challenges of network monitoring in this eBook, 4 Imperatives for Monitoring Modern Networks. Read now and discover how organizations can plan their monitoring strategy for next-generation network technologies.


By Aaron Ploetz, Developer Advocate

There are many statistics that link business success to application speed and responsiveness. Google tells us that a one-second delay in mobile load times can impact mobile conversions by up to 20%. And a 0.1-second improvement in load times improved retail customer engagement by 5.2%, according to a study by Deloitte.

It’s not only the whims and expectations of consumers that drive the need for real-time or near real-time responsiveness. Think of a bank’s requirement to detect and flag suspicious activity in the fleeting moments before real financial damage can happen. Or an e-tailer providing locally relevant product promotions to drive sales in a store. Real-time data is what makes all of this possible.

Let’s face it – latency is a buzzkill. The time it takes for a database to receive a request, process the transaction, and return a response to an app can be a real detriment to an application’s success. Keeping it at acceptable levels requires an underlying data architecture that can handle the demands of globally deployed real-time applications. The open source NoSQL database Apache Cassandra® has two defining characteristics that make it well suited to meet these needs: it’s geographically distributed, and it can respond to spikes in traffic without sacrificing its high throughput and low latency.

Let’s explore what both of these mean to real-time applications and the businesses that build them.

Real-time data around the world

Even as the world has gotten smaller, exactly where your data lives still makes a difference in terms of speed and latency. When users reside in disparate geographies, supporting responsive, fast applications for all of them can be a challenge.

Say your data center is in Ireland, and you have data workloads and end users in India. Your data might pass through several routers to get to the database, and this can introduce significant latency between the moment an application or user makes a request and the moment the response comes back.

To reduce latency and deliver the best user experience, the data needs to be as close to the end user as possible. If your users are global, this means replicating data in the geographies where they reside.
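In Cassandra, this per-geography replication is declared on the keyspace using the data-center-aware NetworkTopologyStrategy, with an independent replication factor per data center. A minimal sketch in Python that builds such a CQL statement (the keyspace and data center names here are hypothetical, chosen only to illustrate the idea):

```python
# Build a CQL CREATE KEYSPACE statement that replicates data per data center.
# NetworkTopologyStrategy is Cassandra's data-center-aware replication
# strategy; the data center names and factors below are illustrative only.

def create_keyspace_cql(keyspace: str, dc_replication: dict) -> str:
    opts = ", ".join(f"'{dc}': {rf}" for dc, rf in dc_replication.items())
    return (
        f"CREATE KEYSPACE {keyspace} WITH replication = "
        f"{{'class': 'NetworkTopologyStrategy', {opts}}};"
    )

# Three replicas near European users, three near users in India.
stmt = create_keyspace_cql("user_activity", {"eu_west": 3, "ap_south": 3})
print(stmt)
```

Running the resulting statement against a cluster tells Cassandra to keep three copies of every row in each region, so reads can be served locally on both continents.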

Cassandra, originally developed at Facebook in 2007, is designed as a distributed system for deploying large numbers of nodes across multiple data centers. Key features of Cassandra’s distributed architecture are specifically tailored for deployment across multiple data centers. These features are robust and flexible enough that you can configure clusters (collections of Cassandra nodes, which are visualized as a ring) for optimal geographical distribution, for redundancy, for failover and disaster recovery, or even for creating a dedicated analytics center that’s replicated from your main data storage centers.
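The ring can be sketched in a few lines: each node owns a token, a row’s partition key is hashed onto the ring, and the replicas are the next distinct nodes walking clockwise. This toy model is a deliberate simplification (real Cassandra uses Murmur3 hashing and virtual nodes, and the node names are invented), but it captures how placement works:

```python
import hashlib

# Toy token ring: each node owns one token; a key lands on the first node
# whose token is >= the key's hash (wrapping around), and replicas are the
# next nodes clockwise. Real Cassandra uses Murmur3 hashing and vnodes.

def token(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def replicas_for(key: str, nodes: list, rf: int = 3) -> list:
    ring = sorted(nodes, key=token)          # nodes ordered by their tokens
    t = token(key)
    start = next((i for i, n in enumerate(ring) if token(n) >= t), 0)
    return [ring[(start + i) % len(ring)] for i in range(rf)]

nodes = ["node-a", "node-b", "node-c", "node-d", "node-e"]
print(replicas_for("user:42", nodes))  # three distinct nodes hold this row
```

Because placement is a pure function of the key and the ring, any node can compute where a row lives without consulting a central coordinator.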

But even if your data is geographically distributed, you still need a database that’s designed for speed at scale.

The power of a fast, transactional database

NoSQL databases primarily evolved over the last decade as an alternative to single-instance relational database management systems (RDBMS), which had trouble keeping up with the throughput demands and sheer volume of web-scale internet traffic.

They solve scalability problems through a process known as horizontal scaling, where multiple server instances of the database are linked to each other to form a cluster.
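One reason hash-partitioned clusters scale out gracefully is that adding a node relocates only the slice of keys the new node takes over; everything else stays put. A rough illustration of this rebalancing property, using a single-token-per-node simplification of what Cassandra does:

```python
import hashlib

def h(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def owner(key: str, nodes: list) -> str:
    # Consistent hashing: the first node whose token is at or after the
    # key's hash owns the key (wrapping around the ring).
    ring = sorted(nodes, key=h)
    k = h(key)
    for n in ring:
        if h(n) >= k:
            return n
    return ring[0]

keys = [f"row-{i}" for i in range(1000)]
before = {k: owner(k, ["n1", "n2", "n3", "n4"]) for k in keys}
after = {k: owner(k, ["n1", "n2", "n3", "n4", "n5"]) for k in keys}

moved = sum(1 for k in keys if before[k] != after[k])
print(f"{moved} of {len(keys)} keys moved after adding a node")
```

Every key that moves lands on the new node, so scaling from four nodes to five streams only one arc of the ring to the newcomer rather than reshuffling the whole dataset.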

Some NoSQL database products were also engineered with data center awareness, meaning the database is configured to logically group together certain instances to optimize the distribution of user data and workloads. Cassandra is both horizontally scalable and data-center aware. 

Cassandra’s seamless and consistent ability to scale to hundreds of terabytes, along with its exceptional performance under heavy loads, has made it a key part of the data infrastructures of companies that operate real-time applications – the kind that are expected to be extremely responsive, regardless of the scale at which they’re operating. Think of the modern applications and workloads that have to be reliable, like online banking services, or those that operate at huge, distributed scale, such as airline booking systems or popular retail apps.

Logate, an enterprise software solution provider, chose Cassandra as the data store for the applications it builds for clients, including user authentication, authorization, and accounting platforms for the telecom industry.

“From a performance point of view, with Cassandra we can now achieve tens of thousands of transactions per second with a geo-redundant set-up, which was just not possible with our previous application technology stack,” said Logate CEO and CTO Predrag Biskupovic.

Or what about Netflix? When it launched its streaming service in 2007, it used an Oracle database in a single data center. As the number of users and devices (and data) grew rapidly, the limitations on scalability and the potential for failures became a serious threat to Netflix’s success. Cassandra, with its distributed architecture, was a natural choice, and by 2013, most of Netflix’s data was housed there. Netflix still uses Cassandra today, but not only for its scalability and rock-solid reliability. Its performance is key to the streaming media company – Cassandra runs 30 million operations per second on its most active single cluster, and 98% of the company’s streaming data is stored on Cassandra.

Cassandra has been shown to perform exceptionally well under heavy load. It consistently delivers fast write throughput, even on basic commodity hardware. All of Cassandra’s desirable properties are maintained as more servers are added, without sacrificing performance.

Business decisions that need to be made in real time require high-performing data storage, wherever the principal users may be. Cassandra enables enterprises to ingest and act on that data in real time, at scale, around the world. If acting quickly on business data is where an organization needs to be, then Cassandra can help you get there.

Learn more about DataStax here.

About Aaron Ploetz:

DataStax

Aaron has been a professional software developer since 1997 and has several years of experience working on and leading DevOps teams for startups and Fortune 50 enterprises.

IT Leadership, NoSQL Databases