Large IT projects are hard to execute, particularly when in-house staff are pulled back into their day jobs and distracted by competing priorities. This can be costly for organizations. In fact, McKinsey suggests that early cost and schedule overruns can cause projects to cost twice as much as anticipated. One common way to address this challenge is to seek outside support to ensure success.

There are four critical ways that outside support can make a difference.

Rapid talent aggregation

One of the most challenging aspects of software engineering in today’s environment is assembling quality talent. CIO magazine identifies the top 10 most in-demand tech jobs and reports that 86% of technology managers find it challenging to locate skilled professionals. If your current team does not have the capacity or skills to tackle the required project, consider an outsourced partner.

The right partner can bring a team of highly skilled engineers together in a matter of days or weeks, allowing you to accelerate development and deliver your key projects in a timely fashion. Key talent can be added to and removed from projects as needed. Chosen well, your outsourced provider will have a group of tried-and-tested experts with deep knowledge of the chosen tech stack, and can therefore iterate and deliver much faster.

Developer velocity 

Time to market is critical for the success of any project, particularly when it impacts revenue. Therefore, IT projects must be scoped, ramped, and run expeditiously in order to take advantage of market dynamics. 

In an in-depth study of 440 large enterprises, McKinsey identified the factors that enable organizations to achieve high developer velocity. Four key areas have the greatest impact on software development performance: tools, culture, product management, and talent management. The study revealed that organizations with higher developer velocity outperform competitors by up to five times.

When selecting an outsourcer, validate the tools and project management structure they will bring to the table, and validate past project success in terms of both budget and on-time delivery. Inspect project plans to ensure they include full and rigorous testing, especially around security, full quality assurance, and performance optimization.

A project outsourced to an established applications platform provider with dedicated experts, like Edgio, will include rigorous testing and rollout plans, full quality assurance, and performance optimization—ensuring that your investment ultimately delivers peak efficiency for your customers and your business.

Knowledge sharing

Great professional services teams accumulate best practices over time and will bring complementary skill sets into the business they’re partnering with. Shared knowledge helps grow the skillset of your internal team, and enables them to contribute more meaningfully to the success of your business.

Employee satisfaction can even increase as team members experience personal and professional growth from learning new technologies, frameworks, or languages during major IT projects developed in partnership with external experts.

This aspect cannot be overlooked, given that 91% of employees report being frustrated with inadequate workplace technology and 71% consider looking for a new employer as a consequence. Expert teams have depth of knowledge across a breadth of tools, saving tremendous time and many headaches by creating efficient, automated workflows.

Ensure that your team gets the opportunity to work directly with your outsourced development team to facilitate knowledge sharing.

Faster deployment cadence

Companies integrating software development with IT operations are seeing increased productivity and 83% faster releases. We’ve personally seen deployment cadences double through the use of Edgio’s integrated workflow for web application deployment. 

Leveraging experts who start on day one with automated deployment and testing, standardized processes, and improved development and operations communication can bring releases to market faster. Enable your team to innovate more and wait for code less. 

To outsource or not to outsource?

Large projects can take a significant toll on an organization if they are not managed properly. To be effective and efficient, project teams need a common vision, shared team processes, and a high-performance culture.

If you’re asking yourself the following questions, consider hiring a team of experts: 

What architecture do we need to support a next-generation operating model?
How can we rapidly build, scale, and sustain a cutting-edge, customer-centric tech stack?
What technologies, frameworks, or API integrations provide a high-quality experience?
How do we create the most secure workflow for fast releases and updates?

At first glance, outsourcing can seem an expensive option. However, I advise businesses considering software development outsourcing to think long-term. The right team will minimize costs and bring more value by delivering a better product more quickly, with a more robust and flexible IT architecture, and will ultimately generate significant ROI.

Edgio accelerates your web development and application performance. Learn more about Edgio and our expert services.

IT Leadership

By Bryan Kirschner, Vice President, Strategy at DataStax

Imagine getting a recommendation for the perfect “rainy Sunday playlist” midway through your third Zoom meeting on Monday.

Or receiving a text about a like-for-like substitute for a product that was out of stock at your preferred e-commerce site 10 minutes after you’d already paid a premium for it on another.

Or arriving late for lunch with a long-time friend and being notified that “to have arrived earlier, you should have avoided the freeway.”

We all expect apps to be both “smart” and “fast.” We can probably all call to mind some that do both so well that they delight us. We can also probably agree that failures like those above are a recipe for brand damage and customer frustration—if not white-hot rage.

We’re at a critical juncture for how every organization calibrates its definition of “fast” and “smart” when it comes to apps, which brings significant implications for technology architecture.

It’s now critical to ensure that all of an enterprise’s real-time apps will be artificial-intelligence capable, while every AI app is capable of real-time learning.

“Fast enough” isn’t anymore

First: Meeting customer expectations for what “fast enough” means has already become table stakes. By 2018, for example, the BBC knew that for every additional second a web page took to load, 10% of users would leave, and the media company was already building its technical strategy and implementation accordingly. Today, Google considers load time so important to a positive experience that it factors into search rankings, making “the speed you need” a moving target that’s as much up to competitors as not.

The bar will keep rising, and your organization needs to embrace that.

Dumb apps = broken apps

Second: AI has gotten real, and we’re in the thick of competition to deploy use cases that create leverage or drive growth. Today’s winning chatbots satisfy customers. Today’s winning recommendation systems deliver revenue uplift. The steady march toward every app doing some data-driven work on behalf of the customer in the very moment that it matters most—whether that’s a spot-on “next best action” recommendation or a delivery time guarantee—isn’t going to stop.

Your organization needs to embrace the idea that a “dumb app” is synonymous with a “broken app.”

We can already see this pattern emerging: In a 2022 survey of more than 500 US organizations, 96% of those who currently have AI or ML in wide deployment expect all or most of their applications to be real-time within three years.

Beyond the batch job

The third point is less obvious—but no less important. There’s a key difference between applications that serve “smarts” in real time and those capable of “getting smarter” in real time. The former rely on batch processing to train machine learning models and generate features (measurable properties of a phenomenon). These apps accept some temporal gap between what’s happening in the moment and the data driving an app’s AI.

If you’re predicting the future position of tectonic plates or glaciers, a gap of even a few months might not matter. But what if you are predicting “time to curb?”

Uber doesn’t rely solely on what old data predicts traffic “ought to be” when you order a ride: it processes real-time traffic data to deliver bang-on promises you can count on. Netflix uses session data to customize the artwork you see in real time.

When the bits and atoms that drive your business are moving quickly, going beyond the batch job to make applications smarter becomes critical. And this is why yesterday’s AI and ML architectures won’t be fit for purpose tomorrow: The inevitable trend is for more things to move more quickly.

Instacart offers an example: the scope and scale of e-commerce and the digital interconnectedness of supply chains are creating a world in which predictions about item availability based on historical data can be unreliable. Today, Instacart apps can get smarter about real-time availability using a unique data asset: the previous 15 minutes of shopper activity.
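The difference between batch-trained features and features that update in real time can be sketched in a few lines of Python. This is only an illustrative toy (the event values and the "time to pickup" framing are invented): a feature such as a running average can be refreshed incrementally as each event arrives, rather than recomputed by a periodic batch job.

```python
class StreamingMean:
    """A feature that 'gets smarter' in real time: each new event
    updates the running mean in O(1), with no batch recompute."""

    def __init__(self):
        self.count = 0
        self.mean = 0.0

    def update(self, value):
        self.count += 1
        self.mean += (value - self.mean) / self.count
        return self.mean

# Hypothetical events: minutes from ride request to pickup.
events = [12.0, 8.0, 15.0, 5.0]

feature = StreamingMean()
for e in events:
    live_estimate = feature.update(e)  # fresh after every single event

batch_estimate = sum(events) / len(events)  # what a nightly job would compute
print(live_estimate, batch_estimate)
```

The two estimates end up identical, but the streaming version is current after every event, while the batch version is only as fresh as its last run; that staleness gap is exactly the "temporal gap" described above.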

‘I just wish this AI was a little dumber,’ said no one

Your organization needs to embrace the opportunity to bring true real-time AI to real-time applications.

Amazon founder Jeff Bezos famously said, “I very frequently get the question: ‘What’s going to change in the next 10 years?’ … I almost never get the question: ‘What’s not going to change in the next 10 years?’ And I submit to you that that second question is actually the more important of the two—because you can build a business strategy around the things that are stable in time.”

This sounds like a simple principle, but many companies fail to execute on it.

He articulated a clear north star: “It’s impossible to imagine a future 10 years from now where a customer comes up and says, ‘Jeff, I love Amazon; I just wish the prices were a little higher.’ ‘I love Amazon; I just wish you’d deliver a little more slowly.’ Impossible.”

What we know today is that it’s impossible to imagine a future a decade from now where any customer says, “I just wish the app was a little slower,” “I just wish the AI was a little dumber,” or “I just wish its data was a little staler.”

The tools to build for that future are ready and waiting for those with the conviction to act on this.

Learn how DataStax enables real-time AI.

About Bryan Kirschner:

Bryan is Vice President, Strategy at DataStax. For more than 20 years he has helped large organizations build and execute strategy when they are seeking new ways forward and a future materially different from their past. He specializes in removing fear, uncertainty, and doubt from strategic decision-making through empirical data and market sensing.

Artificial Intelligence, IT Leadership

Open standards are a critical consideration when evaluating data security platforms. Why should you care? A data security platform is an enterprise solution that will likely span your entire data ecosystem, touching and requiring integration with many different systems, unlike standalone or point solutions. When dealing with enterprise systems, standards matter.

What critical component of a data security platform universally ranks at the top of the list? Apache Ranger is the key component for enabling, monitoring, and managing data access in an open-source framework. It is widely used by over 3,000 organizations around the world, typically across Apache Hadoop, Hive, HBase, and other Apache components. Ranger is also used to manage access control for several of the top modern cloud-based data solutions. Well known and well used, Ranger is the de facto open standard.
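As an illustration of the kind of access control Ranger manages, a resource-based policy granting one user read access to a path might look roughly like the following. The service, path, and user names are hypothetical, and this is a simplified sketch of Ranger's policy model rather than a complete policy definition:

```json
{
  "service": "hadoopdev",
  "name": "sales-data-read",
  "resources": {
    "path": { "values": ["/data/sales"], "isRecursive": true }
  },
  "policyItems": [
    {
      "users": ["analyst1"],
      "accesses": [{ "type": "read", "isAllowed": true }]
    }
  ]
}
```

Policies like this are created and audited centrally, which is what lets one framework govern access across Hadoop, Hive, HBase, and other integrated systems.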

A unified data security platform delivers a fully supported SaaS solution that greatly extends the functionality, breadth of capabilities, and data source and ecosystem integrations of open-source Apache Ranger. There are five primary reasons why a unified data security platform based on Apache Ranger is important:

1. Apache Ranger is proven.
2. Tighter native integration.
3. Abundance of available skills.
4. Extensible.
5. Prevents vendor lock-in.

The necessity of these components becomes more critical as a company requires an increasingly enterprise-capable solution.

An enterprise solution needs to be built on an established framework, one used by thousands of organizations and proven across myriad conditions and requirements, demonstrating that it is highly performant and scalable. Performance and scalability are often minimized or overlooked because they are hard to demonstrate in a proof of concept, yet they deserve serious weight in any evaluation. A solution built on a strong, proven foundation greatly reduces the risk of downstream issues as you scale your solution up and out.

Tight native integrations are also important, both for ease of integration as well as to ensure performance. A unified data security platform built on the same open standards as the source system provides tight native integration.

Finding and training skilled data administrators and engineers is no small task. Having to find and train people on proprietary systems is even harder. It’s much easier when a solution is based on widely used open standards. And with open standards, it’s about more than just fungible and abundant resources: technical staff are more willing to invest in open-standards skills, since those skills will likely remain in greater demand than expertise in a proprietary system.

Open-standards solutions tend to be more extensible than proprietary solutions, since they are backed by a community of contributors who can build and share additional functionality and source connectors. These contributions can then be vetted, tested, integrated, and made available to customers faster and more easily.

Finally, open standards help avoid vendor lock-in, from the point of view of both source integration and your data security platform provider. From the data source point of view, it is much easier to migrate data security from one source vendor to another when they are based on the same open standard. This makes a lift and shift or near-lift and shift approach, with respect to your data security policies, much more realistic and attainable. And the same argument can be made relative to your data security platform provider.

As you compile all your requirements for your enterprise data security, data governance, and data access solution, make sure “open-standards based” is a primary element in your evaluation criteria.

Get Privacera’s Buyer’s Guide: Data Governance for the Digital Age, filled with valuable information to build a more powerful, effective, and resilient enterprise, including five critical steps to your unified data governance strategy. Get your guide here.

Data and Information Security

Digital transformations can go off the rails in the best of times, but the past two years have wreaked additional havoc since employees began working remotely.

Timing being what it is, though, with organizations hyperfocused on digitization, it’s more important than ever to address issues and fix problematic projects. Organizations can’t afford to fail at digital transformations, given that “we have now entered the era of the digital business, where transformation must be part of enterprise DNA,” according to IDC’s 2023 FutureScape: Worldwide CIO Agenda 2023 Predictions.

IDC defines digital businesses as dynamic enterprises that should continuously evolve their operating models and the digital platforms underpinning their operations. “In this new world, IT isn’t an organization — it’s the very fabric of the enterprise,” the IDC report observes. “CIOs will have to find new ways to govern IT as the tentacles of digital technology extend ever deeper into the enterprise and its ecosystems.”

Here are eight reasons digital transformations continue to fail.

Transforming on the fly

When the pandemic hit in March 2020, “people looked at the challenges and came up with in-the-moment solutions” to address them, says Michael Spires, principal and technology transformation lead at Hackett Group.

Digital Transformation, IT Leadership, IT Strategy

The meager supply and high salaries of data scientists have led many companies to a decision totally in keeping with artificial intelligence: automate whatever is possible. Case in point: machine learning. A Forrester study found that automated machine learning (AutoML) has been adopted by 61% of data and analytics decision makers in companies using AI, with another 25% of companies saying they’ll do so in the next year.

Automated machine learning (AutoML) automates repetitive and manual machine learning tasks. That’s no small thing, especially when data scientists and data analysts now spend a majority of their time cleaning, sourcing, and preparing data. AutoML allows them to outsource these tasks to machines to more quickly develop and deploy AI models. 

If your company is still hesitant to adopt AutoML, here are some very good reasons to deploy it sooner rather than later.

1. AutoML Super Empowers Data Scientists

AutoML transfers data to a training algorithm. It then searches for the best neural network for each desired use case. Results can be generated within 15 minutes instead of hours. Deep neural networks in particular are notoriously difficult for a non-expert to tune properly. AutoML automates the process of training a large selection of deep learning and other types of candidate models. 

With AutoML, data scientists can say goodbye to repetitive, tedious, time-consuming tasks. They can iterate faster and explore new approaches to what they’re modeling. The ease of use of AutoML allows more non-programmers and senior executives to get involved in conceiving and executing projects and experiments.

2. AutoML Can Have Big Financial Benefits

With automation comes acceleration. Acceleration can be monetized. 

Companies using AutoML have experienced increased revenue and savings from their use of the technology. A healthcare organization saved $2 million per year from reducing nursing hours and $10 million from reduced patient stays. A financial services firm saw revenue climb 1.5-4% by using AutoML to handle pricing optimization.

3. AutoML Improves AI Development Efforts

AutoML simplifies the process of choosing and optimizing the best algorithm for each machine learning model. The technology selects from a wide array of choices (e.g., decision trees, logistic regression, gradient-boosted trees) and automatically optimizes the model. It then feeds data to each training algorithm to help determine the optimal architecture. Automating ML modeling also reduces the risk of human error.
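As a rough illustration of what this automated search does under the hood, the toy Python sketch below enumerates candidate model families and hyperparameter grids and keeps the best-scoring combination. The model family, dataset, and grid here are invented for illustration; real AutoML systems search far richer spaces with far smarter strategies than exhaustive enumeration.

```python
from itertools import product

# Toy dataset: (feature, label) pairs; label is 1 when feature > 0.5.
data = [(0.1, 0), (0.3, 0), (0.45, 0), (0.55, 1), (0.7, 1), (0.9, 1)]

def threshold_model(threshold):
    """A trivial 'model family': predict 1 when the feature exceeds threshold."""
    return lambda x: 1 if x > threshold else 0

def accuracy(model, dataset):
    return sum(model(x) == y for x, y in dataset) / len(dataset)

def auto_select(candidate_families, dataset):
    """Try every (family, hyperparameter) combination, mimicking
    AutoML's automated model and hyperparameter selection."""
    best_score, best_config = -1.0, None
    for family, grid in candidate_families:
        for params in product(*grid.values()):
            config = dict(zip(grid.keys(), params))
            score = accuracy(family(**config), dataset)
            if score > best_score:
                best_score, best_config = score, (family.__name__, config)
    return best_config, best_score

candidates = [(threshold_model, {"threshold": [0.2, 0.5, 0.8]})]
config, score = auto_select(candidates, data)
print(config, score)
```

The point is not the toy model but the loop: the human specifies candidates and a metric, and the machinery finds the best configuration, the same division of labor AutoML applies to neural networks and gradient-boosted trees.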

One company reduced time-to-deployment of ML models by a factor of 10 over past projects. Others boosted lead scoring and prediction accuracy and reduced engineering time. Using ML models created with AutoML, customers have reduced customer churn, reduced inventory carryovers, improved email opening rates, and generated more revenue.

4. AutoML is Great at Many Use Cases

Use cases where AutoML excels include risk assessment in banking, financial services, and insurance; cybersecurity monitoring and testing; chatbot sentiment analysis; predictive analytics in marketing; content suggestions by entertainment firms; and inventory optimization in retail. AutoML is also being put to work in healthcare and research environments to analyze and develop actionable insights from large data sets.

AutoML is being used effectively to improve the accuracy and precision of fraud detection models. One large payments company improved the accuracy of their fraud detection model from 89% to 94.7% and created and deployed fraud models 6 times faster than before. Another company that connects retailers with manufacturers reduced false positive rates by 55% and sped up deployment of models from 3-4 weeks to 8 hours. 

A Booming Market for AutoML

The global AutoML market is booming, with revenue of $270 million in 2019 and predictions that the market will approach $15 billion by 2030, a CAGR of 44%. A report by P&S Intelligence summed up the primary areas of growth for the automation technology: “The major factors driving the market are the burgeoning requirement for efficient fraud detection solutions, soaring demand for personalized product recommendations, and increasing need for predictive lead scoring.”

Experts caution that AutoML is not going to replace data scientists any time soon. It is merely a powerful tool that accelerates their work and allows them to develop, test, and finetune their strategies. With AutoML, more people can participate in AI and ML projects, utilizing their understanding of their data and business and letting automation do much of the drudgery. 

The Easy Button

Whether you’re just getting started or you’ve been doing AI, ML and DL for some time, Dell Technologies can help you capitalize on the latest technological advances, making AI simpler, speeding time to insights with proven Validated Designs for AI.

Validated Designs for AI are jointly engineered and validated to make it quick and easy to deploy a hardware-software stack optimized to accelerate AI initiatives. These integrated solutions leverage automatic machine learning. NVIDIA AI Enterprise software can increase data scientist productivity, while VMware® vSphere with Tanzu simplifies IT operations. Customers report that Validated Designs enable 18–20% faster configuration and integration, save 12 employee hours a week with automated reconciliation feeds, and reduce support requirements by 25%.

Validated Designs for AI speed time to insight with automatic machine learning, MLOps and a comprehensive set of AI tools. Dell PowerScale storage improves AI model training accuracy with fast access to larger data sets, enabling AI at scale to drive real‑time, actionable responses. VxRail enables 44% faster deployment of new VMs, while Validated Designs enable 18x faster AI models.

You can confidently deploy an engineering‑tested AI solution backed by world‑class Dell Technologies Services and support for Dell Technologies and VMware solutions. Our worldwide Customer Solution Centers with AI Experience Zones enable you to leverage engineering expertise to test and optimize solutions for your environments. Our expert consulting services for AI help you plan, implement and optimize AI solutions, while more than 35,000 services experts can meet you where you are on your AI journey. 

AI for AI is here, making it easier and faster than ever to scale AI success. For more information, visit Dell Artificial Intelligence Solutions.  


Intel® Technologies Move Analytics Forward

Data analytics is the key to unlocking the most value you can extract from data across your organization. To create a productive, cost-effective analytics strategy that gets results, you need high performance hardware that’s optimized to work with the software you use.

Modern data analytics spans a range of technologies, from dedicated analytics platforms and databases to deep learning and artificial intelligence (AI). Just starting out with analytics? Ready to evolve your analytics strategy or improve your data quality? There’s always room to grow, and Intel is ready to help. With a deep ecosystem of analytics technologies and partners, Intel accelerates the efforts of data scientists, analysts, and developers in every industry. Find out more about Intel advanced analytics.

IT Leadership

It’s difficult to justify the need for enterprise composability when things are business as usual. Employees travel to the office. Contact center agents take calls. Businesses operate the same as they have for years or even decades. It was only when the unthinkable happened that organizations were forced to rethink everything they do and how they do it. Brands across every industry had to rapidly accelerate efforts to improve communication and collaboration to modernize operations, enhance customer engagement, and support a more mobile, digitally enabled workplace. Enterprise composability became essential nearly overnight.

A composable business model encompasses a mindset, technology, and processes that enable organizations to innovate and adapt quickly to meet ever-changing business needs. You can use what Gartner refers to as packaged business capabilities to deploy low-code services in minutes with no developer involvement (e.g., Avaya Virtual Agent), or build your own custom solution by adding onto what you already have (e.g., embedding chat into your company’s website to make it easier for customers to do business with you).

In Omdia’s June 2022 “Future of Work Survey,” 37% of companies cited a “lack of capable technology and IT infrastructure,” and 34% cited “employee resistance to adoption/change,” as the most significant barriers their organization faces to achieving successful outcomes from digital workplace investments. Organizations need to adopt a composable approach to maximize the return on their investments: one that does not force employees to change their workflow to fit the tools, but instead provides a platform that adapts to suit their workflow.

Here’s why the future of business will be driven by cloud-based, composable innovation…

Hybrid work is the new norm

Sixty percent of remote-capable employees recently polled by Gallup said they prefer a hybrid work model for the future. Several studies conducted by Avaya confirm this shift across multiple industries. Over 60% of banking employees told us they would like a hybrid work model and agree hybrid work is better for their well-being and happiness. Sixty-five percent of employees at media and entertainment companies agree. 

Companies looking to go hybrid must ensure purposeful and consistent employee experiences regardless of location (at home, in the office, and every possible combination in-between). A composable foundation allows them to build personalized communication experiences on a case-by-case basis, solving for hybrid-specific use cases that a single, generic cloud application simply can’t do.

Business communications are changing to deliver the total experience

The customer experience is about more than picking up the phone and having an ad-hoc interaction. We now live in an Experience Economy in which basic activities like eating, shopping, and driving are multifaceted and personalized thanks to AI, automation, and data analytics. We can order food through a smart mobile app and have it delivered straight to our door. We can communicate with an intelligent virtual assistant to book a hotel room or do our banking. Business experiences have evolved past what proprietary, on-premises communication systems can do. Composable innovation evolves beyond ad-hoc interaction to deliver complete, well-designed experiences using a platform approach and technologies from an ecosystem of specialized partners. You can experiment to deliver cutting-edge service experiences that add customer value and lock in loyalty.  

The employee experience is essential 

Organizations need to consider the total experience of their employees just as they do for their customers. The composable enterprise has everything they need at their fingertips to build personalized experiences for their workers by adding calling, SMS, MMS, video, and more to virtually any kind of app. For example, you can add calling to your sales app so your salespeople can follow up with leads directly while simultaneously viewing account info. Or you can add conferencing to your project management app to create more immersive team collaboration. The ability to compose personalized employee experiences is game-changing for productivity, retention, and brand perception. 

Enterprise composability is not a trend. It’s the bare minimum for competing in today’s Experience Economy. 

In addition to our award-winning Avaya OneCloud CPaaS, Avaya is unique in that we have Avaya Experience Builders. This community of development expertise gives companies all the resources and support they need to bring their composability ideas to life. Using Avaya OneCloud CPaaS and Avaya Experience Builders, companies have created AI-enhanced digital teaching and learning platforms, innovative telehealth solutions, virtual automotive showrooms, and more.

The future of business is enterprise composability, and Avaya is leading the way. View the infographic by Omdia Research about the market landscape for the composable enterprise. See how we can help you bring your ideas to life.

Digital Transformation

By Aaron Ploetz, Developer Advocate

There are many statistics that link business success to application speed and responsiveness. Google tells us that a one-second delay in mobile load times can impact mobile conversions by up to 20%. And a 0.1 second improvement in load times improved retail customer engagement by 5.2%, according to a study by Deloitte.

It’s not only the whims and expectations of consumers that drive the need for real-time or near real-time responsiveness. Think of a bank’s requirement to detect and flag suspicious activity in the fleeting moments before real financial damage can happen. Or an e-tailer providing locally relevant product promotions to drive sales in a store. Real-time data is what makes all of this possible.

Let’s face it – latency is a buzz kill. The time it takes for a database to receive a request, process the transaction, and return a response to an app can be a real detriment to an application’s success. Keeping latency at acceptable levels requires an underlying data architecture that can handle the demands of globally deployed real-time applications. The open source NoSQL database Apache Cassandra® has two defining characteristics that make it perfectly suited to meet these needs: it’s geographically distributed, and it can absorb spikes in traffic while maintaining high throughput and low latency.

Let’s explore what both of these mean to real-time applications and the businesses that build them.

Real-time data around the world

Even as the world has gotten smaller, exactly where your data lives still makes a difference in terms of speed and latency. When users reside in disparate geographies, supporting responsive, fast applications for all of them can be a challenge.

Say your data center is in Ireland, and you have data workloads and end users in India. Your data might pass through several routers to get to the database, and this can introduce significant latency into the time between when an application or user makes a request and the time it takes for the response to be sent back.

To reduce latency and deliver the best user experience, the data needs to be as close to the end user as possible. If your users are global, this means replicating data in the geographies where they reside.

Cassandra, built at Facebook in 2007, is designed as a distributed system for deploying large numbers of nodes across multiple data centers. Its architecture is robust and flexible enough that you can configure clusters (collections of Cassandra nodes, visualized as a ring) for optimal geographical distribution, for redundancy, for failover and disaster recovery, or even for a dedicated analytics center that’s replicated from your main data storage centers.
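As a sketch of what this looks like in practice, Cassandra’s NetworkTopologyStrategy lets a keyspace specify a replica count per data center. The keyspace and data center names below are illustrative:

```sql
-- Keep three replicas of every row in each data center,
-- so reads and writes can be served close to local users.
CREATE KEYSPACE user_activity
  WITH replication = {
    'class': 'NetworkTopologyStrategy',
    'eu_west': 3,
    'ap_south': 3
  };
```

A client near either data center can then use a data-center-local consistency level for its queries, avoiding a cross-continent round trip on every request.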

But even if your data is geographically distributed, you still need a database that’s designed for speed at scale.

The power of a fast, transactional database

NoSQL databases primarily evolved over the last decade as an alternative to single-instance relational database management systems (RDBMS) which had trouble keeping up with the throughput demands and sheer volume of web-scale internet traffic.

They solve scalability problems through a process known as horizontal scaling, where multiple server instances of the database are linked to each other to form a cluster.

Some NoSQL database products were also engineered with data center awareness, meaning the database is configured to logically group together certain instances to optimize the distribution of user data and workloads. Cassandra is both horizontally scalable and data-center aware. 
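The horizontal-scaling idea above can be sketched with a minimal consistent-hash ring in Python. The node names and virtual-node count are arbitrary, and Cassandra’s real partitioner is considerably more sophisticated; this only illustrates the principle of spreading keys across a cluster deterministically.

```python
import hashlib
from bisect import bisect

def stable_hash(key: str) -> int:
    # md5 gives a deterministic, well-spread hash across runs.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Maps each key to a node on a ring, so adding or removing a
    node only remaps a fraction of the keys."""

    def __init__(self, nodes, vnodes=8):
        # Place each node at several ring positions ("virtual nodes")
        # to smooth out the key distribution.
        self.ring = sorted(
            (stable_hash(f"{node}#{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self.points = [h for h, _ in self.ring]

    def node_for(self, key):
        # Walk clockwise to the first node at or after the key's hash.
        idx = bisect(self.points, stable_hash(key)) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["dc1-node1", "dc1-node2", "dc2-node1"])
owner = ring.node_for("user:42")  # the same key always maps to the same node
```

Because placement is a pure function of the key, any client can compute which node owns a piece of data without a central coordinator, which is what makes this style of clustering scale horizontally.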

Cassandra’s seamless and consistent ability to scale to hundreds of terabytes, along with its exceptional performance under heavy loads, has made it a key part of the data infrastructures of companies that operate real-time applications – the kind that are expected to be extremely responsive, regardless of the scale at which they’re operating. Think of the modern applications and workloads that have to be reliable, like online banking services, or those that operate at huge, distributed scale, such as airline booking systems or popular retail apps.

Logate, an enterprise software solution provider, chose Cassandra as the data store for the applications it builds for clients, including user authentication, authorization, and accounting platforms for the telecom industry.

“From a performance point of view, with Cassandra we can now achieve tens of thousands of transactions per second with a geo-redundant set-up, which was just not possible with our previous application technology stack,” said Logate CEO and CTO Predrag Biskupovic.

Or what about Netflix? When it launched its streaming service in 2007, it used an Oracle database in a single data center. As the number of users and devices (and data) grew rapidly, the limitations on scalability and the potential for failures became a serious threat to Netflix’s success. Cassandra, with its distributed architecture, was a natural choice, and by 2013, most of Netflix’s data was housed there. Netflix still uses Cassandra today, but not only for its scalability and rock-solid reliability. Its performance is key for the streaming media company: Cassandra runs 30 million operations per second on its most active single cluster, and 98% of the company’s streaming data is stored on Cassandra.

Cassandra has been shown to perform exceptionally well under heavy load. It can consistently show very fast throughput for writes per second on a basic commodity workstation. All of Cassandra’s desirable properties are maintained as more servers are added, without sacrificing performance.

Business decisions that need to be made in real time require high-performing data storage, wherever the principal users may be. Cassandra enables enterprises to ingest and act on that data in real time, at scale, around the world. If acting quickly on business data is where an organization needs to be, then Cassandra can help you get there.

Learn more about DataStax here.

About Aaron Ploetz:


Aaron has been a professional software developer since 1997 and has several years of experience working on and leading DevOps teams for startups and Fortune 50 enterprises.

IT Leadership, NoSQL Databases