Insurance or not, many organizations are transforming themselves with agile models. We spoke with a senior leader at an international insurance firm that is applying Agile approaches more often and across more projects. Here are some of the lessons we learned.

What challenges did you need to overcome to be successful?

As we looked to scale Agile across our organization, one of the biggest problems we experienced was that our tool wasn’t, well, agile. It was little more than a fancy-looking spreadsheet, and our staff spent their time battling with the tool rather than leveraging it to help the business. That just wasn’t sustainable.

In what ways did you address these issues?

Just like any other aspect of business, the ability to deliver work effectively using Agile requires a combination of the right information driving the ability to make sound decisions in a timely manner, and a tool that allows people to focus on doing their work rather than interacting with the tool. We needed to find a solution that could easily integrate with our other enterprise tools, and that could help us become more effective and efficient.

What was your end solution, and what impact did it have?

For us, Rally Software from Broadcom was the answer. We recently ran our first PI planning session using the tool and we cut the duration of the planning session by two hours. Multiply that across the number of people and the number of times we plan PIs and it becomes a material saving. And of course, that efficiency means staff time can be redirected into work that adds value to the business.

Rally integrates with our other tools — delivering information, consuming information, and generally improving workflow and automation. That means people have the information they need in a way that works for them, allowing them to focus on their tasks. We’re also planning to leverage Rally as a decision-making tool for the business — helping teams to prioritize and refine user stories and drive more improvements.

How is this driving your success?

We’re breaking down silos. With the ability to collaborate in a tool that actually helps us deliver, we are strengthening relationships between business and IT. That improves understanding and ultimately drives engagement in ensuring that the best possible solutions are delivered — so we can keep increasing customer and business value.

Conclusion

Through effective implementation of agile solutions such as Rally Software, teams can enhance innovation, optimally balance resources, and fuel dramatic improvements in delivery. Going agile is the first step toward more impactful Value Stream management — so what are you waiting for? If you find yourself in a similar business scenario and would like to learn best practices to unlock excellence with Agile analytics, be sure to download our eBook, “How To Interpret Data from Burnup / Burndown Charts.”


As a CIO, a lot of what you do is design stuff. That is, when you aren’t overseeing other people who design stuff, or making sure the stuff everyone’s designing fits together the way it should.

There are some universal rules that govern good design no matter what’s being designed. The most famous is probably the great architect Louis Sullivan’s dictum that form follows function. Less well-known, but just as important (at least for our context), is one introduced by W. Edwards Deming: To optimize the whole we must suboptimize the parts.

This matters no matter what’s being designed, whether it’s a gadget, software, an organization, or a process. And it’s the key to understanding why so many CIOs get optimization wrong.

From queue to queue: The hidden process bottleneck

If CIOs could make a living on a single trick, process optimization would likely be it. It’s vital to IT performing its own role well, and a lot of what IT does for a living is to help business managers optimize their processes, too.

Process optimizers inside and outside IT have a wealth of frameworks and methodologies at their disposal. Lean is among the most popular, so let’s use that to illustrate the point.

Perhaps the most important but least recognized contribution Lean thinking has made to the world of process optimization is that processes aren’t collections of tasks that flow from one box to the next box to the next.

Instead they’re tasks that flow from queue to queue to queue. The difference may seem subtle, but it’s one reason optimizing a whole delivers different results from optimizing the parts of a whole. This may sound like academic hoo-ha, or IT koan, but understanding this difference is key to mastering process optimization.

Hear me out.

Imagine you’re managing a project that needs a new server to proceed, assuming for the moment IT hasn’t gone full cloud and still owns servers and a data center. You follow procedure and submit a request to the IT request queue.

Oversimplifying a bit, the box-to-box view of what follows would look something like the figure below:

[Figure: the box-to-box view of the server request process. IDG / Bob Lewis]

It’s a straightforward flow. The teams responsible for each step long ago optimized the procedures for addressing their responsibilities. In this view the total effort and the process cycle time are one and the same — for this hypothetical example, figure about eight hours, or one day on the project schedule.

But the box-to-box view of the process is wrong. The actual process looks more like the following figure:

[Figure: the same process viewed as a series of FIFO queues. IDG / Bob Lewis]

Each step in the process is managed as a first in, first out (FIFO) queue. Teams work on a request only when it has flowed through the queue and popped out for processing. The total effort is the same as estimated in the box-to-box view. But the cycle time includes both work time and time in queue — for this modeled process, five days more or less.

The actual analysis is more complicated than this. Usually, one step ends up being a bottleneck; work stacks up in its queue while other queues run dry, counterbalanced by all queues receiving requests from more than one source. But that doesn’t change the principle, only the complexity of the simulation.
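
To see how the arithmetic plays out, here is a minimal Python sketch. The step names, work hours, and queue waits are illustrative assumptions chosen to echo the rough figures above (about eight hours of effort, roughly five days of cycle time), not measurements from any real process.

```python
# Minimal sketch: box-to-box vs. queue-to-queue view of one server request.
# Step names, work hours, and queue waits are illustrative assumptions only.

HOURS_PER_DAY = 8

steps = [
    # (step, hours of actual work, days the request waits in that team's FIFO queue)
    ("procurement",       2.0, 1.0),
    ("network admin",     1.5, 1.0),
    ("software install",  2.0, 1.0),
    ("quality assurance", 1.5, 0.5),
    ("deployment",        1.0, 0.5),
]

# Box-to-box view: cycle time is just the work, laid end to end.
effort_hours = sum(work for _, work, _ in steps)
box_to_box_days = effort_hours / HOURS_PER_DAY

# Queue-to-queue view: the same work, plus the time spent waiting in each queue.
wait_days = sum(wait for _, _, wait in steps)
queue_to_queue_days = box_to_box_days + wait_days

print(f"Total effort:              {effort_hours:.1f} hours")
print(f"Box-to-box cycle time:     {box_to_box_days:.1f} days")
print(f"Queue-to-queue cycle time: {queue_to_queue_days:.1f} days")
```

Each box is individually efficient (only about eight hours of touch time in total), yet the request spends most of a work week sitting in queues. That gap is the difference between optimizing the parts and optimizing the whole.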

This is real, not just theory. Not that many years ago a client, whose queue sizes were quite a bit longer than what’s depicted above, experienced multi-month project delays as their teams waited for the installation of approved servers they were depending on, even though a typical server required no more effort to acquire, configure, and install than what’s depicted above.

The root cause? The managers responsible for procurement, network administration, software installation, quality assurance, and deployment had all organized their departments’ work to maximize staff utilization and throughput.

They — the parts — had optimized themselves at the expense of each project’s whole.

Eliminating externalities

The solution, which DevOps devotees will immediately recognize and embrace, was to include IT infrastructure analysts on the core project team, and, even more important, to include infrastructure tasks such as setting up servers in each project’s work plan, assigning start dates and due dates based on when their work products would be needed.

With this change, server builds became part of the project schedule instead of being externalities over which the project manager had no control.

In exchange, the CIO had to accept that if projects were to deliver their results on time and within their budgets, the rest of the IT organization would have to allow some slack in their work management. Staff utilization targets wouldn’t and shouldn’t even approach 100%. (Pro tip: Invest some time researching Eliyahu Goldratt’s Critical Chain project management methodology for a more in-depth understanding of this point.)

The MBO meltdown

The optimization / suboptimization issue applies to much more than process design. Take, for example, management compensation.

Back in the day, Management by Objectives (MBO) was a popular theory of how to get the most out of the organization by getting the most out of every manager in the organization. Its fatal flaw was also a failure to recognize the inevitable but unintended consequences of optimizing the parts at the expense of the whole.

The way it worked — failed to work is a better way of saying it — was that, as the name implies, the company’s executives assigned each manager one or more objectives. Managers, given the improved clarity about what they were supposed to accomplish, set about accomplishing it with monomaniacal fervor, unimpeded by the distractions of what any other manager in the organization needed to accomplish their own objectives.

Modern organizations that suffer from what their inhabitants call “silo thinking,” with its attendant inability to collaborate, are vestiges of the MBO era.

Helplessly helping the help desk

As someone once said — or really as just about every manager has said whenever the subject comes up — there are no perfect org charts. Deming’s optimization / suboptimization principle is a key contributor to org chart imperfections.

Take the classic help desk and its position within IT’s organizational design. It has service-level targets for the delay between the first end-user contact and the help desk’s initial response, as well as for the time needed to resolve the end-user’s issue. Somewhere in there is also a goal of minimizing the cost per incident.

Figure that handling every reported incident includes time spent logging it, and either time spent trying to resolve it or time spent getting rid of it by handing it off to a different IT team.

The easiest way for the help desk to meet its initial response service level is to do as little as possible during the initial response, handing off every incident as fast as possible. This keeps help desk analysts free to answer the next call, and from getting bogged down trying to resolve problems they aren’t equipped to handle. Better yet, by directing problems to departments with more expertise, incidents will be resolved faster than if help desk analysts tried to solve them on their own.

Sadly, this approach also ensures help desk analysts never learn how to handle similar problems in the future. And while it also keeps the help desk’s costs down, it does so at the expense of distracting higher-priced talent from their current set of priorities, which, from the perspective of overall value, are probably more important.

Optimizing the help desk ends up as an exercise in unconstrained cost and responsibility shifting. The total cost of incident management increases in proportion to how much the help desk’s own costs decrease.

To optimize the whole, you have to suboptimize the parts. This guidance might not sound concrete and pragmatic, but don’t let its esoteric overtones put you off. If you want the best results, make sure everyone involved in delivering those results knows what they’re supposed to be.

Also make sure that nobody will be penalized for collaborating to make them happen.


Integrating a new network after an acquisition can be a sizable headache for any CIO. But for Koch Industries, a $125 billion global conglomerate that has acquired five companies in two years, including Infor for $13 billion in 2020, connecting those acquisitions’ networks to its own sprawling network has been a challenge of another magnitude.

Traditionally, to integrate its acquisitions, Koch would flatten the acquired company’s core network, says Matt Hoag, CTO of business solutions at Koch. While this approach makes connecting the network easier, it is a slow, arduous endeavor that gets more complex as more companies are acquired, he says.

Moreover, Koch itself is in the middle of a digital transformation that adds cloud networking to the mix, further complicating the challenge. Cloud networking comprises three layers: first from on-premises data centers to the cloud, then within a cloud that has multiple accounts or virtual private clouds, and finally, between individual clouds in a multicloud environment. It’s more complicated than standard networking, Hoag says.

“Cloud deployments typically come in the form of multiple accounts, including multiple LAN segments that need to be connected. This encompasses not only VMs but also many other services offered by the cloud provider,” he says.

The major tasks involved range from deploying core IP routing, to enabling connections among virtual workloads within a multitenant cloud, to connecting multiple clouds, to ensuring remote users can connect to the company’s cloud estate. It’s the kind of challenge few, if any, enterprises can take on without a partner today, analysts contend.

Laying the foundation

Koch Industries began its migration to Amazon Web Services in 2015, when it also started on the first layer of its cloud networking strategy.


Leased lines and direct connects would remain in the data center as part of this strategy, but Hoag did not want to route users through the data center to access data on the cloud. Instead, Koch’s engineering team set about virtualizing the physical transports to build the SD-WAN and firewall within the cloud rather than in the data center.

The company invested a hefty amount of time — roughly 18 months — and engineering resources just to bring on-premises networking to the cloud. “It was more of a challenge than I thought it was going to be in the early days,” Hoag says.

For the remaining two layers of Koch’s cloud network infrastructure, Hoag partnered with a specialist.

IDC analyst Brad Casemore notes that there are several multicloud networking suppliers, including Aviatrix, Alkira, F5 Networks, and Prosimo, as well as established datacenter SDN suppliers such as VMware, Cisco, and Juniper. Co-location providers that offer interconnection-oriented architectures — such as Equinix, Digital Realty, and CoreSite — partner with many of these suppliers.

Hoag brought in Alkira to help tackle the challenge.

The CTO recalls an ‘aha’ moment he had one afternoon in a conference room in Reno, Nev., while building out one portion of a transport construct with Alkira: Using a third-party platform to handle the abstraction of networking into a software service would vastly reduce the complexity for his own IT team.

Alkira’s network segmentation and resource sharing approach would enable Koch to unify its on-premises and multicloud networks with a few clicks of the mouse, Hoag says. So his team set about migrating the first layer of cloud networking it built from scratch to work within Alkira’s platform.

“Prior to Alkira, anytime we acquired a new company, it would take 12 to 24 months to integrate their network due to the massive complexity,” Hoag says. “Now, we can set policy and have the entire network abide within 24 hours.”

Modernizing the network

Hybrid and multicloud networking, such as Koch’s, represents the next level of cloud maturity, says IDC’s Casemore, who adds that it’s a category in which most enterprises are woefully behind.

“While compute and storage infrastructure have largely aligned with cloud principles and the needs of multicloud environments,” Casemore says, “the network has not kept pace.”

For Casemore, network modernization is indispensable to multicloud success: “Enterprises often are not fully cognizant of their networks’ multicloud deficiencies and limitations until they experience them firsthand. By then, the network’s inability to accommodate new requirements has often compromised the realization of an organization’s digital business strategy,” he says.

Here, Hoag says, partnering can be beneficial, as third-party specialists such as Alkira have a deep understanding of cloud providers’ obscure but significant technical differences. Working with a partner can also vastly reduce maintenance and troubleshooting, Hoag says, adding that to date Koch is enjoying similar data transfer speeds in all three layers of its cloud networking architecture.

Koch’s partnership with Alkira has also enabled the CTO to build up his team’s cloud networking skills.

“There is a talent war going on,” Hoag says. “This helps us move our team up the talent chain so they can focus on working with applications teams in the company and produce much better business outcomes.”

Enterprise Management Associates analyst Shamus McGillicuddy agrees that most enterprise CIOs will need to tap a specialist to achieve seamless cloud networking — one of the final phases of their digital infrastructure.

“Building a network across multiple cloud providers and one or more private data centers is too complex because network and security teams have to use different tools depending on which cloud or data center they’re working with,” McGillicuddy says. “This solution is an overlay that removes this complexity.”

By abstracting the various networking and security features different cloud providers offer, enterprises “can manage everything from one place, with one set of design parameters, one set of network and security policies, and one console for operational monitoring and management,” he says.

One day, setting up cloud networking may be as easy as using a credit card to set up a cloud instance, Hoag says. But not now. “When you start to have the kind of user needs to potentially have connectivity between different clouds, that’s more difficult,” the CTO says.


By Chet Kapoor, Chairman and CEO, DataStax

There is no doubt that this decade will see more data produced than ever before.

But what’s truly going to transform our lives, define the trajectory of each of our organizations, and reshape industries is not the massive volume of data. It’s the unmatched degree to which this data can now be activated in applications that drive action in real time: minute by minute (or even second by second), across work, play, and commerce. Where technology might have been a constraint in the past, it’s now an enabler.

Here, we’ll take a look at why real-time apps are no longer just the domain of internet giants and discuss three ways that your organization can move toward delivering real-time data.

The future is here

IDC predicts that by next year there will be more than 500 million new cloud native digital apps and services – more than the total created over the past 40 years.

We’re already living in this future. We get turn-by-turn driving directions while listening to an AI-recommended playlist, and then arrive at the exact time our e-commerce order is brought to us curbside – along with a cup of hot coffee.

The real-time data powering apps that change industries is no longer just offered by a Google or a Spotify.

Companies like Target excel at it. The retailer delights customers with an app that shows users what they most want to see, ensures no one ever misses a deal, has a near-perfect record of intelligent substitutions for out-of-stock items, and gets users their orders on their terms (and it might just include a drink from Starbucks, another enterprise that is a real-time app powerhouse).

Smaller businesses are making real-time data core to their offerings, too. Ryzeo offers a marketing platform that leverages real-time data generated by events on its clients’ e-commerce websites. An item that a shopper views or searches for instantly results in an AI-driven recommendation through its “suggested items.” Real-time data – and the technology that supports it – is how Ryzeo makes this happen.
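
The mechanics behind that kind of experience are worth sketching. Below is a deliberately simplified, hypothetical illustration in Python of the event-to-recommendation loop; Ryzeo’s actual pipeline and models are not public, so a simple in-memory co-view count stands in for the streaming platform and the AI model.

```python
# Hypothetical sketch of an event-driven "suggested items" flow.
# A real system would use a streaming platform and a trained model;
# here, in-memory co-view counts stand in for both.

from collections import defaultdict

# item -> {other item -> number of sessions in which both were viewed}
co_views = defaultdict(lambda: defaultdict(int))

def record_session(viewed_items):
    """Update co-occurrence counts from one shopper session."""
    for a in viewed_items:
        for b in viewed_items:
            if a != b:
                co_views[a][b] += 1

def on_view_event(item_id, top_n=3):
    """Called the moment a shopper views an item; returns suggested items."""
    ranked = sorted(co_views[item_id].items(), key=lambda kv: kv[1], reverse=True)
    return [item for item, _ in ranked[:top_n]]

# Seed with a little history, then handle a live view event.
record_session(["espresso-maker", "grinder", "tamper"])
record_session(["espresso-maker", "grinder", "milk-frother"])

print(on_view_event("espresso-maker"))  # ['grinder', 'tamper', 'milk-frother']
```

The point is not the algorithm, which here is trivially simple, but the shape of the loop: every customer event immediately updates and queries the data that drives the next interaction.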

Inaction isn’t an option

The door is open to you and your organization, too.

The best-of-breed technologies that power winning real-time apps are open source and available as a service, on demand to all. There are tons of proven use cases across industries. When you leverage these use cases and technologies, there’s a big payoff – you increase your organization’s ability to innovate and turn data into delightful customer experiences.

This will not only transform how your business grows, but how your business works.

As consumers, we never want to go back to dumb apps that evolve slowly, don’t know our context, and fail to act intelligently on our behalf. In fact, we desire the opposite.

When you put the customer’s digital experience at the center of agile workflows, make fast decisions, and rapidly iterate, you create a powerful feedback loop. Every win shows the power of a new and more fulfilling way of working. So does every failure – by providing valuable learnings.

The one thing you can count on is that inaction is not an option. And at this moment in time, why would we want to wait?

There is no doubt that real-time data can reduce waste, increase safety, help the environment, and make people happier and healthier. And we’re only just getting started.

So how do you get started? You can make three important choices right now to set your organization on a path to excel at delivering real-time data.

Step 1: Pick up the right tools

The technology to deliver outstanding, data-powered, real-time experiences has arrived – and we’ve got it in spades. The best-of-breed tools are open source. They grew out of the “best of the internet” to solve novel problems of scale and data velocity. Apache Cassandra®, for example, was developed at Facebook to manage massive amounts of messaging data.
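
To make the low barrier to entry concrete, here is a minimal sketch using the open-source Python driver for Cassandra to record and read back user events. The contact point, keyspace, and table are illustrative assumptions for a local test node, not a recommended production schema.

```python
# Minimal sketch: storing and reading real-time user events in Apache Cassandra.
# Assumes a local test node at 127.0.0.1; keyspace and table are illustrative only.

from datetime import datetime, timezone

from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(["127.0.0.1"])
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("demo")

session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        user_id    text,
        event_time timestamp,
        event_type text,
        item_id    text,
        PRIMARY KEY (user_id, event_time)
    ) WITH CLUSTERING ORDER BY (event_time DESC)
""")

# Write one event with a prepared statement.
insert = session.prepare(
    "INSERT INTO events (user_id, event_time, event_type, item_id) VALUES (?, ?, ?, ?)"
)
session.execute(insert, ("user-42", datetime.now(timezone.utc), "view", "sku-123"))

# Read back the most recent events for that user.
rows = session.execute(
    "SELECT event_time, event_type, item_id FROM events WHERE user_id = %s LIMIT 10",
    ("user-42",),
)
for row in rows:
    print(row.event_time, row.event_type, row.item_id)

cluster.shutdown()
```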

Joining the open source ecosystem means you don’t have to reinvent the wheel. This is important because what sets your organization’s real-time data experiences apart won’t be the infrastructure. It’ll be how you put your domain knowledge to use in new ways that delight your users.

Most of these technologies are available on demand as-a-service to everyone. If you didn’t add them to your data infrastructure yesterday, do it today.

Step 2: Assemble the right teams

When every company is a software company, every executive must also be a software executive. This includes your line of business owners, general managers, and functional leaders.

Winning companies reorganize team structures and accountability to match. The days of data scientists experimenting alone in an ivory tower and developers working under requirements that were “thrown over the wall” to IT are over. “The business” can no longer think of data and technology as “IT’s problem.”

All of your employees need to be trained to identify and capitalize on opportunities for using data and technology to drive business results. Your line of business owners must be held accountable for making it happen.

To empower them, assign your developers, data scientists, and technical product managers to cross-functional teams working side-by-side with the business domain colleagues who own customer experiences. This is a ticket out of “pilot purgatory” and a key to democratizing innovation across your company.

Step 3: Ask the right questions

As you advance on your journey, more and more smart systems will be working every minute of every day to answer your industry’s key questions, like “what’s the most compelling personalized offer for this customer?” or “what’s the optimal inventory for each store location?”

What those systems can’t do is ask questions that only humans can, such as “how do we want to evolve our relationship with our customers?” Or “how can we deploy our digital capabilities in ways that differentiate us from our competitors?”

No algorithm is going to kick out the brilliant and empathetic idea to “Show Us Your Tarzhay,” which turned what might have otherwise been the unfortunate necessity of having to shop on a limited budget into the opportunity to celebrate and share a distinctive personal style. Similarly, it took human creativity to expand the concept from clothing into a new category (groceries).

If you take the first two steps listed above, you will start to free up your people’s time to ask creative questions and improve their ability to deliver on the answers using best-of-breed technology. Equip, challenge, and inspire them to think big about where you want to take your customers next, and you’ll get your organization moving in the right direction to provide the benefits of real-time data to your customers.

Learn more about DataStax here.

About Chet Kapoor:

Chet is Chairman and CEO of DataStax. He is a proven leader and innovator in the tech industry with more than 20 years in leadership at innovative software and cloud companies, including Google, IBM, BEA Systems, WebMethods, and NeXT. As Chairman and CEO of Apigee, he led company-wide initiatives to build Apigee into a leading technology provider for digital business. Google (Apigee) is the cross-cloud API management platform that operates in a multi- and hybrid-cloud world. Chet successfully took Apigee public before the company was acquired by Google in 2016. Chet earned his B.S. in engineering from Arizona State University.
