Developers are hired for their coding skills, but often spend too much time on information-finding, setup tasks, and manual processes. To combat wasted time and effort, Discover® Financial Services championed a few initiatives to help developers get back to what they do best: developing. The result? More than 100,000 hours of developer toil have been automated or eliminated.

“A happy developer is one who’s writing code,” said Joe Mills, Director of Transformation Strategy and Automation at Discover. “So, we strive to create an inspiring culture and an exciting place to build your career. We want it to be easy to deliver value with the skillsets you have and to harness opportunities to refine your craft.”

Streamlining development through tools, knowledge, and community

DevWorx is a program that simplifies the developer experience, streamlines work, and frees up time to innovate. Specifically, DevWorx is an online hub where developers across Discover can access prescriptive guidance for repetitive setup or deployment tasks, developer environments, self-service or automation tools, and a community of other developers to collaborate with.

“It’s basically a developer-driven community where we remove barriers to getting work done, focus on efficiency, and really enjoy coding as opposed to it feeling like a slog,” said Jonathan Stoyko, senior manager of strategic projects.

Developers can use DevWorx to standardize duplicate processes and reduce manual tasks. “If there’s a code structure that has to be reused every time you’re creating an application, that structure can be standardized as a template,” said Stoyko. “And we can store it in a common location so everybody has access to it and can contribute to it.”

Increasing productivity with step-by-step tutorials

Golden Paths are a key element of DevWorx, providing step-by-step tutorials for accomplishing specific development tasks within Discover. From making submissions and gathering approvals to filling out prerequisite forms, Golden Paths cover the entire production lifecycle.

“If someone gets dropped into a new team, they can start coding within minutes and skip months of playing catchup,” said Andrew Duckett, senior principal application engineer and architect. “With Golden Paths, these processes are all codified and readily accessible.”

Developers are encouraged to contribute to existing paths and build new ones based on their own experiences.

Duckett continues: “We believe that it’s better to let the engineering community determine what works best for them, not to put a bunch of people in an ivory tower and dictate what is right. These developers are hired to innovate and solve problems, so we let them do that.”

Reducing manual tasks through automation

Automating manual tasks and repetitive processes is crucial for increasing developer efficiency. “Employing automation for tasks that many engineers face throughout their SDLC helps to shift focus towards human value-add activities. This also increases overall delivery throughput, with higher confidence in our development lifecycle, and produces consistent processes across teams that would otherwise be handled one-off and uniquely,” said Mills.

Developers can engage a team of automation experts to assess certain processes and tasks and help uncover automation opportunities. The team uses a hub-and-spoke model to scale their efforts across development teams at Discover and can help teams with robotic process automation, business automation, or code automation.

Reducing friction through consistent development practices

In addition to these initiatives, engineers at Discover adhere to a set of practices, internally called CraftWorx, that define and direct the agile development process. Aligning on these practices reduces friction because every engineer and developer follows the same development approach.

“If you’re trying to solve a problem and you think, ‘where’s the answer?’ CraftWorx aims to be that answer,” said Colin Petford, director of technology capability enablement at Discover. “It’s also constantly evolving along with our craft. It will never be finished because technology doesn’t sit still.”

Learn how Discover developers are using automation, Golden Paths, CraftWorx, and more.


Nothing lasts forever in IT, and that includes your organizational structure.

Deciding on whether to scrap or keep existing infrastructure of any stripe isn’t easy. A complete rebuild can be disruptive, time-consuming, and risky. And if the initiative misses its goal, or runs over budget, the CIO’s job may be at stake.

Yet, as any IT leader knows, when technical infrastructure fails to meet enterprise needs, hampering productivity and innovation, it’s often time to rebuild from scratch. The same can be said for how IT operations, workflows, and teams are structured. Knowing when it’s time for a wholesale reorg requires even more from an IT leader than knowing when the bits and bytes have worn out their shelf life.

Has your organization’s IT structure outlasted its usefulness? To find out, check out the following danger signals.

1. Past incremental restructuring attempts have failed

Updates and improvised rearrangements can keep an aging organizational structure tottering along for a while, but the repair bucket eventually runs dry. At that point, the way work is done, including who does what and with whom, starts damaging workflows, decision-making, collaboration, customer service, and other critical processes.

When reorganizing any infrastructure, it’s important to understand what will be essential to retain. The same can be said for IT operations itself. “The goal isn’t to restructure with nothing remaining from what was valuable and meaningful,” says Andrew Sinclair, a managing director in consulting firm Accenture’s technology strategy and advisory practice. “Change requires something that is stable and durable.”

Restructuring can create uncertainty and stress in an organization, and it shouldn’t be used lightly or regularly, Sinclair explains. Yet a radical change is sometimes necessary. “Consider new ways of structuring how work is done, breaking down existing functional and organizational silos so teams can be combined with the full set of skills to be successful, while also reducing dependencies that can slow work.”

2. You’re fixing problems instead of delivering results

A department that lives in conflict typically struggles to efficiently execute strategies and tasks, observes Eric Lefebvre, CTO at Sovos, a tax compliance and regulatory reporting software provider. He notes that IT organizations can usually work through almost any problem as long as the parties grow together. “But if the design or role clarity is a constant source of friction, the results will inevitably be suboptimal.”

The restructuring plan’s first step should be assessing the existing environment as well as the ideal end state. “A solid understanding of the environment informs the structure and enables drilling into the next level of detail,” Lefebvre explains.

As the restructuring strategy takes shape, Lefebvre advises coordinating plans and decisions with the enterprise’s human resources leader to ensure that compliance and other important mandates will be met. “External peers in your network that have performed similar [restructuring] efforts are also a great resource for information on approaches and pitfalls to avoid,” he adds.

3. There’s been a major enterprise shakeup

Whenever a significant enterprise change occurs, such as a merger, major acquisition, or radical new business direction, the IT organization may have to be rebuilt to accommodate the new reality.

An important first step, once the decision to restructure IT has been made, is to be open and transparent with team members about the current situation, says Dena Campbell, CIO at Vaco, a global consulting firm. “Employees will want to know what it means for them and their roles,” she explains.

Campbell suggests that IT should work closely with HR to develop a comprehensive communication plan that ensures all parties fully understand what’s happening. “If you’re communicating with employees, any frustration or anxiety will be mitigated,” she says.

Establishing a realistic transition timeline is also necessary. The best way to set a timeline is to understand IT’s current position and then identify the restructuring goals, Campbell says. It’s also important to understand, and factor into the plan, that productivity and efficiency will dip when there’s so much sudden change. “Everyone will need time to adjust,” she advises. “Be realistic about setting a timeline and include time for the disruption itself, since it can take a while for the dust to settle and to get buy-in for the new plan going forward.”

4. An unhappy IT team

An obvious — and ominous — sign of org structure failure is when IT team leaders and members begin complaining about their tasks. “That’s something C-suite executives need to listen to,” says Tom Kirkham, CEO and CISO of cybersecurity firm IronTech Security.

The CIO in particular should know how to respond to internal strife. “A company culture that’s toxic will render less productivity and subpar outcomes, which will ultimately compromise the bottom line,” Kirkham warns. “A good executive, one who practices servant leadership, knows that, and how to respond to internal strife quickly and deliberately.”

As the restructuring planning begins, all stakeholders should be given the opportunity to voice their concerns and needs equally. “This is the only way to establish an IT culture that doesn’t erode from within,” Kirkham says. The restructuring should also establish priorities that contribute to the enterprise’s overall well-being, including its internal and external security.

5. Essential tasks are forever stuck in neutral

IT has fallen into a rut. Critical attributes, such as innovation, initiative, and transformation, are either absent or rarely seen. Decisions are made slowly, reluctantly, and infrequently. Meetings may be held to discuss critical issues, but end without resolution.

Meanwhile, necessary changes remain in limbo, as previous decisions are questioned during the execution phase. “These [signs] often signal there’s confusion as to who the key stakeholders are, where authority lies, or [there’s] a mismatch between organizational structure and how work is intended to flow,” says Ola Chowning, a partner with global technology research and advisory firm ISG.

“Delays or erratic workflows may be the result of organizational confusion,” Chowning observes. She notes that confusion is usually caused by a disconnect between the organizational structure and the operating model, and typically manifests over time. “This may be due to a new way of operating — such as a move to agile or product-oriented delivery, the distribution or centralization of major functions, or the influx and/or outflux of people,” Chowning says.

Full restructuring is a drastic move. Chowning believes that it’s a decision never to be made lightly. “Departments should make sure a reorganization is being done for a specific reason or need, and not as a knee-jerk reaction when a key leader exits, when a new CIO enters, or because it hasn’t been done in a while,” she explains. “Reorganization should signal to the entire department that you are expecting changes to outcomes and ways of working.”

Creating the new operating model will require a significant amount of time. “My experience has been anywhere from five to eight weeks for the complete design,” Chowning says. “Placing names in frames and selecting leaders, if that’s required, would follow, and that timeline is most often dependent on the HR practices of the enterprise.”

6. IT has a lousy internal reputation

The most important sign that something needs to change is when C-suite colleagues begin harboring a negative perception of the IT department, says Ben Grinnell, managing director at business and IT consulting firm North Highland. “Common perception issues include when IT is viewed as a cost center by the CFO; when IT is the last place the business turns to for help with digital innovation of its products and services; and when IT has more roles that don’t work directly with the business than those that do.”

To counter negative perceptions, the CIO should consider reconfiguring IT into a more flexible structure. “The organization’s efforts should be outward facing, with the goal of changing the perception of IT,” Grinnell explains. He advises CIOs to tap into discussions about how IT can drive revenue and increase margins through innovation, and what investments will be needed to enable change.

IT is an ecosystem, Grinnell states. “Any restructuring needs to include the entire workforce, including the employees, consultants, contractors, system integrators, and outsourced elements.”

Grinnell believes that IT restructuring should never be treated as just another project. “It should be viewed as an always-on transformation, not a project that will one day be finished,” he explains. “That’s an unrealistic goal that sets the team up for failure.”


The effects of such an unpredictable environment are profound, and no organization in any industry is immune. Looking across our client base, we expect to see varying degrees of impact as the turbulence continues. The common thread? In almost every case, there’s an increased need for data insight and technology-enabled agility to reaffirm technology’s position at the center of investment strategy in order to achieve organizational growth.

So when it comes to securing funding and resources from the board, is the CIO put in the box seat if technology is at the center of investment strategy? Not necessarily. While investing in technology is key—and becoming more so—this doesn’t mean that CIO budgets won’t come under pressure, both for capital spend as well as for operations and maintenance (O&M). That’s why forward-thinking CIOs are taking action today to strengthen their position. And no matter the industry, we believe there are four smart moves that any CIO can make now to help them weather any economic storm.

1. Optimize cloud spend

It’s a good time for CIOs to conduct a financial health check on their technology budget. This includes running a benchmarking spend analysis on all categories relative to industry peers, as well as leading technology companies. Then, identify opportunities to reduce run costs and free up funds to invest in transformation and new technology capabilities. Specifically, look at your organization’s newer areas of technology spend, especially since the last economic downturn. What’s the biggest change you’ll find? Almost invariably, spending on cloud has leapt from low or even non-existent to high. However, in many cases, that money could be spent more effectively; we often see clients using cloud in a capital-intensive way that mimics how they used to use datacenters. Remember, you don’t own cloud servers, you just “rent” them. So your usage and costs should be elastic, expanding and contracting with workload. That’s a core benefit of cloud.

That’s why one of the first moves to consider is optimizing your cloud spend. An easy example? Shut down the testing environment when you’re not using it. And consider different types of storage for different classes of data: highly-available and responsive storage for transactional data, and higher-latency and lower-cost for data not needed immediately. You should also scrutinize the bills from your cloud providers. These are often extremely complicated, running into millions or hundreds of millions of line items. FinOps for cloud can help track and optimize this spending while reaping major benefits on top. For instance, a robust FinOps capability can prevent spend commitment mistakes, and help you switch from a “lift-and-shift” approach founded on a datacenter mentality to a true cloud-centric model that realizes cloud’s full potential.
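The kind of bill analysis described above can be sketched in a few lines. The sketch below is a hypothetical illustration, not any cloud provider's actual billing schema: it aggregates raw line items by service and by environment to surface the biggest spend buckets, which is the first step a FinOps practice automates.

```python
from collections import defaultdict

# Illustrative billing line items; a real provider bill has far more fields
# and can run to millions of rows.
line_items = [
    {"service": "compute", "env": "test", "cost": 310.0},
    {"service": "compute", "env": "prod", "cost": 940.0},
    {"service": "storage", "env": "prod", "cost": 120.0},
    {"service": "compute", "env": "test", "cost": 280.0},
]

def spend_by_key(items, key):
    """Aggregate cost per value of `key`, largest bucket first."""
    totals = defaultdict(float)
    for item in items:
        totals[item[key]] += item["cost"]
    return dict(sorted(totals.items(), key=lambda kv: -kv[1]))

print(spend_by_key(line_items, "service"))  # {'compute': 1530.0, 'storage': 120.0}
print(spend_by_key(line_items, "env"))      # {'prod': 1060.0, 'test': 590.0}
```

Seeing, for example, that the test environment accounts for a large share of compute spend is exactly the signal that justifies shutting it down outside working hours.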

2. Double down on automation

If your IT budget, and maybe your business as a whole, is under pressure in the current environment, then automating more business processes is a natural step. But it’s important to implement automation for the right reasons, looking beyond the obvious cost savings to consider how it contributes to broader enterprise strategy. Of course, automating procedural, repeatable tasks via robotic process automation (RPA) not only cuts cost but frees up talent for higher-value, more strategic activities, enabling the business to do more with fewer people and address talent supply issues. The results? Higher efficiency and better outcomes. While many organizations are already implementing RPA, few are doing it at scale, and most haven’t yet fully embraced the more advanced “intelligent” automation opportunities via artificial intelligence and machine learning that can unlock true end-to-end automation. Given this, the CIO should become the driver of enterprise automation. 

3. Be open with suppliers on budget constraints

Try talking to your suppliers about the cost squeeze you’re facing, and you might be pleasantly surprised at their response. If you treat them as true partners and give them the opportunity to make suggestions for ways to save costs, they’ll probably come back with creative ideas. This reflects our own experience: we’ve worked with clients through downturns in industries like steel and utilities, and we know they expect us to offer creative ways to do things more cost-effectively. Whether it involves outsourcing, insourcing or something else, your suppliers or partners will often have great ideas.

4. Review software licenses and subscriptions

Many organizations are over-licensed and oversubscribed on software, pushing costs higher than they need to be. There are several ways to tackle this problem. One is to optimize subscription fees by verifying that the user base actually uses a licensed product, or even its separately licensed or subscribed features. Another is to identify savings from replacing commercial software with open-source components. Further, most software license agreements include annual processes to reset maintenance costs when consumption patterns change. Then of course there’s rationalization of products that are functionally redundant or can be archived or retired. While CIOs can carry out this license management themselves, a more effective approach could be to use a partner with specific expertise, who can detect in real time where an application is being used and recommend approaches to reduce spend.
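As a rough illustration of the seat-verification step, the sketch below flags license seats with no recent activity as candidates for reclamation at renewal time. The user names, activity dates, and 90-day idle threshold are hypothetical assumptions, not any vendor's licensing API.

```python
from datetime import date, timedelta

# Hypothetical data: assigned seats and each user's last recorded activity.
seats = {"ana", "ben", "carla", "dev", "eli"}
last_active = {"ana": date(2024, 5, 1), "ben": date(2023, 9, 12),
               "carla": date(2024, 4, 20), "dev": date(2023, 6, 3)}

def unused_seats(seats, last_active, today, idle_days=90):
    """Seats with no activity within `idle_days` (or no activity at all)
    are candidates for reclamation at the next renewal."""
    cutoff = today - timedelta(days=idle_days)
    return sorted(u for u in seats
                  if last_active.get(u) is None or last_active[u] < cutoff)

print(unused_seats(seats, last_active, today=date(2024, 5, 15)))
# ['ben', 'dev', 'eli']
```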

With those four moves in mind, and in the drive to reduce costs amid ongoing uncertainty, CIOs may be tempted to cancel a project in its final stages to stop spend. But if that project involves retiring an asset or getting rid of a datacenter, companies should press on for multiple reasons. One is that stopping prolongs technical debt for a short-term benefit. Another is that once the project is finished, maintenance costs, such as those for on-premises servers, go away. So don’t stop short of the finish line and neglect to collect the savings.


The Electronic Health Record (EHR) is only becoming more critical in delivering patient care services and improving outcomes. As a leading EHR provider, Epic Systems (Epic) supports a growing number of hospital systems and integrated health networks striving for innovative delivery of mission-critical systems.

However, legacy methods of running Epic on-premises present a significant operational burden for healthcare providers. Implementing, maintaining, and scaling the solution can be slow, complicated, and costly. Furthermore, supporting Epic Honor Roll requirements, purchasing cycles, and disaster recovery places heavy demands on staff time, and recruiting, training, and retaining IT professionals can prove difficult.

The good news is that health systems now have options for managing their Epic solution, thanks to advancements in hybrid multicloud and integrated support services. In this article, discover how HPE GreenLake for EHR can help healthcare organizations simplify and overcome common challenges to achieve a more cost-effective, scalable, and sustainable solution.

The benefits of hybrid multicloud in healthcare

When it comes to cloud adoption, the healthcare industry has been slow to relinquish the traditional on-premises data center due to strict regulatory and security requirements and concerns around interoperability and data integration.

But as with many industries, the global pandemic served as a cloud accelerant. Increasingly, healthcare providers are embracing cloud services to leverage advancements in machine learning, artificial intelligence (AI), and data analytics, fueling emerging trends such as tele-healthcare, connected medical devices, and precision medicine. Flexible, hybrid multicloud service models enable healthcare providers to run mission-critical workloads anywhere, from on-premises to colos to all hyperscalers, moving data securely from edge to cloud.

In fact, in a recent survey, 90% of respondents agreed that hybrid multicloud provides an optimal solution for meeting the healthcare industry’s unique challenges. Hybrid multicloud delivers benefits such as:

- Enhanced clinical operations, including tighter EHR system integration and improved access to integrated technology, a variety of cloud options, and software management service options.
- Enterprise-level standardization, simplifying the cloud experience, unifying systems under a common framework, and lowering the total cost of ownership.
- IT modernization and the ability to rapidly adopt new and emerging platforms during the contract term.
- Improved compliance across the hybrid cloud ecosystem.
- Business resiliency, including greater access to consumption-based infrastructure, disaster recovery, and business continuity services.
- Greater agility to embrace innovation and disruption and respond quickly to business opportunities.
- Increased sustainability, including reduced greenhouse gas emissions and energy consumption.

HPE GreenLake for EHR delivers private and public cloud options

That brings us to HPE GreenLake for EHR, which couples Epic software management with infrastructure-as-a-service (IaaS) for a complete, end-to-end managed solution. HPE GreenLake for EHR integrates with both private and public clouds to ease healthcare providers’ operational burden and deliver a highly secure, scalable, pay-as-you-go service.

From a design standpoint, HPE GreenLake for EHR is HIPAA and Target Platform compliant, helping a health system achieve Honor Roll and supporting Epic Enterprise and Community Connect accreditation. It is cloud-enabled to access Azure and AWS and can support both colocation centers and on-premises data centers while leveraging existing licensing agreements.

Furthermore, HPE GreenLake for EHR streamlines the addition of services through a single portal and enables deep data insights. It builds a foundation for healthcare organizations to support rapidly emerging requirements, innovate faster, and launch health system initiatives more quickly. It also helps optimize spending and lower risk while increasing patient satisfaction.

Finally, HPE eases the strain on IT staff by providing a dedicated team that designs, installs, and supports all technology aspects of Epic, including storage, network, and compute. Service components include:

Advise and optimize

- Epic technology best practices
- Infrastructure and security optimization
- Compliance management
- Continuous improvement


- Patching and updates for the Epic infrastructure, associated software, application, and security
- Performance and capacity management


- Incident management and problem resolution
- Effect changes on listed resolutions


- Automated alerting
- Triage
- 24×7 surveillance

Contact GDT to learn more about HPE GreenLake for EHR

HPE and Epic have a long history of collaboration and excellence. In fact, 65% of Epic customers rely on HPE infrastructure. As a trusted solutions provider and HPE partner, GDT can support your organization’s implementation of HPE GreenLake for EHR from start to finish, accelerating solution time to value and freeing up your staff to focus on healthcare innovation and improved patient outcomes.

Contact the experts at GDT today to discover how your healthcare organization can benefit from HPE GreenLake for EHR.


By George Trujillo, Principal Data Strategist, DataStax

Innovation is driven by the ease and agility of working with data. Increasing ROI for the business requires a strategic understanding of — and the ability to clearly identify — where and how organizations win with data. It’s the only way to drive a strategy to execute at a high level, with speed and scale, and spread that success to other parts of the organization. Here, I’ll highlight the where and why of these important “data integration points” that are key determinants of success in an organization’s data and analytics strategy. 

A sea of complexity

For years, data ecosystems have gotten more complex due to discrete (and not necessarily strategic) data-platform decisions aimed at addressing new projects, use cases, or initiatives.  Layering technology on the overall data architecture introduces more complexity. Today, data architecture challenges and integration complexity impact the speed of innovation, data quality, data security, data governance, and just about anything important around generating value from data. For most organizations, if this complexity isn’t addressed, business outcomes will be diluted.

Increasing data volumes and velocity can reduce the speed that teams make additions or changes to the analytical data structures at data integration points — where data is correlated from multiple different sources into high-value business assets. For real-time decision-making use cases, these can be in a memory or database cache. For data warehouses, it can be a wide column analytical table.
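A data integration point for real-time decisioning can be sketched minimally as an event correlated with reference data held in an in-memory cache before a decision is made. The field names, cache contents, and decision rule below are illustrative assumptions, not any specific platform's API.

```python
# Hypothetical in-memory reference cache, keyed by customer ID.
customer_cache = {
    "c42": {"segment": "premium", "lifetime_value": 1800.0},
    "c7":  {"segment": "new",     "lifetime_value": 35.0},
}

def enrich(event, cache):
    """Correlate the incoming event with cached reference data;
    fall back to defaults when the customer is not yet cached."""
    defaults = {"segment": "unknown", "lifetime_value": 0.0}
    return {**event, **cache.get(event["customer_id"], defaults)}

def decide(enriched):
    # A toy decision rule standing in for a real-time model.
    return "offer_upgrade" if enriched["segment"] == "premium" else "no_action"

event = {"customer_id": "c42", "page": "/pricing"}
print(decide(enrich(event, customer_cache)))  # offer_upgrade
```

The harder, organizational problem described above is keeping the structure of `customer_cache` in sync across the teams that read and write it as the business changes.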

Many companies reach a point where the rate of complexity exceeds the ability of data engineers and architects to support the data change management speed required for the business. Business analysts and data scientists put less trust in the data as data, process, and model drift increases across the different technology teams at integration points. The technical debt keeps increasing and everything around working with data gets harder. The cloud doesn’t necessarily solve this complexity — it’s a data problem, not an on-premise versus cloud problem.

Reducing complexity is particularly important as building new customer experiences, gaining 360-degree views of customers, and decisioning for mobile apps, IoT, and augmented reality all accelerate the movement of real-time data to the center of data management and cloud strategy, impacting the bottom line. New research has found that 71% of organizations link revenue growth to real-time data (continuous data in motion, such as data from clickstreams, intelligent IoT devices, or social media).

Waves of change

There are waves of change rippling across data architectures to help harness and leverage data for real results. Over 80% of new data is unstructured, which has helped to bring NoSQL databases to the forefront of database strategy. The increasing popularity of the data mesh concept highlights the fact that lines of business need to be more empowered with data. Data fabrics are picking up momentum to improve analytics across different analytical platforms. All this change requires technology leadership to refocus vision and strategy. The place to start is by looking at real-time data, as this is becoming the central data pipeline for an enterprise data ecosystem.

There’s a new concept that brings unity and synergy to applications, streaming technologies, databases, and cloud capabilities in a cloud-native architecture; we call this the “real-time data cloud.” It’s the foundational architecture and data integration capability for high-value data products. Data and cloud strategy must align. High-value data products can have board-level KPIs and metrics associated with them. The speed of managing change of real-time data structures for analytics will determine industry leaders as these capabilities will define the customer experience. 

Making the right data platform decisions

An important first step in making the right technology decisions for a real-time data cloud is to understand the capabilities and characteristics required of data platforms to execute an organization’s business operating model and road map. Delivering business value should be the foundation of a real-time data cloud platform; the ability to demonstrate to business leaders exactly how a data ecosystem will drive business value is critical. It also must deliver any data, of any type, at scale, in a way that development teams can easily take advantage of to build new applications.   

The article What Stands Between IT and Business Success highlights the importance of moving away from a siloed perspective and focusing on optimizing how data flows through a data ecosystem. Let’s look at this from an analytics perspective.

Data should flow through an ecosystem as freely as possible, from data sources to ingestion platforms to databases and analytic platforms. Data or derivatives of the data can also flow back into the data ecosystem. Data consumers (analytics teams and developers, for example) then generate insights and business value from analytics, machine learning, and AI. A data ecosystem needs to streamline the data flows, reduce complexity, and make it easier for the business and development teams to work with the data in the ecosystem.


IDC Market Research highlights that companies can lose up to 30% in revenue annually due to inefficiencies resulting from incorrect or siloed data. Frustrated business analysts and data scientists deal with these inefficiencies every day. Taking months to onboard new business analysts, difficulty in understanding and trusting data, and delays in business requests for changes to data are hidden costs; they can be difficult to understand, measure, and (more importantly) correct. Research from Crux shows that businesses underestimate their data pipeline costs by as much as 70%.

Data-in-motion is ingested into message queues, publish-subscribe messaging (pub/sub), and event streaming platforms. Data integration points occur with data-in-motion in memory/data caches and dashboards that impact real-time decisioning and customer experiences. Data integration points also show up in databases. The quality of integration of data-in-motion and databases impacts the quality of data integration in analytic platforms. The complexity at data integration points impacts the quality and speed of innovation for analytics, machine learning, and artificial intelligence across all lines of business.


Standardize to optimize

To reduce the complexity at data integration points and improve the ability to make decisions in real time, the number of technologies that converge at these points must be reduced. This is accomplished by working with a multi-purpose data ingestion platform that can support message queuing, pub/sub, and event streaming. Working with a multi-model database that supports a wide range of use cases avoids the integration sprawl that comes from many single-purpose databases. Kubernetes is also becoming the standard for managing cloud-native applications. Working with cloud-native data ingestion platforms and databases enables Kubernetes to align applications, data pipelines, and databases.
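To make the "multi-purpose ingestion" idea concrete, here is a toy, in-memory sketch (not a real broker, and not modeled on any specific product) showing how one replayable log can serve queue semantics, pub/sub fan-out, and streaming replay at once.

```python
from collections import defaultdict

class MiniLog:
    """Toy single-topic, in-memory log illustrating three consumption
    patterns over one platform: queue, pub/sub fan-out, and replay."""

    def __init__(self):
        self.events = []
        self.group_offsets = defaultdict(int)  # one cursor per consumer group

    def publish(self, event):
        self.events.append(event)

    def queue_poll(self, group):
        """Queue semantics: each event is delivered to a group exactly once.
        Independent groups each see the full stream (pub/sub fan-out)."""
        offset = self.group_offsets[group]
        if offset >= len(self.events):
            return None
        self.group_offsets[group] += 1
        return self.events[offset]

    def stream_replay(self, offset=0):
        """Streaming semantics: re-read from any offset, at any time."""
        return self.events[offset:]

log = MiniLog()
for e in ["e1", "e2", "e3"]:
    log.publish(e)

print(log.queue_poll("billing"))  # e1: delivered once to this group
print(log.queue_poll("billing"))  # e2
print(log.queue_poll("fraud"))    # e1: a second group gets its own copy
print(log.stream_replay(1))       # ['e2', 'e3']: replay is independent
```

Standardizing on one platform with these semantics, rather than a separate queue, pub/sub bus, and streaming system, is exactly the reduction in converging technologies described above.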

As noted in the book Enterprise Architecture as Strategy: Creating a Strategy for Business Execution, “Standardize, to optimize, to create a compound effect across the business.” In other words, streamlining a data ecosystem reduces complexity and increases the speed of innovation with data.

Where organizations win with data

Complexity generated by disparate data technology platforms increases technical debt and makes data consumers more dependent on centralized teams and specialized experts. Innovation with data occurs at data integration points. Yet there has been too much focus on selecting data platforms based on technology specifications and the mechanics of data ingestion and databases, rather than standardizing on technologies that help drive business insights.

Data platforms and data architectures need to be designed from the outset with a heavy focus on building high-value analytic data assets and driving revenue, as well as on the ability of these data assets to evolve as business requirements change. Data technologies need to reduce complexity to accelerate business insights. Organizations should focus on data integration points because that’s where they win with data. A successful real-time data cloud platform needs to streamline and standardize data flows and their integrations throughout the data ecosystem.

Learn more about DataStax here.

About George Trujillo:

George is principal data strategist at DataStax. Previously, he built high-performance teams for data-value-driven initiatives at organizations including Charles Schwab, Overstock, and VMware. George works with CDOs and data executives on the continual evolution of real-time data strategies for their enterprise data ecosystems.


A bank teller, a marketer, and an operations product owner at TruStone Financial Credit Union each had a knack for technology, but they didn’t think it would lead to a job in the IT department. Yet all three are now on CIO Gary Jeter’s IT team, and not because he’s desperate for bodies. Formal and informal programs at the credit union help Jeter find hidden IT gems inside the 600-person organization.

In the past year alone, six new additions to the IT team have come from other TruStone departments. IT’s “walk in their shoes” job-shadowing initiative and the company’s formal leadership training program help employees find career growth inside the company, but Jeter credits the attraction to IT in particular to its well-regarded culture and its career-progression track, which is harder to find in other areas of the midsize company.

Above all, these transfers must be a good culture fit with IT, Jeter says. “I want people who are running to us, not people who are running away from a situation,” he adds.

Finding IT talent inside the organization benefits both the employee and the CIO. Recent layoffs and reined-in hiring at some organizations might make the IT department an appealing option for technology-inclined employees. At the same time, CIOs who are unable or reluctant to hire replacements at salaries significantly higher than those of the people who left might be able to transition talent into IT without paying big salary bumps, estimated at 5-6% above existing levels for new hires, according to Janco and Associates. Plus, these employees already know the business. And of course, organizations benefit by retaining employees.

Here are five ways that companies are finding hidden IT talent inside their own organization.

Hire leaders and train skills

Jeter follows the mantra, hire leaders and train skills, “with leaders being people who have that drive to learn,” he says. In conversations with these interested employees, he’s looking for evidence of a curious mind, so he’ll ask about hobbies, for example. The teller didn’t have a college degree but explained she was on the robotics team in high school and taught herself Python coding.

“When you’re constantly going after [tech interests] outside of work, you’re probably going to come in and do a great job,” Jeter says. Today, the former teller is an IT systems analyst supporting mortgage applications.

Jeter will also evaluate the candidate’s reasoning skills by asking questions like, “How many piano tuners are there in Minneapolis?” Jeter says. “The answer doesn’t matter, it’s the logic that they use,” such as considering how many people play the piano, how many pianos could be in the city, and how many pianos must one tuner service to make a living.
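The reasoning Jeter is probing for can be written out as a Fermi estimate. Every number below is an assumption chosen purely for illustration, not data from the article; the point is the chain of logic, not the answer.

```python
# Fermi estimate of piano tuners in Minneapolis.
# All inputs are illustrative assumptions.
population = 430_000            # rough Minneapolis population
households = population / 2.5   # assume 2.5 people per household
pianos = households * 0.05      # assume 1 in 20 households owns a piano
tunings_per_year = pianos * 1   # assume each piano is tuned once a year

# How many tunings can one tuner perform per year?
tunings_per_tuner = 2 * 5 * 50  # 2 per day, 5 days a week, 50 weeks a year

tuners = tunings_per_year / tunings_per_tuner
print(round(tuners))  # -> 17: a handful of tuners, not hundreds
```

As the article says, the answer doesn’t matter; what matters is decomposing an unknowable number into estimable factors.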

Internal skills marketplaces

Internal skills marketplaces are emerging as a way to retain tech workers while also meeting demands for agile digital environments. Millennial tech workers often report feeling “trapped in the org chart” with a predefined job description that limits their work, says Jonathan Pearce, workforce strategies lead at Deloitte Consulting. The feeling is, “it would be easier to keep growing my career if I look outside the organization rather than inside. There’s no opportunity to put my skill sets out on the table.” Meanwhile, project managers need to connect work that needs to be done with the right set of skills, some that might come from some subfunction of IT. Internal skills marketplaces meet both needs by matching workers’ skill sets, not their job titles, with the work that needs to be done.

Navy Federal Credit Union discovers hidden IT talent with its talent optimization program, which began in 2016. “We knew there was tech talent in the credit union that doesn’t work in IT,” says CIO Tony Gallardy. “The question was how do we find these people?” His team used a talent assessment tool and identified 10 candidates for its pilot program. Each went through nine months of training and then integrated into IT. Today HR runs the talent optimization program and has expanded into other areas, including mission data, which is a subset of IT, and digital labs. More than 30 people have come to IT through the program, Gallardy says.

Some enterprises use AI-driven skills management platforms as a talent assessment tool to match peoples’ skills to IT. Consumer goods company Unilever, for instance, used its AI-driven internal talent marketplace to redeploy more than 8,000 employees during the pandemic. 

An internal talent marketplace can also reduce internal hiring bias and increase networking that promotes diversity. Hiring managers can focus just on skill sets and years of experience rather than education by removing that visible field, for instance. Others use the platform to build mentorship relationships that are senior-to-junior, junior-to-senior, peer-to-peer, and expert-to-novice, which breaks down taboos in relationships, connects people globally, and facilitates meaningful work and retention.


IT bootcamps

Training programs like IT bootcamps have become increasingly important tools for creating new opportunities for employees — all while helping to fill key IT roles.

Insurance company Progressive saw an opportunity to fill important roles by investing in its own employees who already have a wealth of knowledge about the organization, while also knocking down some of the eligibility barriers for some tech jobs.

The Progressive IT Bootcamp pilot program launched in 2021 with eight participants from customer support, underwriting and claims departments, who graduated in November and now work as IT apps programmer associates on teams across the company.

The bootcamp team worked with HR to identify certain customer-facing roles and invited members to apply. The team emphasized that employees didn’t need a tech background or a degree in tech — all the experience and background would be provided to them through the bootcamp.

Once bootcamp candidates were identified and accepted, they were taken out of their previous roles and put into the 15-week intensive training program where they learned C#, .NET, and other skills necessary for their new role.

Employees are paid during their training and are aided by a training assistant, a full-time Progressive programmer who helps connect the dots between what they are learning and how it applies in their new roles. Program participants also report directly to an IT manager.

The company is now working on another version of the program, focusing on analyst roles, and plans to include other tech roles in the future.

Career-change programs

Capital One’s dedication to career development has helped motivate employees to stay despite waves of resignations at other organizations. One of its programs, the in-house Capital One Tech College, gives employees both inside and outside of IT the opportunity to develop their tech skills. It provides access to thousands of free training and certification courses in subjects such as agile, cloud, cybersecurity, data, machine learning and AI, and mobile and software engineering. The Tech College offers both live classes and pre-recorded courses to fit employees’ schedules and learning styles.

Through the Tech College, Capital One can develop the necessary skills in-house, while also giving employees the opportunity to grow and expand their careers and skillsets, according to Mike Eason, senior vice president and CIO of enterprise data and machine learning engineering at Capital One.

Eason himself says that he’s had about 15 different roles at Capital One over the past 20 years and notes that the formal process around career development helps employees find what they’re passionate about without having to leave the company. “We really want to invest in the whole person versus getting them pigeonholed in doing the same thing,” says Eason.

Leveraging internal sources

Nobody knows the hidden IT talents of non-IT employees better than their managers and co-workers.  At TruStone, business leaders and managers are open to recognizing employees with IT potential that could benefit both the employee’s career and the company. “We’re transparent that this would be a great person for [an IT] career progression, so maybe they should come into IT,” Jeter says.

Jeter often discovers talent through his team’s product management consults inside the organization. “With a lot of scaled agile framework, we have product owners that sit outside of IT but within the business in areas like consumer lending, member services, or mortgages. We have technologies to align with them and they orchestrate the backlog” and other supporting duties, Jeter says. “They see what IT does, and we see what they do — and some of them want to come into IT.”

IT scored a new team member recently after a product owner in operations worked with IT on a product management consult. He had been with the company for nine years and worked in training before business operations. Jeter brought him into IT and today he works with consumer lending applications. “He knows the business and now he’s learning the technology.”

Getting these transfers up to speed and fully operational takes time, Jeter says. “Some learn the technical aspects of the business at different rates than others.” Jeter’s VPs and managers must pivot from “being a doer to being a coach,” he says. “We also spend a lot of time on performance management sessions and making sure we have development plans.” But the effort is worth it, he says.

“Showing that you invest in employees attracts talent internally,” Jeter says. “You’re giving them those skill sets to launch their career.”


C-level executives are most interested in strategic assets and initiatives that will advance, transform, and grow their enterprises. They continually want to make “cost centers” more efficient and more cost-effective, while investing in what will accelerate, empower, and protect the business operations and its customer base.

Because data and digital technology have become so integral to any enterprise’s lifeblood, senior leadership teams must differentiate between the strategic aspects of IT and the tactical parts of IT cost centers. Storage has emerged in 2022 as a strategic asset that the C-suite, not just the CIO, can no longer overlook.

Enterprise storage can be used to improve your company’s cybersecurity, accelerate digital transformation, and reduce costs, while improving application and workload service levels. That’s going to get attention in the board room. Here’s how to equip yourself for that discussion with C-level executives. The following are three practical ways to make enterprise storage a strategic asset for your organization.

1. Make storage part of the corporate cybersecurity strategy

According to a Fortune 500 survey, 66% of Fortune 500 CEOs said their No. 1 concern in the next three years is cybersecurity. Similarly, in a KPMG CEO survey, CEOs also said cybersecurity is a top priority. The average number of days to identify and contain a data breach, according to security analysts, is 287 days. Given these facts, changing the paradigm from an overall corporate security perspective is needed.

Too many enterprises are not truly equipped and prepared to deal with these threats. Nonetheless, companies need to ensure that valuable corporate data is always available, which has created an urgent need to modernize data protection and cyber-resilience capabilities. The approach that CEOs, CIOs, CISOs, and their IT teams need to take is end-to-end, to stay ahead of cybersecurity threats.

You need to think of your enterprise storage as part of your holistic corporate security strategy. This means that every asset in a company’s storage estate needs to be cyber resilient: designed to thwart ransomware, malware, internal cyber threats, and other potential attacks. Cybersecurity must go hand in hand with storage cyber resilience.

It’s prudent to evaluate the relationship across cybersecurity, storage, and cyber resilience. Both primary and secondary storage need protection, with capabilities ranging from air gapping and real-time data encryption to immutable copies of your data and instantaneous recovery.

What should you do? Perform a comprehensive analysis of your corporate data, determine what data needs to be encrypted and infused with cyber resilience and what doesn’t, and figure out how that protection will keep your company in compliance. You also need to decide what modern data protection looks like for your organization, and what to do from a replication/snapshot perspective for disaster recovery and business continuity.

2. Use a hybrid cloud strategy to accelerate digital transformation

More than 75% of CIOs identified digital transformation as their top budget priority of the last year, according to Constellation Research. Companies are leveraging digital capabilities to better serve their customers, accelerate new products and services to market, and scale their operations. The growth and importance of data continue to proliferate exponentially.

The role of hybrid cloud infrastructure – with part of your data on-premises – as the key enabler of this megatrend is at the forefront. A core value of cloud services is their support for digital transformation, which hybrid cloud computing enables and powers by offering increased flexibility, rapid application development and deployment, and consumption-based economics. This is essential to competing and remaining relevant in today’s world of data-driven business.

Data is the lifeblood of all modern enterprises. How a company collects, manages, stores, accesses, and uses its data determines the level of success it will have. Enterprises can innovate with their data, be strangled by it, or even be held hostage to it. This is why you need the strategy and the infrastructure to drive the future of data for your business.

As businesses evolve digitally, a hybrid cloud strategy orchestrates all the different aspects of a mixed computing, storage, and services environment composed of on-premises infrastructure, private cloud services, and a public cloud such as AWS. Just in the last 18 months, advancements have been made in the “on-ramps” between private clouds and the public cloud. This hybrid cloud infrastructure becomes the cornerstone of an organization’s ability to be agile and accelerate business transformation.

3. Reduce IT costs

It can be challenging to identify areas in IT to reduce costs, while maintaining the level of service or capacity. But here’s a practical tip that can be a quick win for an enterprise: CIOs, CISOs and their IT teams can lower IT costs by consolidating storage arrays.

Because of advancements in software-defined storage technology, an enterprise can replace 50 arrays with two, while still getting all the capacity, performance, availability, and reliability it needs. This strategic consolidation saves on operational manpower, rack space, floor space, power, and cooling. In short, it dramatically reduces your CAPEX and OPEX.
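The back-of-the-envelope math behind a 50-to-2 consolidation is straightforward. Every cost figure below is an assumption invented for illustration; plug in your own numbers from facilities and operations.

```python
# Illustrative annual OPEX savings from consolidating 50 arrays into 2.
# All per-array costs are assumptions, not vendor or industry data.
arrays_before, arrays_after = 50, 2
power_cooling_per_array = 4_000   # assumed $/year
rack_floor_per_array = 2_500      # assumed $/year
admin_hours_per_array = 100       # assumed hours/year of operational work
hourly_rate = 75                  # assumed fully loaded $/hour

def annual_opex(num_arrays: int) -> int:
    per_array = (power_cooling_per_array + rack_floor_per_array
                 + admin_hours_per_array * hourly_rate)
    return num_arrays * per_array

savings = annual_opex(arrays_before) - annual_opex(arrays_after)
print(f"${savings:,} saved per year")  # -> $672,000 saved per year
```

Even with modest per-array assumptions, the savings scale linearly with the number of arrays retired, which is why consolidation is a quick win.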

You can consolidate storage while, simultaneously, improving access to data across a hybrid cloud and a container-native environment for greater resilience, lower application and workload latency, and higher availability. For today’s enterprise requirements, 100% availability is a must.  

A hybrid cloud approach with a strong private cloud configuration creates the opportunity to consolidate storage arrays for maximum efficiency. Furthermore, with a private cloud, you have better, more exact control over cost structure and service level agreements (SLAs). Essentially, this strategy enables you to match an SLA, such as application performance and availability, with a higher level of control. 

Switching to consumption-based pricing models for storage is another way to reduce costs. Organizations can choose to flex up or flex down based on fluctuating needs for storage, utilizing storage-as-a-service. The worldwide analyst firm Gartner predicts: “by 2023, 43% of newly deployed storage capacity will be consumed as OPEX, up from less than 15% in 2020.”

Alternatively, companies can choose capacity on demand and seek out elastic pricing. All of these options have made storage more cost-effective. There are options across OPEX and CAPEX. You can even get a mix of OPEX and CAPEX to realize those cost savings.

Key takeaways

- Think of your storage as part of your holistic enterprise security strategy.
- A hybrid cloud infrastructure should be the cornerstone of your organization’s ability to be agile and accelerate business transformation.
- Strategic consolidation of storage arrays reduces CAPEX and OPEX.

To learn more about enterprise storage solutions, visit Infinidat.


Cloud migration has become a tech buzzword across enterprises worldwide. However, to be an effective cloud user means not only getting introduced to the concept, but also thoroughly evaluating your existing IT infrastructure and processes, identifying their potential in moving to cloud, and effectively planning your migration strategy. Given the many advantages of migration, businesses are looking to tap into the long-term benefits of cloud computing, which include:

- Agility
- Cost savings
- Scalability
- Security
- Mobility

Conducting an objective and accurate assessment of existing services, applications, security, and network infrastructure has long been a challenge for organizations. Numerous discovery tools, including Cloudscape, Cloudamize, Device42, and TSO Logic, can help you understand your on-premises infrastructure.

Though these discovery tools do a good job of mapping the infra estate and basic information like CPU, RAM, disk storage, and OS, they have their limitations. Assessments are often far from accurate during and after migration, because organizations do not go deep enough in understanding the applications and the business. The most common challenges of cloud migration are:

- Lack of a clear strategy determined by business objectives
- Not having a clear understanding of environments, including infrastructure, applications, and data
- Failure of crucial services and security weak points
- Lack of skilled labor and scope for human error
- Exceeding a planned budget

Broadening the discovery

The good news, however, is that none of these challenges are insurmountable. To make the migration process as smooth as possible, we need to discover or analyze the source code, configurations, applications, and databases too – and not just focus on infra discovery.

This helps to better understand the internal dependencies of applications and the roadblocks in the migration process. Both static and dynamic analyzers should be used together with the infra discovery tool to ensure a fail-proof migration.

Static analyzers help identify the components of an application and its dependencies on third-party applications, which makes it possible to analyze the impact of re-platforming or refactoring the application. This is where AI and ML can be used in conjunction with these mechanisms to gain a better understanding.
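The simplest form of the dependency discovery a static analyzer performs can be sketched with Python’s standard-library `ast` module: parse source code and collect what it imports. This is a minimal illustration only; production analyzers also resolve versions, transitive dependencies, and framework-specific wiring, and the sample source below is invented.

```python
import ast

def imported_modules(source: str) -> set:
    """Statically collect top-level module names imported by the source."""
    tree = ast.parse(source)
    found = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            # e.g. "import requests" or "import os.path"
            found.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            # e.g. "from flask import Flask"
            found.add(node.module.split(".")[0])
    return found

sample = """
import requests
from flask import Flask
import os
"""
print(sorted(imported_modules(sample)))  # -> ['flask', 'os', 'requests']
```

Run over an application’s codebase, output like this becomes the raw dependency graph that migration tooling (or an ML layer on top of it) reasons about when assessing re-platforming impact.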

The ML and AI journey to cloud

With Artificial Intelligence and Machine Learning (AI/ML) in the cloud becoming mainstream, organizations are able to overcome these challenges. AI/ML automatically generates insights from data. From predictive maintenance in manufacturing plants and fraud detection in financial services to accelerating scientific discovery, businesses of all types can benefit from this technology.

This has also given rise to applications such as chatbots, virtual assistants, and search engines that rival human interaction capabilities. As today’s dynamic and complex business environments require a shift to data-driven decision-making, there is a growing demand for robust lineage, governance, and risk-mitigation tactics.

ID2C – Changing the game of data discovery

ID2C is TCS’ proprietary ML-driven tool, which combines the outputs of discovery tools and static analyzers with other available data and intelligently deduces the technology stack and dependencies to derive more value. This enables accurate identification of a variety of technologies from different vendors, even when they are seemingly disconnected. TCS’ AWS business unit conducts assessment projects worth $5M every year while influencing more than $100M in foundation, migration, and operations projects.

AI/ML-driven data discovery combined with anomaly detection is a critical aspect of big data and cloud cost optimization and has the potential to save enterprises significant amounts of money. So why did we create an artificial intelligence-based platform for enhanced data discovery? Benefits include:

- A 30%-plus improvement in knowledge of some customer landscapes
- Proven faster and more reliable cloud migrations, with around 20% fewer rollbacks
- Estimated savings of $5M due to fewer rollbacks and first-time-right migrations and assessments
- Improved assessment accuracy by at least 35%
- Improved technology-stack identification: web server by 13%
- Improved runtime identification by 33%, and identification of COTS products and their versions by 78%, for a leading American insurance company
- Improvements in database server performance by 43% for a leading snacking company

As cloud-native transformations are increasingly sought after, TCS’ ID2C tool, built on AWS cloud, helps enterprises in their cloud journey by providing a better understanding of the on-premises environment, thereby helping derive the correct strategies to transform their application portfolio now and in the future.

Author Bio




Guruprasad Kambaloor is a Chief Architect in the AWSBU division of TCS. Guru has 26+ years of experience in the IT industry spanning domains such as Healthcare, Life Sciences, E&R, and Banking, and technologies such as Cloud, IoT, Blockchain, and Quantum Computing. He currently heads Platform Engineering for AWSBU, which has built platforms including Cloud Counsel, Cloud Mason, Migration Factory, and Exponence, to name a few. His current interests are AI/ML and Quantum Computing, and their relevance and usage in the cloud.

To learn more, visit us here.
