Many companies today are rapidly adopting new technologies and tools to improve overall efficiency, enhance customer and client experiences, and support key business transformation initiatives. However, these efforts, while necessary, bring growing pains for the workforce.

As our global technologies transform, so must our teams. What we have discovered in implementing emerging technology at U.S. Bank over the years is that effectively deploying and making use of new tools requires a skilled and diverse workforce and a technology team with a strong engineering culture to support it.

Banking on technology and people

The largest technology investment for U.S. Bank came in 2022 when we announced Microsoft Azure as our primary cloud service provider. This move accelerated our ongoing technology transformation, part of which includes migrating more than two-thirds of our application footprint to the cloud by 2025. Harnessing the power of cloud is just one of many ways that technology is enabling our organization to bring products and services to our clients faster, while enhancing our operations’ scalability, resiliency, stability, and security.

The technology transformation at U.S. Bank is also focused on adopting a more holistic approach to both external and internal talent pipelines. Diversity is a key component of our team building because true innovation and problem-solving come from people with different perspectives. To attract new, diverse talent to our team, we supplement traditional recruitment methods with proactive techniques that build our company’s reputation as a leader in technology and give back to our community.

For example, we’re positioning some of our top subject matter experts at relevant conferences and councils to share lessons learned from our transformation journey, and we’re engaging with educational programs, like Girls Who Code, Summit Academy, and Minneapolis Community and Technical College, to both develop and recruit diverse talent.

Our top workforce priority, however, is retaining our current team and equipping them with the skills they’ll need today and in the future. Because technology changes so quickly, we have adopted a continuous learning mindset where our teams embed learning into their everyday responsibilities and see it as an investment in themselves. To do that, we created a strategy that focuses on four key areas: protecting an employee’s time, establishing a personal plan, providing effective learning tools, and offering ways to apply what is learned.

1. Time: Establishing a flexible learning environment

We created an environment and performance goals that encourage our technology teams to regularly dedicate time to continuous learning. Each member of my leadership team operates a different type of technology team with different priorities, work schedules, and deadlines, so they are empowered to decide how to create the time and space for their employees to achieve their learning goals. Some have opted to block all employees’ calendars during certain times of the month, and others leave it to their individual manager-employee relationships to determine what works best. We’ve found that, by empowering each team to make these decisions, our teammates are more likely to complete their learning goals.

2. Plan: Growing skillsets and knowledge

Just investing the time doesn’t necessarily mean our teams will develop the right skills. So, we created a program we call “Grow Your Knowledge,” where managers and employees have ongoing skills-related discussions to better understand employees’ current skills, skill interests, and potential skill gaps. This helps them collaboratively create a personalized development plan. We’re also able to use the information to help us measure impact and provide insights on new trainings we may need to meet a common skill gap.

3. Tools: Learning paths and programs

We assembled a cross-functional team of external consultants, HR representatives, learning and development experts, and technical professionals to develop the Tech Academy — a well-curated, one-stop shop for modern tech learning at U.S. Bank. This resource is designed to help our teams learn the specific technical, functional, leadership, and power skills needed to drive current initiatives. Employees can take advantage of persona-aligned learning paths, targeted skill development programs, and experiential learning. We even developed a Modern Technology Leadership Development Program for managers to help them better understand how to support their teams through this transformation.

4. Application: Putting experiential learning into practice

Providing experiential opportunities where employees can further build their skills by practicing them is an essential part of our strategy. Right now, we offer programs such as certification festivals, hackathons, code-a-thons, bootcamps, and other communities of practice for our teammates to hone their newly acquired skills in psychologically and technologically safe, yet productive settings.

Our certification festival, called CERT-FEST, is our most successful experiential learning program so far. We leverage our own teammates to train others in a cohort-learning environment for eight weeks. To date, our employees have obtained several thousand Microsoft Azure certifications. Hackathons and code-a-thons take that certification to the next level by allowing our technology teammates to partner with the business in a friendly, competitive environment. The winning teams at these events build solutions for new products or services that meet a real business or client need.

Learn today for the needs of tomorrow

Since we’ve started this continuous learning journey with our teams, we’re seeing higher employee engagement, an increase in our team’s reported skills and certifications, and a stronger technology-to-business connection across U.S. Bank. These efforts have also shifted our employee culture to acknowledge that working in technology means you will always be learning and growing.

Finding new, more effective ways to address the ever-shifting needs of our customers will always be a priority. But in a continuous learning environment the question we now always ask is, “What do I need to know today, to learn today, to do my job better tomorrow?” This has been the driving force behind our success in growing, retaining, and motivating our technology workforce.

Financial Services Industry, IT Training 

A CIO has to understand the focus of the overall business, of course, but there are usually many segments or different dimensions to consider. In Martin Bernier’s case, as CIO of the University of Ottawa, managing the hyper-dynamic environment of 50,000 students, faculties and research groups is a discipline that requires both a holistic and granular approach across many departments in order to bring everything together in relative harmony. It’s an ongoing learning process that he’s honed over many years and positions.

“My career hasn’t been a straight line,” he says. “I started in the public sector, switched to the private sector, started my own consulting business, went back to private and public sectors, and now I’m in education. One thing that helped is to be a rebel. Sometimes, that’s not something positive, but in the beginning, my self-confidence was quite high. Everywhere I was I pushed the limit and I was confident I could manage things. I think every leader needs to develop that rebel side as well. You need to do the right thing for the right reason and prepare to fight for your team. If I need to lose my job by doing the right thing, I’ll do it and be okay with what happens. When I was younger, it was just taking the risk, but now it’s more calculated.”

Leading by such an example, Bernier knows that when building teams, certain skills stand out beyond technology. Considering the ongoing talent shortage in tech, he understands that broader abilities and strengths are becoming greater assets.

“A leader has to define what the motivator is,” he says. “With some people, it’s to grow their careers and move on. I never had that interest. I wasn’t looking to become a CIO; I was just interested to transform and improve the organization. So people need to develop that. They need to focus on people and the relationship they want to build, and the organization they want to be part of. If I am looking at the last 20 years as CIO, I think the reality is quite different. At that time, the focus was more technology, people did not want to talk with IT. Now everybody has IT tools. Everybody has mobile. So before, I focused on expertise and experience. Now I’m more about bringing the right people and embracing diversity. The role of CIO is getting more complex. It used to focus on the internal technology, and now our focus is everywhere, but that’s why I love the job.”

CIO.com editor Lee Rennick recently spoke with Martin Bernier, CIO at the University of Ottawa, about continuous learning, building diverse and equitable teams, and allyship to support diversity in technology. Here are some edited excerpts of that conversation. Watch the full video below for more insights.

On complexity: I’ve been in the field of IT for almost 30 years—20 specifically as CIO. I love change. Speaking as a leader, we need to get more involved and embrace diversity more. Every IT organization serves very diverse communities, so I’m involved in D&I and I’m active in my community, being part of many boards. My role at the university is simple and complex at the same time. On one hand, I need to shape the technology direction and oversee all the IT initiatives. That’s what is expected of me. Everything is for the business, the organization. I’m in charge of a large, centralized team, which includes the strategy, governance, architectures, policy, and so on. But on the other hand, the university is really decentralized with 10 different faculties and 42 services, so it’s a complex ecosystem with a really diverse reality. We have close to 50,000 students so this is a small city, and every city has its challenges. There’s a lot of diversity of expertise and point of view.

On collaboration: You need to understand your organization. And everything I learned throughout my career—from CRA and Brookfield, to my own business as well—I am able to use all that knowledge now because of having pushed myself outside my comfort zone. I’ve been at the university almost five years and I’ve been able to leverage that in light of the ecosystem’s complexity. But collaboration is essential. We all need to work together. We still debate, but building relationships and trust in every sector is vital because when you build trust, everything is possible. If not, you can’t move forward. I also promote inclusiveness and transparency. Everything in IT is a service so for me, everything is open. If somebody at the university is asking questions about the budget or capacity or anything like that, it’s open book. I want to lead by example and that is what I am trying to do.

On the human element: Technology is always the easy part and I have the feeling so many IT groups or organizations are working just on the technology side. Yes, that is our job. We need to focus on technology but it’s really simple. For me, what I like to focus on is the human aspect. Every human is different, and each human can be different from one day to the next. Someone could say, “I agree with you.” And the next morning they’ll call and say, “Oh, by the way, I was talking with my brother and now I disagree.” That’s why I love the people inside an organization. If I don’t feel connected, I’m not going to join that organization. So you need to have the passion for your organization and your industry. How could you transform something you don’t have passion for?

On male allies: When I joined the university, I asked about their women in IT initiative but they didn’t have a specific initiative in IT. So my goal was to provide support and help, like a male ally to be available where needed, but I was not looking to be visible. But one thing I quickly learned was to lead by my own example. That was not my goal at that time, but was the start of my journey and learning something new. We realized a lot of women wanted to participate but we had the wrong name, so we came up with Women in Innovation, which is more inclusive. That was four years ago and since then, I’ve done event panels and started another initiative that was similar to Women in Innovation but more like a male ally event. We are trying to be more strategic about the kind of event we wanted to do. I like to support my people but more backstage. But for this, I learned I needed to be up front and visible to be a good male ally. So my advice is ask people what you can do for them. I’m trying to promote diversity and concrete action. We really have the power to change things.

CIO, Diversity and Inclusion, IT Leadership, Relationship Building, Women in IT

In a bid to help enterprises offer better customer service and experience, Amazon Web Services (AWS) on Tuesday, at its annual re:Invent conference, said that it was adding new machine learning capabilities to its cloud-based contact center service, Amazon Connect.

AWS launched Amazon Connect in 2017 in an effort to offer a low-cost, high-value alternative to traditional customer service software suites.

As part of the announcement, the company said that it was making the forecasting, capacity planning, scheduling, and Contact Lens features of Amazon Connect generally available while introducing two new features in preview.

Forecasting, capacity planning and scheduling now available

The forecasting, capacity planning and scheduling features, which were announced in March and have been in preview until now, are geared toward helping enterprises predict contact center demand, plan staffing, and schedule agents as required.

In order to forecast demand, Amazon Connect uses machine learning models to analyze and predict contact volume and average handle time based on historical data, the company said, adding that the forecasts include predictions for inbound calls, transfer calls, and callback contacts in both voice and chat channels.

These forecasts are then combined with planning scenarios and metrics such as occupancy, daily attrition, and full-time equivalent (FTE) hours per week to help with staffing, the company said, adding that the capacity planning feature helps predict the number of agents required to meet service level targets for a certain period of time.
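As a rough illustration of the kind of calculation a capacity planning tool automates, the sketch below estimates required headcount from a forecast contact volume, average handle time, target occupancy, weekly attrition, and FTE hours. The figures and function names are illustrative assumptions, not Amazon Connect APIs or its actual model.

```python
# Illustrative sketch only: a simplified staffing estimate of the kind a
# capacity planning feature automates. Figures and names are hypothetical,
# not Amazon Connect APIs or its actual model.

def required_agents(weekly_contacts, avg_handle_time_sec,
                    target_occupancy=0.85, weekly_attrition=0.02,
                    fte_hours_per_week=40):
    """Estimate full-time agents needed to absorb the forecast demand."""
    workload_hours = weekly_contacts * avg_handle_time_sec / 3600   # total handling work
    productive_hours = fte_hours_per_week * target_occupancy        # usable hours per agent
    base_agents = workload_hours / productive_hours
    # Pad headcount to cover expected attrition over the planning period.
    return base_agents / (1 - weekly_attrition)

# Example: 50,000 forecast contacts per week at a 6-minute average handle time.
print(round(required_agents(50_000, 360)))  # ~150 agents
```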

Amazon Connect uses the forecasts generated from historical data and combines them with metrics or inputs such as shift profiles and staffing groups to create schedules that match an enterprise’s requirements.
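To show how such inputs might fit together, here is a deliberately naive scheduling sketch that fills each forecast interval from a pool of staffing groups while honoring a fixed shift length. The data model and greedy logic are assumptions for illustration, not how Amazon Connect actually builds schedules.

```python
# Hypothetical sketch of schedule generation: fill each forecast interval's
# agent requirement from staffing groups, honoring a fixed shift profile.
# The greedy logic is an illustration, not Amazon Connect's scheduler.
from collections import defaultdict

def build_schedule(required_per_interval, staffing_groups, shift_length=8):
    schedule = defaultdict(list)   # interval index -> agent ids on duty
    on_shift = {}                  # agent id -> intervals remaining in shift
    pool = [agent for group in staffing_groups.values() for agent in group]
    for interval, needed in enumerate(required_per_interval):
        # Carry over agents still inside their shift profile.
        for agent in [a for a, left in on_shift.items() if left > 0]:
            on_shift[agent] -= 1
            schedule[interval].append(agent)
        # Start new shifts until the interval requirement is met
        # (each agent works at most one shift in this sketch).
        for agent in pool:
            if len(schedule[interval]) >= needed:
                break
            if agent not in on_shift:
                on_shift[agent] = shift_length - 1
                schedule[interval].append(agent)
    return dict(schedule)

demand = [3, 4, 4, 2]                                  # agents needed per interval
groups = {"voice": ["a1", "a2", "a3"], "chat": ["b1", "b2"]}
print(build_schedule(demand, groups))
```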

The schedules created can be reviewed or edited if needed, and once they are published, Amazon Connect notifies the agent and the supervisor that a new schedule has been made available.

Additionally, the scheduling feature now supports intraday agent request management, which helps track time off or overtime for agents.

A machine learning model at the back end that drives scheduling can make real-time adjustments in the context of the rules input by an enterprise, AWS said, adding that enterprises can take advantage of the new features by enabling them in the Amazon Connect Console.

After they have been activated via the Console, the capabilities can be accessed via the Amazon Connect Analytics and Optimization module within Connect.

The forecasting, capacity planning, and scheduling features are available initially across US East (North Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London) Regions.

Contact Lens to provide conversational analytics

The Contact Lens service, which was added to Amazon Connect to analyze conversations in real time using natural language processing (NLP) and speech-to-text analytics, has been made generally available.

The capability to do analysis has been extended to text messages from Amazon Connect Chat, AWS said.

“Contact Lens’ conversational analytics for chat helps you understand customer sentiment, redact sensitive customer information, and monitor agent compliance with company guidelines to improve agent performance and customer experience,” the company said in a statement.

Another feature within Contact Lens, dubbed contact search, will allow enterprises to search for chats based on specific keywords, customer sentiment score, contact categories, and other chat-specific analytics such as agent response time, the company said, adding that Lens will also offer a chat summarization feature.

This feature, according to the company, uses machine learning to classify and highlight key parts of the customer’s conversation, such as the issue, outcome, or action items.
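To make the contact-search idea concrete, here is a small sketch of filtering per-chat analytics records by keyword and sentiment score. The record fields and function are hypothetical simplifications, not the actual Contact Lens output schema or API.

```python
# Illustration only: the kind of filtering a contact-search feature performs
# over per-chat analytics. Record fields are hypothetical, not the actual
# Contact Lens schema or API.

contacts = [
    {"id": "c-001", "transcript": "my card was declined twice",
     "sentiment_score": -3.5, "agent_response_time_sec": 12},
    {"id": "c-002", "transcript": "thanks, the refund arrived",
     "sentiment_score": 4.0, "agent_response_time_sec": 45},
]

def search_contacts(records, keyword=None, max_sentiment=None):
    """Return contact ids matching a keyword and/or at or below a sentiment threshold."""
    hits = []
    for r in records:
        if keyword and keyword.lower() not in r["transcript"].lower():
            continue
        if max_sentiment is not None and r["sentiment_score"] > max_sentiment:
            continue
        hits.append(r["id"])
    return hits

print(search_contacts(contacts, keyword="declined", max_sentiment=0))  # ['c-001']
```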

New features allow for agent evaluation

AWS also said that it was adding two new capabilities—evaluating agents and recreating contact center workflow—to Amazon Connect, in preview. Using Contact Lens for Amazon Connect, enterprises will be able to create agent performance evaluation forms, the company said, adding that the service is now in preview and available across regions including US East (North Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London).

New evaluation criteria, such as agents’ adherence to scripts and compliance, can be added to the review forms, AWS said, adding that machine-learning based scoring can be activated.

The machine learning scoring will use the same underlying technology used by Contact Lens to analyze conversations.

Additionally, AWS said that it was giving enterprises the chance to create new workflows for agents who use the Amazon Connect Agent Workspace to do daily tasks.

“You can now also use Amazon Connect’s no-code, drag-and-drop interface to create custom workflows and step-by-step guides for your agents,” the company said in a statement.

Amazon Connect uses a pay-for-what-you-use model, and no upfront payments or long-term commitments are required to sign up for the service.

Cloud Computing, Enterprise Applications, Machine Learning

Education is changing. In part, this shift is driven by students, who increasingly demand virtual and hybrid learning experiences that better match the ways they like to consume content at home. Meanwhile, virtual education has become an essential element of resilience for educational institutions by ensuring that students don’t fall behind during closures.

In the schools and universities of tomorrow, hybrid and virtual learning will play a central role in enabling inclusive education that’s focused on the unique needs of individual students and better able to drive engagement at all levels. As a result, student outcomes will likely improve. Evidence from corporate training programmes suggests that this could be the case, demonstrating that virtual learning boosts retention rates to 25% to 60%, compared with 8% to 10% for traditional methods.

However, as schools and universities make the move to virtual and hybrid learning, many are encountering barriers that are slowing progress considerably.

The key challenge is one of complexity. The average number of edtech tools in schools is over 1,400, and IT teams will likely struggle to ensure the efficacy of such a large number of systems. There are also questions around the impact on students. With no easy way to monitor student engagement, there is no clear path to optimising virtual and hybrid experiences. Similarly, a lack of necessary features and capabilities in many of the tools, such as the ability to combine live, real-time, and video functionality, means that institutions can struggle to offer the range of learning experiences necessary to tailor virtual learning to the needs of different students.

Overcoming these barriers is crucial for educators, for the simple reason that doing so unlocks a range of benefits. For one, the curriculum is extended to any location, and schools can draw on a talent pool of educators located anywhere with a good broadband connection. Virtual and hybrid learning enables both global and remote learning and delivers accessibility and localisation for learners.

Of course, there are still some people for whom broadband access is a problem. But if this gap is closed, then the approach unlocks a 24/7 model for learning for all, where content is always available to students, and they can learn in a self-paced asynchronous manner. Additionally, virtual and hybrid learning can support a range of content formats to support self-serve learners, such as video on demand (VoD). This is a much more tailored approach based on providing personalised learning journeys for students. And of course, virtual experiences are available regardless of whether schools and universities are open or not, helping to build resilience.

Thanks to the cloud, the barriers currently holding institutions back can be overcome. Kaltura’s Video Experience Cloud for Education is a case in point. Kaltura is a cloud company focused on providing compelling video capabilities to organisations.

Kaltura Video Cloud for Education powers real-time, live and video on-demand for online development and virtual learning. Its products include virtual classroom, LMS video, video portal, lecture capture, video messaging, virtual event platform, and other video solutions — all designed to create engaging, personalised, and accessible experiences during class and beyond.

Kaltura’s content, technology, and data are fully interoperable and integrate seamlessly with all major learning management systems, enabling schools to deploy quickly and get started in transforming learning for their students and staff. The Kaltura Video Cloud for Education helps drive interaction, build community, boost creativity, and improve learning outcomes.

Built on the Amazon Web Services (AWS) Cloud, Kaltura provides an elastic, reliable, performant, and secure platform that can enable schools and universities to accelerate their move to virtual and hybrid learning. 

For more information on how to use video to drive student engagement online, click here to discover Kaltura’s Video Experience Cloud for Education.

Education and Training Software, Hybrid Cloud, Virtualization

Reposted from Stack Overflow’s blog

Stack Overflow is named as a Sample Vendor in the 2022 Gartner® Hype Cycle™ for Agile and DevOps for Communities of Practice. We believe this is a powerful step forward in enabling organizations of all sizes to build strong internal communities that foster collective learning.

But before we get into too much detail, here’s why this matters…

Great engineering cultures

Great engineering cultures enable autonomy without creating silos that prevent cross-team collaboration and learning. To put it simply, people can share what they know, find what they don’t know, and discover what others know.

These are also the characteristics of a community of practice (CoP): employee-led, self-directed, always-on communities that enable individuals to collaborate, share knowledge, and collectively learn and grow their skills. Or, in the more formal definition from Etienne and Beverly Wenger-Trayner: “Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.”

For a successful Agile and DevOps practice, organizations must think beyond tooling. 

Engineering organizations need a strong community of practice culture that supports the collection and distribution of knowledge, fosters greater cross-organizational collaboration, and breaks down the silos that can form in companies of all sizes.

The Hype Cycle

DevOps and Agile have both become ubiquitous in software engineering organizations of all sizes, but at one point, they were new and shiny, the subjects of wonder, promotion, and misinformation. The Hype Cycle takes innovative new technologies and inflates expectations to where they can’t possibly be met. Once enough organizations see what that technology can actually do, they get over their disappointment and start getting to a point where that technology becomes a productive part of organizations everywhere.

The success of DevOps and Agile has led to a whole slew of technologies entering the hype cycle: value stream management platforms, observability, container management, chaos engineering, and more. While many of these technologies are innovative and solve significant problems, they might not be right for every organization.

Tool adoption can sometimes be driven by individual teams within a larger organization. 

Vendors will be more than happy to sell a company solutions, but they won’t actually solve anything if they are siloed. For an organization to implement Agile and DevOps practices effectively, tools alone will not cut it.

Agile and DevOps require more than tools

When most people discuss DevOps, they talk about CI/CD pipelines, automation, observability, and other categories of tooling. But the best tools won’t improve processes on their own. People are what make DevOps work.

In The DevOps Handbook by Gene Kim, Jez Humble, Patrick Debois, John Willis, and Nicole Forsgren, PhD, the authors explain the principles underpinning DevOps: flow, feedback, and continual learning and experimentation. 

In summary: The first way (flow) is about the process. The end-to-end process must be continuously improved. The second way (feedback) is about communication. The people involved in the process need to be able to communicate, and that communication needs to be continuously improved. The third way is about experimenting and learning. To make big improvements, people need to be able to experiment, learn from failures, and keep the experiments that succeed.

Think of the culture created when business processes are ad hoc, the people involved don’t or can’t communicate, and/or failure is punished until nobody risks trying anything new.

To make DevOps a success, processes must be defined, visible, and always improving; communication must be encouraged, captured, and discoverable; and the ability to experiment and learn must drive innovation through an empirical and scientific process.

Tools can facilitate some elements of flow, feedback, and continual learning and experimentation, but it’s the people and the culture that underpin everything.

All the tools in the world won’t fix a culture that encourages knowledge hoarding, punishes people for not knowing, or isolates teams from working together and learning from each other.

The culture won’t be fixed by ad hoc or management-defined, timeboxed initiatives meant to build collaboration within set boundaries and around set agendas, or by assuming that knowledge sharing will happen just because something is put into a wiki or a document. Changing company culture requires a movement, not a mandate.

Creating a community of practice

Because communities of practice are employee-led and voluntary, anyone within an organization can start one.

The first step to creating a community of practice is finding your people. Who are the people who have a common passion or focus? Think beyond the people who have a specific title or a specific role. With the rise of T-shaped teams, people will likely have an interest or desire to learn in areas outside their core focus. Ask around for recommendations or suggestions for people who may be interested.

At this point, you all can decide if you want it to be an informal community or if you want to build it out using a structured approach.

A structured approach involves:

- creating a joint vision of the goal of the community
- defining how/where people openly collaborate (what platforms will people use, regular hangouts/meetings)
- knowing how individuals will benefit from the community – what will people get out of the time they put in
- identifying how the community will benefit the business

At the end of the blog post, we’ve provided a few resources for you to read more about launching a community of practice, what to expect around participation, how to maintain momentum once you get started, and getting executive sponsorship.

Here at Stack Overflow, a couple of analysts in different groups realized they were trying to solve similar problems and started informally working together. They wanted to build standard approaches and definitions that would be used across teams. They shared what they were learning, collaborated on possible approaches, and brainstormed ways to solve cross-team analytic challenges. When they heard about analysts in other groups or people who just had a passion for analytics, they invited them to join their weekly hangouts. Working this way, the group came up with innovative solutions across the board. Projects moved along faster, they received buy-in with less struggle, and the business value of what each individual delivered increased.

Stack Overflow itself is a community of practice (actually, it’s several!). Coders of all skill levels and types come to our sites to better their skills and share the knowledge they’ve gained in the field. The result is that coders as a whole are more efficient — if you have a question, chances are someone else has already asked it and a solution is there for you. We’ve taken that community framework and the knowledge sharing and collaboration needs that enterprises have and built Stack Overflow for Teams so that organizations can easily start their own internal communities of practice.

Communities of practice have business value

Communities of practice reduce the distance (real and virtual) between people and collective knowledge and learning.

What business value does it offer when employees spend time collaborating, contributing, participating, learning, and consuming all that the community of practice offers? Does it justify the time taken away from their day-to-day activities? The answer is resoundingly yes.

The Gartner® Hype Cycle™ for Agile and DevOps lists the business impact of communities of practice as:

- Shorten the learning curve for employees
- Provide higher levels of employee satisfaction, leading to higher motivation and innovation
- Respond more rapidly to customer needs and inquiries
- Reduce duplication of effort
- Spawn new ideas for products and services
- Help members develop capabilities that align with organizational needs

In short, communities of practice enable organizations to see improvements in:

- Onboarding, retaining, and upskilling talent
- Improving productivity
- Accelerating innovation

Read more

We’re proud that Stack Overflow is named a Sample Vendor in the 2022 Gartner Hype Cycle for Agile and DevOps for Communities of Practice. Click here to receive complimentary access to the Gartner research report.

Other resources:

- Building A Successful Community of Practice In Your Company – HowDo
- Community of Practice Essentials, April 2022 (requires Gartner subscription)
- Building a community of practice in 5 steps | Opensource.com
- Communities of Practice, A Summary For Leaders
- The DevOps Handbook Companion Guide

Gartner and Hype Cycle are registered trademarks of Gartner, Inc. and/or its affiliates in the U.S. and internationally and are used herein with permission. All rights reserved. Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner research organization and should not be construed as statements of fact. Gartner disclaims all warranties, express or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Collaboration Software

By Bryan Kirschner, Vice President, Strategy at DataStax

From delightful consumer experiences to attacking fuel costs and carbon emissions in the global supply chain, real-time data and machine learning (ML) work together to power apps that change industries.

New research co-authored by Marco Iansiti, the co-founder of the Digital Initiative at Harvard Business School, sheds further light on how a data platform with robust real-time capabilities contributes to delivering competitive, ML-driven experiences in large enterprises.

It’s yet another key piece of evidence showing that there is a tangible return on a data architecture that is cloud-based and modernized – or, as this new research puts it, “coherent.”

Data architecture coherence

In the new report, titled “Digital Transformation, Data Architecture, and Legacy Systems,” researchers defined a range of measures of what they summed up as “data architecture coherence.” Then, using rigorous empirical analysis of data collected from Fortune 1000 companies, they found that every “yes” answer to a question about data architecture coherence results in about 0.7–0.9 more machine learning use cases across the company. Moving from the bottom quartile to the top quartile of data architecture coherence leads to more intensive machine learning capabilities across the corporation, and about 14% more applications and use cases being developed and turned into products.
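Read literally, those coefficients translate into a simple back-of-the-envelope estimate; the number of “yes” answers used below is an assumed figure purely for illustration.

```python
# Back-of-the-envelope reading of the reported coefficients.
# The count of "yes" answers below is an assumption for illustration.
yes_answers = 5
print(0.7 * yes_answers, "to", 0.9 * yes_answers, "additional ML use cases")
# -> 3.5 to 4.5 additional ML use cases across the company
```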

They identified two architectural elements for processing and delivering data: the “data platform,” which covers the sourcing, ingestion, and storage of data sets, and the “machine learning (ML) system,” which trains and productizes predictive models using input data.

They conclude that what they describe as coherent data platforms “deliver real-time capabilities in a robust manner: they can incorporate dynamic updates to data flows and return instantaneous results to end-user queries.”

These kinds of capabilities enable companies like Uniphore to build a platform that applies AI to sales and customer interactions to analyze sentiment in real-time and boost sales and customer satisfaction.

Putting data in the hands of the people that need it

The study results don’t surprise us. In the latest State of the Data Race survey report, over three-quarters (78%) of the more than 500 tech leaders and practitioners surveyed told us real-time data is a “must have.” And nearly as many (74%) have ML in production.

Coherent data platforms also can “combine data from various sources, merge new data with existing data, and transmit them across the data platform and among users,” according to Iansiti and his co-author Ruiqing Cao of the Stockholm School of Economics.

This is critical, because at the end of the day, competitive use cases are built, deployed, and iterated by people: developers, data scientists, and business owners – potentially collaborating in new ways at established companies.

The authors of the study call this “co-invention,” and it’s a key requirement. In their view a coherent data architecture “helps traditional corporations translate technical investments into user-centric co-inventions.” As they put it, “Such co-inventions include machine learning applications and predictive analytics embedded across the organization in various business processes, which increase the value of work conducted by data users and decision-makers.”

We agree and can bring some additional perspective on the upside of that kind of approach. In The State of the Data Race 2022 report, two-thirds (66%) of respondents at organizations that made a strategic commitment to leveraging real-time data said developer productivity had improved. And, specifically among developers, 86% of respondents from those organizations said, “technology is more exciting than ever.” That represents a 24-point bump over those organizations where real-time data wasn’t a priority.

The focus on a modern data architecture has never been clearer

Nobody likes data sprawl, data silos, and manual or brittle processes – all aspects of a data architecture that hamper developer productivity and innovation. But the urgency and the upside of modernizing and optimizing the data architecture keeps coming into sharper focus.

For all the current macroeconomic uncertainty, this much is clear: the path to future growth depends on getting your data architecture fit to compete and primed to deliver real-time, ML-driven applications and experiences.

Learn more about DataStax here.

About Bryan Kirschner:

Bryan is Vice President, Strategy at DataStax. For more than 20 years he has helped large organizations build and execute strategy when they are seeking new ways forward and a future materially different from their past. He specializes in removing fear, uncertainty, and doubt from strategic decision-making through empirical data and market sensing.

Data Architecture, IT Leadership

New York-based insurance provider Travelers, with 30,000 employees and 2021 revenues of about $35 billion, is in the business of risk. Managing all of its facets, of course, requires many different approaches and tools to achieve beneficial outcomes, and Mano Mannoochahr, the company’s SVP and chief data & analytics officer, has a crow’s nest perspective of the immediate and long-term tasks needed to both strengthen the company culture and meet customer needs.

“What’s unique about the [chief data officer] role is it sits at the cross-section of data, technology, and analytics,” he says. “And we recognized as a company that we needed to start thinking about how we leverage advancements in technology and tremendous amounts of data across our ecosystem, and tie it with machine learning technology and other things advancing the field of analytics. We needed to think about those disciplines together and make progress to maximize the benefit to our customers and our business overall.”

Another focus is on finding and nurturing talent. It’s a pressing issue not unique to Travelers, but Mannoochahr sees that in order to deliver on the disciplines that advance analytics and foster a healthier business, he and his team need to cast a wider net.

“We have a tremendous amount of capability already created helping our employees make the best decisions on our front lines,” he says. “But we have to bring in the right talent. This is kind of a team sport for us, so it’s not just data scientists but software engineers, data engineers, and even behavioral scientists to understand how we empathize and best leverage the experience that our frontline employees have, as well as position these capabilities in the best way so we can gain their trust and they can start to trust the data and the tool to make informed decisions. [The pandemic] slowed us down a little, as far as availability of talent, but I think we’ve doubled down on creating more opportunities for our existing talent, in helping them elevate their skills.”

Mannoochahr recently spoke to Maryfran Johnson, CEO of Maryfran Johnson Media and host of the IDG Tech(talk) podcast, about how the CDO coordinates data, technology, and analytics to not only capitalize on advancements in machine learning and AI in real time, but better manage talent and help foster a forward-thinking and ambitious culture.

Here are some edited excerpts of that conversation. Watch the full video below for more insights.

On the role of the Chief Data Officer:

Due to the nature of our business, Travelers has always used data analytics to assess and price risk. What’s unique about the role is it sits at the cross-section of data, technology, and analytics. And we recognized as a company that we needed to start thinking about how we leverage advancements in technology and tremendous amounts of data across our ecosystem, and tie it with machine learning technology and other things advancing the field of analytics. We needed to think about those disciplines together and make progress to maximize the benefit to our customers and our business overall. It’s a unique role and it’s been a great journey. Collectively, the scope spans about 1,600 data analytics professionals in the company and we work closely with our technology partners—more than 3,000 of them—that cover areas of software engineering, infrastructure, cybersecurity, and architecture, for instance.

On business transformation:

We perform around our current business and want to be able to deliver results. But at the same time, we’re thinking about the transformation of the business because opportunities are endless as you start to marry data, technology, and analytics. So the transformation of the next wave that we’re driving is really coming from the nexus of the infinite amount of data being generated, advancements in cloud computing and technology, and, of course, our ability to continue to expand our analytics expertise. We’ve always used these things in some form or fashion to appropriately price grids, set aside a group of reserves for being able to pay out claims, and, of course, serve our customers, agents, and brokers. But what’s changed is a greater world of possibilities. On a yearly basis, we respond to about two million folks from our brokers and agents and process over a million claims per year. So if you put it all together, every one of those transactions or interactions can be reinvented through a lens of technology, AI or machine learning. So we need to inform our front lines and workers how to make the most of the information available to do their job better. It’s an opportunity to reimagine some of the work on the front line that we’re getting excited about.

On having a data-first culture:

This is not about just the practitioners of this discipline or these capabilities. This is about being able to lift the rest of the more than 29,000 people in the organization and make them better and more informed employees through being able to deliver some set of training to elevate their capabilities. So we’ve been on a mission to raise the water mark for the entire organization. One of the things we’ve done is produce data culture knowledge map training, which is designed to help our broader operation understand that the data we create daily could be with us for decades to come, have a life outside an employee’s own desk, or inform about the many different ways data has been used. We have put about 13,000 employees through this set of training, and it’s received great feedback from the broader organization. Plus, we’ve also started to focus on our business operation leaders and help them understand how they can better utilize analytics and data, overcome biases from a management perspective, and continue validating them so they make the best decisions to run the business.

On sourcing talent:

We have a tremendous amount of capability already created with over 1,000 models being deployed in different parts of the business, helping our employees make the best decisions on our front lines. But opportunities lie ahead, so we have to ensure we bring in the right talent. And I would say this is kind of a team sport for us, so it’s not just data scientists but software engineers, data engineers, and even behavioral scientists to understand how we empathize and best leverage the experience that our frontline employees have, as well as be able to position these tools and capabilities in the best way so we can gain their trust and they can start to trust the data and the tool to make informed decisions. One of my goals, and one of our broader team, is we want to spread the message and help the talent out there understand a lot of the great, challenging problems we’re solving for the business, and how rewarding that work has been for us. But the challenge has only increased from a digitization perspective as COVID-19 hit, which created a lot of demand. It slowed us down a little, as far as availability of talent, but I think we’ve doubled down on creating more opportunities for our existing talent, in helping them elevate their skills.

Chief Data Officer

The shift to e-learning has changed education for good. Students and educators now expect anytime, anywhere access to their learning environments and are increasingly demanding access to modern, cloud-based technologies that enable them to work flexibly, cut down their workloads, and reach their full academic potential.

This means that institutions need to take a holistic approach to education technology (EdTech), including platforms used for teaching and learning, to not only meet these demands but to address ever-present challenges such as student success, retention, accessibility, and educational integrity.

However, for many embarking on this digital transformation journey and looking to more fully embrace EdTech, it can be daunting. Not only are IT leaders often faced with issues related to cost, infrastructure and security, but some solutions can make it challenging for schools to deliver inclusive, consistent educational experiences to all of their students. 

For example, some solutions may require an upheaval of existing tools and infrastructure, placing a strain on already-busy IT teams. Technology leaders are also looking to ensure the security of their schools’ digital ecosystem and that educators and students receive sufficient training in order to use these tools in the classroom.

Other EdTech solutions offer a one-size-fits-all approach to education, making it difficult for some students to keep up with online learning and for educators to adapt to pupils’ different needs. Similarly, while some solutions enable teachers and students to work and learn remotely, they struggle to adapt to hybrid teaching models.

Anthology’s learning management system (LMS), Blackboard Learn, takes a different approach. Designed to make the lives of educators and learners easier, Blackboard Learn creates experiences that are informed and personalised to support learning, teaching, and leading more effectively.

With students and teachers alike demanding more flexibility, Blackboard Learn can be used to replace or to supplement traditional face-to-face classes, enabling institutions to realise the full benefits of a hybrid environment while ensuring nobody is left behind. For example, personalised learning experiences empower students to learn on the go and in ways that best meet their individual needs, while enabling educators to deliver inclusive, consistent experiences for learners of all abilities.

It also allows students to gain independence and become more autonomous. By providing real-time, data-driven insights, learners can keep track of their own progress, identify next steps, and get the support they need when they need it. These insights also enable educators to identify disengaged or struggling learners sooner to help promote more positive outcomes for students, while Blackboard’s customisable feedback ensures all students are on track for assessment success.

Anthology’s LMS can make life easier for IT leaders, too. The SaaS application code was built with security and privacy in mind and is LMS agnostic, ensuring seamless integration into the learning management system and existing workflows. What’s more, by using Amazon Web Services (AWS) Cloud, institutions benefit from continuous deliverability of smaller updates – which require zero downtime.

This also means that Anthology has the agility to develop capabilities and features quickly, such as its built-in accessibility and plagiarism tools. Because these features are out-of-the-box, institutions can save money while benefitting from a streamlined, scalable EdTech stack that can continue to evolve as they do.

With Blackboard Learn by Anthology, educators can rest assured they have the foundation of an EdTech ecosystem that equips all students and teachers with the flexibility to create more personalised learning experiences that support student success, while improving efficiency and setting their institution up for what’s to come in higher education.

For more insights into understanding student expectations, click here to read Anthology’s whitepaper.

Artificial Intelligence, Education and Training Software

Machine learning (ML) is a commonly used term across nearly every sector of IT today. And while ML has frequently been used to make sense of big data—to improve business performance and processes and help make predictions—it has also proven priceless in other applications, including cybersecurity. This article will share reasons why ML has risen to such importance in cybersecurity, share some of the challenges of this particular application of the technology and describe the future that machine learning enables.

Why Machine Learning Has Become Vital for Cybersecurity

The need for machine learning has to do with complexity. Many organizations today possess a growing number of Internet of Things (IoT) devices that aren’t all known or managed by IT. All data and applications aren’t running on-premises, as hybrid and multicloud are the new normal. Users are no longer mostly in the office, as remote work is widely accepted.

Not all that long ago, it was common for enterprises to rely on signature-based detection for malware, static firewall rules for network traffic and access control lists (ACLs) to define security policies. In a world with more devices, in more places than ever, the old ways of detecting potential security risks fail to keep up with the scale, scope and complexity.

Machine learning is all about training models to learn automatically from large amounts of data, and from the learning, a system can then identify trends, spot anomalies, make recommendations and ultimately execute actions. In order to address all the new security challenges that organizations face, there is a clear need for machine learning. Only machine learning can address the increasing number of challenges in cybersecurity: scaling up security solutions, detecting unknown attacks and detecting advanced attacks, including polymorphic malware. Advanced malware can change forms to evade detection, and using a traditional signature-based approach makes it very difficult to detect such advanced attacks. ML turns out to be the best solution to combat it.

What Makes Machine Learning Different in Cybersecurity

Machine learning is well understood and widely deployed across many areas. Among the most popular are image processing for recognition and natural language processing (NLP) to help understand what a human or a piece of text is saying.

Cybersecurity is different from other use cases for machine learning in some respects.

Leveraging machine learning in cybersecurity carries its own challenges and requirements. We will discuss three unique challenges for applying ML to cybersecurity and three common but more severe challenges in cybersecurity.

Three Unique Challenges for Applying ML to Cybersecurity

Challenge 1: The much higher accuracy requirements. For example, if you’re just doing image processing, and the system mistakes a dog for a cat, that might be annoying but likely doesn’t have a life or death impact. If a machine learning system mistakes a fraudulent data packet for a legitimate one that leads to an attack against a hospital and its devices, the impact of the mis-categorization can be severe.

Every day, organizations see large volumes of data packets traverse firewalls. Even if only 0.1% of the data is mis-categorized by machine learning, we can wrongly block huge amounts of normal traffic that would severely impact the business. It’s understandable that in the early days of machine learning, some organizations were concerned that the models wouldn’t be as accurate as human security researchers. It takes time, and it also takes huge amounts of data to actually train a machine learning model to get up to the same level of accuracy as a really skilled human. Humans, however, don’t scale and are among the scarcest resources in IT today. We are relying on ML to efficiently scale up the cybersecurity solutions. Also, ML can help us detect unknown attacks that are hard for humans to detect, as ML can build up baseline behaviors and detect any abnormalities that deviate from them.
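To put that 0.1% figure in perspective, a quick illustrative calculation follows; the daily packet volume is an assumed number, not a measured one.

```python
# Illustrative arithmetic only: the traffic volume is an assumed figure.
daily_packets = 1_000_000_000      # assume a firewall inspects 1B packets a day
error_rate = 0.001                 # 0.1% mis-categorization
wrongly_blocked = daily_packets * error_rate
print(f"{wrongly_blocked:,.0f} legitimate packets wrongly blocked per day")  # 1,000,000
```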

Challenge 2: The access to large amounts of training data, especially labeled data. Machine learning requires a large amount of data to make models and predictions more accurate. Gaining malware samples is a lot harder than acquiring data in image processing and NLP. There is not enough attack data, and lots of security risk data is sensitive and not available because of privacy concerns.

Challenge 3: The ground truth. Unlike images, the ground truth in cybersecurity might not always be available or fixed. The cybersecurity landscape is dynamic and changing all the time. Not a single malware database can claim to cover all the malware in the world, and more malware is being generated at any moment. What is the ground truth that we should compare to in order to decide our accuracy?

Three ML Challenges Made More Severe in Cybersecurity

There are other challenges that are common for ML in all sectors but more severe for ML in cybersecurity.

Challenge 1: Explainability of machine learning models. Having a comprehensive understanding of the machine learning results is critical to our ability to take proper action.

Challenge 2: Talent scarcity. We have to combine domain knowledge with ML expertise in order for ML to be effective in any area. Either ML or security alone is short of talent; it is even harder to find experts who know both ML and security. That’s where we found it is critical to make sure ML data scientists work together with security researchers, even though they don’t speak the same language, use different methodologies, and have different ways of thinking and different approaches. It is very important for them to learn to work with each other. Collaboration between these two groups is the key to successfully applying ML to cybersecurity.

Challenge 3: ML security. Because of the critical role cybersecurity plays in each business, it is even more important to make sure the ML we use in cybersecurity is itself secure. There has been academic research in this area, and we are glad to see and contribute to the industry movement in securing ML models and data. Palo Alto Networks is driving innovation and doing everything to make sure our ML is secure.

The goal of machine learning is to make security more efficient and scalable in an effort to help save labor and prevent unknown attacks. It’s hard to use manual labor to scale up to billions of devices, but machine learning can easily do that. And that is the kind of scale organizations truly need to protect themselves in the escalating threat landscape. ML is also critical for detecting unknown attacks in many critical infrastructures. We can’t afford even one attack, which can mean life or death.

How Machine Learning Enables the Future of Cybersecurity

Machine learning supports modern cybersecurity solutions in a number of different ways. Individually, each one is valuable, and together they are game-changing for maintaining a strong security posture in a dynamic threat landscape.

Identification and profiling: With new devices getting connected to enterprise networks all the time, it’s not easy for an IT organization to be aware of them all. Machine learning can be used to identify and profile devices on a network. That profile can determine the different features and behaviors of a given device.

Automated anomaly detection: Using machine learning to rapidly identify known bad behaviors is a great use case for security. After first profiling devices and understanding regular activities, machine learning knows what’s normal and what’s not.
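A minimal sketch of that baseline-then-deviation idea is shown below, using a simple z-score over one device’s historical traffic; the metric, threshold, and numbers are illustrative assumptions, not how any particular product implements anomaly detection.

```python
# Minimal sketch of baseline-and-deviation anomaly detection for one device.
# The metric, threshold, and numbers are illustrative assumptions only.
from statistics import mean, stdev

def is_anomalous(history_mb_per_hour, current_mb, z_threshold=3.0):
    """Flag traffic that deviates strongly from this device's learned baseline."""
    mu = mean(history_mb_per_hour)
    sigma = stdev(history_mb_per_hour) or 1e-9  # avoid division by zero
    z = (current_mb - mu) / sigma
    return z > z_threshold

baseline = [12, 15, 11, 14, 13, 12, 16, 14]   # typical hourly traffic for a device (MB)
print(is_anomalous(baseline, 13))   # False: within the normal range
print(is_anomalous(baseline, 400))  # True: likely exfiltration or misbehavior
```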

Zero-day detection: With traditional security, a bad action has to be seen at least once for it to be identified as a bad action. That’s the way that legacy signature-based malware detection works. Machine learning can intelligently identify previously unknown forms of malware and attacks to help protect organizations from potential zero-day attacks.

Insights at scale: With data and applications in many different locations, being able to identify trends across large volumes of devices is just not humanly possible. Machine learning can do what humans cannot, enabling automation for insights at scale.

Policy recommendations: The process of building security policies is often a very manual effort that has no shortage of challenges. With an understanding of what devices are present and what is normal behavior, machine learning can help to provide policy recommendations for security devices, including firewalls. Instead of having to manually navigate around different conflicting access control lists for different devices and network segments, machine learning can make specific recommendations that work in an automated approach.
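As a rough illustration of turning an observed device profile into a policy recommendation, the sketch below proposes allow rules only for the destinations and ports a device has historically used; the profile structure and rule format are hypothetical simplifications, not any vendor’s recommendation engine.

```python
# Hypothetical sketch: recommend allow-list rules from a device's observed,
# normal behavior. The profile and rule structures are simplifications.

def recommend_rules(device_profile):
    """Propose firewall rules permitting only the flows seen in the device's baseline."""
    rules = []
    for dest, port in sorted(device_profile["normal_flows"]):
        rules.append({
            "action": "allow",
            "source": device_profile["device_type"],
            "destination": dest,
            "port": port,
        })
    # Anything not explicitly recommended remains subject to the default deny.
    return rules

infusion_pump = {
    "device_type": "infusion-pump",
    "normal_flows": {("emr.hospital.local", 443), ("ntp.hospital.local", 123)},
}
for rule in recommend_rules(infusion_pump):
    print(rule)
```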

With more devices and threats coming online every day, and human security resources in scarce supply, only machine learning can sort through complicated situations and scenarios at scale, enabling organizations to meet the challenge of cybersecurity now and in the years to come.

Learn more about machine learning in cybersecurity here.

About Dr. May Wang:

Dr. May Wang is the CTO of IoT Security at Palo Alto Networks and the co-founder, Chief Technology Officer (CTO), and board member of Zingbox, which was acquired by Palo Alto Networks in 2019 for its Internet of Things (IoT) security solutions.

Internet of Things, IT Leadership

Technology is hardly the only industry experiencing hiring challenges at the moment, but resignations in tech still rank among the highest across all industries, with a 4.5% increase in resignations in 2021 compared with 2020, according to Harvard Business Review.

For the most part, these employees aren't leaving the industry altogether; they're moving to companies that can offer them what they want. Flexible schedules and work-life balance? Absolutely. Higher salaries? Of course. But one of the primary reasons people in tech, particularly developers, switch or consider switching roles is that they want more opportunities to learn. Developers don't want to quit: they want to face new challenges, acquire new skills, and find new ways to solve problems.

Ensuring access to learning and growth opportunities is part of the mandate for tech leaders looking to attract and retain the best people. A culture of continuous learning that encourages developers to upskill and reskill will also give your employees every opportunity to deliver more value to your organization.

Read on to learn how and why expanding access to learning helps you build higher-performing teams and a more inherently resilient organization.

Developers want more learning opportunities — and leadership should listen

Giving developers opportunities to learn has a major, positive impact on hiring, retention, and team performance. According to a Stack Overflow pulse survey, more than 50% of developers would consider leaving a job because it didn't offer enough chances for learning and growth, while a similar percentage would stick with a role because it did offer these opportunities. And 50% of developers report that access to learning opportunities contributes to their happiness at work.

Yet most developers feel they don't get enough time at work to devote to learning. In a Twitter poll, Stack Overflow found that, when asked how much time they get at work to learn, nearly half of developers (46%) said "hardly any or none." Given that more than 50% of developers would consider leaving a job that didn't offer enough learning time, it's clear that one way to help solve hiring and retention challenges is to give employees more chances to pick up new skills and evolve existing ones.

How can tech leaders and managers solve for this? One key is to create an environment where employees feel psychologically safe investing time in learning and asking for more time when they need it. High-pressure environments tend to emphasize wasted time (“How much time did you waste doing that?”) instead of invested time (“I invested 10 hours this week in learning this”). In this context, plenty of employees are afraid to ask about devoting work time to learning.

Company leadership and team managers can make this easier by consistently communicating the value of learning and modeling a top-down commitment to continuous learning. Executives and senior leaders can share their knowledge with employees through fireside chats and AMAs to underscore the importance of this culture shift. Managers should take the same approach with their teams. You can’t expect your more junior employees to invest time in learning if you haven’t made it clear, at every level of your organization, that learning matters.

Expanding learning opportunities improves team performance and organizational resiliency

Elevating the importance of learning helps sustain performance and competency in your engineering teams. But it does more than improve retention or team-level performance: it also builds organizational resiliency.

Some of your employees are always going to leave: to seek new adventures, to combat burnout or boredom, to make more money. Leadership no longer has the luxury of hiring for a specific skill and then considering that area covered forever. Technology and technology companies are changing too fast for that. Retaining talent is certainly important, but ultimately leaders should be focused on creating organizations that are resilient rather than fragile. The loss of one or two key individuals shouldn’t impede the progress of multiple teams or disrupt the organization as a whole.

There’s nothing you can do to completely eliminate turnover, but you can take steps to make your organization more resilient when turnover inevitably occurs:

Ensure that your teams don't break when people leave. Incorporating more opportunities to learn into your developers' working lives helps offset the knowledge and productivity losses that can happen when employees move on, taking their expertise with them. How many times have you heard a variation of this exchange: "How does this system/tool work?" "I don't know; go ask [expert]." But what happens when that expert leaves? Resilient teams and organizations don't stumble over the loss of a few key people.

Give employees access to the learning opportunities they want. As we've said, developers prize roles that allow them to learn on the job. Access to learning opportunities is a major factor they weigh when deciding whether to leave a current job or accept a new one. Expanding learning opportunities for developers makes individual employees happier and more valuable to the organization while increasing organizational resiliency.

Avoid asking your high-performers to do all the teaching. Implicitly or explicitly asking your strongest team members to serve as sources of truth and wisdom for your entire team is a bad idea. It sets your experts up for unhappiness and burnout, factors likely to push them out the door. Create a system where both new and seasoned employees can self-serve information so they can unstick themselves when they get stuck.

Four steps to prioritize learning and attract/retain high-performance teams

When it comes to learning, there are four major steps you can take to attract and retain the best talent and increase organizational resiliency.

1. Surface subject matter experts

Your team has questions? Chances are, someone at your company has answers. There are experts (and potential experts) throughout your organization whose knowledge can eliminate roadblocks and improve processes. Your challenge is to uncover these experts — and plant the seeds for future experts by giving your employees time to learn new skills and investigate new solutions.

Lower the barrier to entry by making it fast, simple, and intuitive for people to contribute to your knowledge platform. Keep in mind that creating asynchronous paths for your employees to find and connect with experts enables knowledge sharing without creating additional distractions or an undue burden for those experts.

How Stack Overflow for Teams surfaces subject matter experts:

Spotlights subject matter experts (SMEs) across teams and departments to connect people with questions to people with answers
Enables upskilling and reskilling by allowing teams and individuals to learn from one another
Asynchronous communication allows employees to ask and answer questions without disrupting their established workflows
Q&A format lowers barriers to contribution and incentivizes users to explore and contribute to knowledge resources

2. Capture and preserve knowledge

Establishing practices to capture and preserve information is essential for making learning scale. The goal is to convert individual learnings and experiences into institutional knowledge that informs best practices so that everyone, and the organization as a whole, can benefit. That knowledge should be easily discoverable and its original context preserved for future knowledge-seekers. To capture and preserve knowledge effectively, you also need to make it easy for users to engage with your knowledge platform.

How Stack Overflow for Teams captures and preserves knowledge:

Collects knowledge continuously to preserve information and context without disrupting developers' workflows
Makes knowledge searchable, so employees can self-serve answers to their questions and find solutions others have already worked out
Compared with technical documentation, Q&A format requires a shorter time investment for both people with questions and people with answers

3. Make information centralized and accessible

The good news is that nobody at your company has to know everything. They just need to know where to find it. After all, knowledge is only valuable if people can locate it when they need it. That’s why knowledge resources should be easy to find, retrieve, and share across teams.

This is particularly critical as your organization scales: new hires can teach themselves the ropes without requiring extensive, synchronous communication with more seasoned employees who already have plenty of responsibilities and find themselves answering the same questions over and over again.

How Stack Overflow for Teams makes information centralized and accessible:

Makes information easy to locate, access, and share
Speeds up onboarding and shortens time-to-value for new hires
Allows users to make meaningful contributions to knowledge resources without investing huge amounts of time or interrupting their flow state

4. Keep knowledge healthy and resilient

Knowledge isn't immune to its own kind of tech debt. The major problem with static documentation is that the instant you hit Save, your content has started its steady slide toward being out of date. Like code, information must be continually maintained, regardless of its scale, in order to deliver its full value.

Keeping content healthy (that is, fresh, accurate, and up-to-date) is essential. When your knowledge base is outdated or incomplete, employees start to lose trust in it. Once trust starts eroding, people stop contributing to your knowledge platform, and it grows even more outdated. Since SMEs are often largely responsible for ensuring that content is complete, properly edited, and consistently updated, keeping content healthy can become yet another heavy burden on these individuals. That's why a crowdsourced platform that encourages the community to curate, update, and improve content is so valuable.

How Stack Overflow for Teams keeps knowledge healthy and resilient:

Our Content Health feature intelligently surfaces knowledge that might be outdated, inaccurate, or untrustworthy, encouraging more engagement and ensuring higher-quality knowledge resources
Content is curated, updated, and maintained by the community, reducing the burden on SMEs
The platform automatically spotlights the most valuable, relevant information as employees vote on the best answers, thereby increasing user confidence in your knowledge

Resiliency requires learning

You can’t build a resilient organization without putting learning at the center of how your teams operate. Not only is offering access to learning and growth opportunities a requirement for attracting and retaining top talent, but fostering a culture of continuous learning protects against knowledge loss, keeps individuals and teams working productively, and encourages employees to develop skills that will make them even more valuable to your organization.

To learn more about Stack Overflow for Teams, visit us here.

IT Leadership