In a bid to help enterprises offer better customer service and experience, Amazon Web Services (AWS) on Tuesday, at its annual re:Invent conference, said that it was adding new machine learning capabilities to its cloud-based contact center service, Amazon Connect.

AWS launched Amazon Connect in 2017 in an effort to offer a low-cost, high-value alternative to traditional customer service software suites.

As part of the announcement, the company said that it was making the forecasting, capacity planning, scheduling, and Contact Lens features of Amazon Connect generally available, while introducing two new features in preview.

Forecasting, capacity planning and scheduling now available

The forecasting, capacity planning and scheduling features, which were announced in March and have been in preview until now, are geared toward helping enterprises predict contact center demand, plan staffing, and schedule agents as required.

In order to forecast demand, Amazon Connect uses machine learning models to analyze and predict contact volume and average handle time based on historical data, the company said, adding that the forecasts include predictions for inbound calls, transfer calls, and callback contacts in both voice and chat channels.
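AWS hasn't published the internals of these forecasting models, but the core idea of predicting contact volume from historical, seasonal data can be sketched with a simple seasonal-average baseline. The function and sample numbers below are purely illustrative, not Amazon's implementation:

```python
from statistics import mean

def seasonal_naive_forecast(history, season=7, horizon=7):
    """Forecast each future day as the average of past observations
    that fall on the same phase of the weekly cycle."""
    preds = []
    for h in range(horizon):
        phase = (len(history) + h) % season
        same_phase = [v for i, v in enumerate(history) if i % season == phase]
        preds.append(mean(same_phase))
    return preds

# Four weeks of daily call volumes with a strong weekly pattern.
history = [120, 130, 125, 140, 150, 60, 50] * 4
print(seasonal_naive_forecast(history, season=7, horizon=3))  # → [120, 130, 125]
```

A production system would layer trend, holiday, and channel effects on top of this kind of seasonal baseline, but the input (historical per-day volumes) and output (per-day predictions) have the same shape.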

These forecasts are then combined with planning scenarios and metrics such as occupancy, daily attrition, and full-time equivalent (FTE) hours per week to help with staffing, the company said, adding that the capacity planning feature helps predict the number of agents required to meet service level targets for a certain period of time.

Amazon Connect uses the forecasts generated from historical data and combines them with metrics or inputs such as shift profiles and staffing groups to create schedules that match an enterprise’s requirements.

The schedules created can be edited or reviewed if needed and once the schedules are published, Amazon Connect notifies the agent and the supervisor that a new schedule has been made available.

Additionally, the scheduling feature now supports intraday agent request management which helps track time off or overtime for agents.

A machine learning model at the back end that drives scheduling can make real-time adjustments in the context of the rules input by an enterprise, AWS said, adding that enterprises can take advantage of the new features by enabling them in the Amazon Connect Console.

After they have been activated via the Console, the capabilities can be accessed via the Amazon Connect Analytics and Optimization module within Connect.

The forecasting, capacity planning, and scheduling features are available initially across the US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London) Regions.

Contact Lens to provide conversational analytics

The Contact Lens service, which was added to Amazon Connect to analyze conversations in real time using natural language processing (NLP) and speech-to-text analytics, has been made generally available.

The capability to do analysis has been extended to text messages from Amazon Connect Chat, AWS said.

“Contact Lens’ conversational analytics for chat helps you understand customer sentiment, redact sensitive customer information, and monitor agent compliance with company guidelines to improve agent performance and customer experience,” the company said in a statement.

Another feature within Contact Lens, dubbed contact search, will allow enterprises to search for chats based on specific keywords, customer sentiment score, contact categories, and other chat-specific analytics such as agent response time, the company said, adding that Lens will also offer a chat summarization feature.

This feature, according to the company, uses machine learning to classify and highlight key parts of the customer’s conversation, such as the issue, outcome, or action item.

New features allow for agent evaluation

AWS also said that it was adding two new capabilities to Amazon Connect in preview: agent evaluation and contact center workflow creation. Using Contact Lens for Amazon Connect, enterprises will be able to create agent performance evaluation forms, the company said, adding that the preview is available across regions including US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (London).

New evaluation criteria, such as agents’ adherence to scripts and compliance, can be added to the review forms, AWS said, adding that machine-learning-based scoring can be activated.

The machine learning scoring will use the same underlying technology used by Contact Lens to analyze conversations.

Additionally, AWS said that it was giving enterprises the chance to create new workflows for agents who use the Amazon Connect Agent Workspace to do daily tasks.

“You can now also use Amazon Connect’s no-code, drag-and-drop interface to create custom workflows and step-by-step guides for your agents,” the company said in a statement.

Amazon Connect uses a pay-for-what-you-use model, and no upfront payments or long-term commitments are required to sign up for the service.


By Bryan Kirschner, Vice President, Strategy at DataStax

From delightful consumer experiences to attacking fuel costs and carbon emissions in the global supply chain, real-time data and machine learning (ML) work together to power apps that change industries.

New research co-authored by Marco Iansiti, the co-founder of the Digital Initiative at Harvard Business School, sheds further light on how a data platform with robust real-time capabilities contributes to delivering competitive, ML-driven experiences in large enterprises.

It’s yet another key piece of evidence showing that there is a tangible return on a data architecture that is cloud-based and modernized – or, as this new research puts it, “coherent.”

Data architecture coherence

In the new report, titled “Digital Transformation, Data Architecture, and Legacy Systems,” researchers defined a range of measures of what they summed up as “data architecture coherence.” Then, using rigorous empirical analysis of data collected from Fortune 1000 companies, they found that every “yes” answer to a question about data architecture coherence results in about 0.7–0.9 more machine learning use cases across the company. Moving from the bottom quartile to the top quartile of data architecture coherence leads to more intensive machine learning capabilities across the corporation, and about 14% more applications and use cases being developed and turned into products.

They identified two architectural elements for processing and delivering data: the “data platform,” which covers the sourcing, ingestion, and storage of data sets, and the “machine learning (ML) system,” which trains and productizes predictive models using input data.

They conclude that what they describe as coherent data platforms “deliver real-time capabilities in a robust manner: they can incorporate dynamic updates to data flows and return instantaneous results to end-user queries.”

These kinds of capabilities enable companies like Uniphore to build a platform that applies AI to sales and customer interactions to analyze sentiment in real time and boost sales and customer satisfaction.

Putting data in the hands of the people that need it

The study results don’t surprise us. In the latest State of the Data Race survey report, over three quarters (78%) of the more than 500 tech leaders and practitioners surveyed told us real-time data is a “must have.” And nearly as many (74%) have ML in production.

Coherent data platforms also can “combine data from various sources, merge new data with existing data, and transmit them across the data platform and among users,” according to Iansiti and his co-author Ruiqing Cao of the Stockholm School of Economics.

This is critical, because at the end of the day, competitive use cases are built, deployed, and iterated by people: developers, data scientists, and business owners – potentially collaborating in new ways at established companies.

The authors of the study call this “co-invention,” and it’s a key requirement. In their view a coherent data architecture “helps traditional corporations translate technical investments into user-centric co-inventions.” As they put it, “Such co-inventions include machine learning applications and predictive analytics embedded across the organization in various business processes, which increase the value of work conducted by data users and decision-makers.”

We agree and can bring some additional perspective on the upside of that kind of approach. In The State of the Data Race 2022 report, two-thirds (66%) of respondents at organizations that made a strategic commitment to leveraging real-time data said developer productivity had improved. And, specifically among developers, 86% of respondents from those organizations said, “technology is more exciting than ever.” That represents a 24-point bump over those organizations where real-time data wasn’t a priority.

The focus on a modern data architecture has never been clearer

Nobody likes data sprawl, data silos, and manual or brittle processes – all aspects of a data architecture that hamper developer productivity and innovation. But the urgency and the upside of modernizing and optimizing the data architecture keeps coming into sharper focus.

For all the current macroeconomic uncertainty, this much is clear: the path to future growth depends on getting your data architecture fit to compete and primed to deliver real time, ML-driven applications and experiences.

Learn more about DataStax here.

About Bryan Kirschner:

Bryan is Vice President, Strategy at DataStax. For more than 20 years he has helped large organizations build and execute strategy when they are seeking new ways forward and a future materially different from their past. He specializes in removing fear, uncertainty, and doubt from strategic decision-making through empirical data and market sensing.


New York-based insurance provider Travelers, with 30,000 employees and 2021 revenues of about $35 billion, is in the business of risk. Managing all of its facets, of course, requires many different approaches and tools, and Mano Mannoochahr, the company’s SVP and chief data & analytics officer, has a crow’s-nest perspective on the immediate and long-term tasks needed to strengthen both the company culture and the customer experience.

“What’s unique about the [chief data officer] role is it sits at the cross-section of data, technology, and analytics,” he says. “And we recognized as a company that we needed to start thinking about how we leverage advancements in technology and tremendous amounts of data across our ecosystem, and tie it with machine learning technology and other things advancing the field of analytics. We needed to think about those disciplines together and make progress to maximize the benefit to our customers and our business overall.”

Another focus is on finding and nurturing talent. It’s a pressing issue not unique to Travelers, but Mannoochahr sees that in order to deliver on those disciplines advancing analytics to foster a healthier business, he and his team recognize the need to cast a wider net.

“We have a tremendous amount of capability already created helping our employees make the best decisions on our front lines,” he says. “But we have to bring in the right talent. This is kind of a team sport for us, so it’s not just data scientists but software engineers, data engineers, and even behavioral scientists to understand how we empathize and best leverage the experience that our frontline employees have, as well as position these capabilities in the best way so we can gain their trust and they can start to trust the data and the tool to make informed decisions. [The pandemic] slowed us down a little, as far as availability of talent, but I think we’ve doubled down on creating more opportunities for our existing talent, in helping them elevate their skills.”

Mannoochahr recently spoke to Maryfran Johnson, CEO of Maryfran Johnson Media and host of the IDG Tech(talk) podcast, about how the CDO coordinates data, technology, and analytics to not only capitalize on advancements in machine learning and AI in real time, but better manage talent and help foster a forward-thinking and ambitious culture.

Here are some edited excerpts of that conversation. Watch the full video below for more insights.

On the role of the Chief Data Officer:

Due to the nature of our business, Travelers has always used data analytics to assess and price risk. What’s unique about the role is it sits at the cross-section of data, technology, and analytics. And we recognized as a company that we needed to start thinking about how we leverage advancements in technology and tremendous amounts of data across our ecosystem, and tie it with machine learning technology and other things advancing the field of analytics. We needed to think about those disciplines together and make progress to maximize the benefit to our customers and our business overall. It’s a unique role and it’s been a great journey. Collectively, the scope spans about 1,600 data analytics professionals in the company and we work closely with our technology partners—more than 3,000 of them—that cover areas of software engineering, infrastructure, cybersecurity, and architecture, for instance.

On business transformation:

We perform around our current business and want to be able to deliver results. But at the same time, we’re thinking about the transformation of the business because opportunities are endless as you start to marry data, technology, and analytics. So the transformation of the next wave that we’re driving is really coming from the nexus of the infinite amount of data being generated, advancements in cloud computing and technology, and, of course, our ability to continue to expand our analytics expertise. We’ve always used these things in some form or fashion to appropriately price risk, set aside a group of reserves for being able to pay out claims, and, of course, serve our customers, agents, and brokers. But what’s changed is a greater world of possibilities. On a yearly basis, we respond to about two million folks from our brokers and agents and process over a million claims per year. So if you put it all together, every one of those transactions or interactions can be reinvented through a lens of technology, AI, or machine learning. So we need to inform our front lines and workers how to make the most of the information available to do their job better. It’s an opportunity to reimagine some of the work on the front line that we’re getting excited about.

On having a data-first culture:

This is not about just the practitioners of this discipline or these capabilities. This is about being able to lift the rest of the more than 29,000 people in the organization and make them better and more informed employees through being able to deliver some set of training to elevate their capabilities. So we’ve been on a mission to raise the water mark for the entire organization. One of the things we’ve done is produce data culture knowledge map training, which is designed to help our broader operation understand that the data we create daily could be with us for decades to come, have a life outside an employee’s own desk, or inform about the many different ways data has been used. We have put about 13,000 employees through this set of training, and it’s received great feedback from the broader organization. Plus, we’ve also started to focus on our business operation leaders and help them understand how they can better utilize analytics and data, overcome biases from a management perspective, and continue validating them so they make the best decisions to run the business.

On sourcing talent:

We have a tremendous amount of capability already created with over 1,000 models being deployed in different parts of the business, helping our employees make the best decisions on our front lines. But opportunities lie ahead, so we have to ensure we bring in the right talent. And I would say this is kind of a team sport for us, so it’s not just data scientists but software engineers, data engineers, and even behavioral scientists to understand how we empathize and best leverage the experience that our frontline employees have, as well as be able to position these tools and capabilities in the best way so we can gain their trust and they can start to trust the data and the tool to make informed decisions. One of my goals, and one of our broader team, is we want to spread the message and help the talent out there understand a lot of the great, challenging problems we’re solving for the business, and how rewarding that work has been for us. But the challenge has only increased from a digitization perspective as COVID-19 hit, which created a lot of demand. It slowed us down a little, as far as availability of talent, but I think we’ve doubled down on creating more opportunities for our existing talent, in helping them elevate their skills.


Machine learning (ML) is a commonly used term across nearly every sector of IT today. And while ML has frequently been used to make sense of big data—to improve business performance and processes and help make predictions—it has also proven invaluable in other applications, including cybersecurity. This article will explain why ML has risen to such importance in cybersecurity, examine some of the challenges of this particular application of the technology, and describe the future that machine learning enables.

Why Machine Learning Has Become Vital for Cybersecurity

The need for machine learning has to do with complexity. Many organizations today possess a growing number of Internet of Things (IoT) devices that aren’t all known or managed by IT. All data and applications aren’t running on-premises, as hybrid and multicloud are the new normal. Users are no longer mostly in the office, as remote work is widely accepted.

Not all that long ago, it was common for enterprises to rely on signature-based detection for malware, static firewall rules for network traffic and access control lists (ACLs) to define security policies. In a world with more devices, in more places than ever, the old ways of detecting potential security risks fail to keep up with the scale, scope and complexity.
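A hash-based signature check, in miniature, shows why that older approach breaks down against malware that mutates. In the sketch below the “signature database” is invented for the example; the point is that an exact-hash lookup flags only byte-for-byte matches, so a single changed byte slips through:

```python
import hashlib

# A toy "signature database": SHA-256 hashes of known-bad payloads.
KNOWN_BAD = {hashlib.sha256(b"malicious payload v1").hexdigest()}

def signature_match(sample: bytes) -> bool:
    """Classic signature detection: exact hash lookup against known samples."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

print(signature_match(b"malicious payload v1"))   # True: exact byte match
print(signature_match(b"malicious payload v1!"))  # False: one added byte evades the signature
```

Real signature engines use more robust patterns than whole-file hashes, but the same brittleness applies: anything the signature author has not seen passes undetected.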

Machine learning is all about training models to learn automatically from large amounts of data; from that learning, a system can identify trends, spot anomalies, make recommendations, and ultimately execute actions. Machine learning is the only practical way to address the growing set of challenges organizations face in cybersecurity: scaling up security solutions, detecting unknown attacks, and detecting advanced attacks, including polymorphic malware. Advanced malware can change forms to evade detection, and a traditional signature-based approach makes it very difficult to catch such attacks. ML turns out to be the best tool to combat them.

What Makes Machine Learning Different in Cybersecurity

Machine learning is well understood and widely deployed across many areas. Among the most popular are image processing for recognition and natural language processing (NLP) to help understand what a human or a piece of text is saying.

Cybersecurity is different from other use cases for machine learning in some respects.

Leveraging machine learning in cybersecurity carries its own challenges and requirements. We will discuss three challenges unique to applying ML in cybersecurity, and three common ML challenges that are more severe in cybersecurity.

Three Unique Challenges for Applying ML to Cybersecurity

Challenge 1: The much higher accuracy requirements. If an image-processing system mistakes a dog for a cat, that might be annoying but is unlikely to have a life-or-death impact. If a machine learning system mistakes a fraudulent data packet for a legitimate one, and that leads to an attack against a hospital and its devices, the impact of the miscategorization can be severe.

Every day, organizations see large volumes of data packets traverse firewalls. Even if only 0.1% of the data is miscategorized by machine learning, huge amounts of normal traffic can be wrongly blocked, severely impacting the business. It’s understandable that in the early days of machine learning, some organizations were concerned that the models wouldn’t be as accurate as human security researchers. It takes time, and it also takes huge amounts of data, to train a machine learning model up to the same level of accuracy as a really skilled human. Humans, however, don’t scale and are among the scarcest resources in IT today. We rely on ML to efficiently scale up cybersecurity solutions. ML can also help us detect unknown attacks that are hard for humans to spot, because ML can build up baseline behaviors and flag any abnormalities that deviate from them.
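The baseline-and-deviation idea can be illustrated with a simple statistical sketch. A real system would model far richer features than a single metric, but the shape is the same: learn what normal looks like, then flag what falls outside it. The traffic numbers and three-sigma threshold below are illustrative assumptions:

```python
from statistics import mean, stdev

def fit_baseline(samples):
    """Learn a baseline (mean, standard deviation) from normal observations."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma

# Requests per minute observed from one device during normal operation.
normal = [98, 102, 100, 97, 103, 99, 101, 100]
base = fit_baseline(normal)
print(is_anomalous(101, base))  # False: typical traffic
print(is_anomalous(500, base))  # True: sudden spike flagged
```

The spike is flagged without anyone having seen that specific attack before, which is exactly the property signature-based detection lacks.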

Challenge 2: The access to large amounts of training data, especially labeled data. Machine learning requires a large amount of data to make models and predictions more accurate. Gaining malware samples is a lot harder than acquiring data in image processing and NLP. There is not enough attack data, and lots of security risk data is sensitive and not available because of privacy concerns.

Challenge 3: The ground truth. Unlike images, the ground truth in cybersecurity might not always be available or fixed. The cybersecurity landscape is dynamic and changing all the time. Not a single malware database can claim to cover all the malware in the world, and more malware is being generated at any moment. What is the ground truth that we should compare to in order to decide our accuracy?

Three ML Challenges Made More Severe in Cybersecurity

There are other challenges that are common for ML in all sectors but more severe for ML in cybersecurity.

Challenge 1: Explainability of machine learning models. Having a comprehensive understanding of the machine learning results is critical to our ability to take proper action.

Challenge 2: Talent scarcity. We have to combine domain knowledge with ML expertise in order for ML to be effective in any area. ML and security are each short of talent on their own; it is even harder to find experts who know both. That’s why it is critical to make sure ML data scientists work together with security researchers, even though they don’t speak the same language, use different methodologies, and have different ways of thinking and different approaches. It is very important for them to learn to work with each other. Collaboration between these two groups is the key to successfully applying ML to cybersecurity.

Challenge 3: ML security. Because of the critical role cybersecurity plays in each business, it is more critical to make sure the ML we use in cybersecurity is secure by itself. There has been research in this area in academia, and we are glad to see, and contribute to, the industry movement in securing ML models and data. Palo Alto Networks is driving innovation and doing everything to make sure our ML is secure.

The goal of machine learning is to make security more efficient and scalable in an effort to help save labor and prevent unknown attacks. It’s hard to use manual labor to scale up to billions of devices, but machine learning can easily do that. And that is the kind of scale organizations truly need to protect themselves in the escalating threat landscape. ML is also critical for detecting unknown attacks in many critical infrastructures. We can’t afford even one attack, which can mean life or death.

How Machine Learning Enables the Future of Cybersecurity

Machine learning supports modern cybersecurity solutions in a number of different ways. Individually, each one is valuable, and together they are game-changing for maintaining a strong security posture in a dynamic threat landscape.

Identification and profiling: With new devices getting connected to enterprise networks all the time, it’s not easy for an IT organization to be aware of them all. Machine learning can be used to identify and profile devices on a network. That profile can determine the different features and behaviors of a given device.
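As a toy illustration of profiling (the flow representation here is invented for the example), a device profile can be as simple as the set of network flows the device has been observed to use, with anything outside that set surfaced for review:

```python
def build_profile(observed_flows):
    """Profile a device as the set of (port, protocol) flows seen while learning."""
    return set(observed_flows)

def flag_deviations(profile, new_flows):
    """Return flows that fall outside the device's learned profile."""
    return [flow for flow in new_flows if flow not in profile]

# An IP camera that has only ever spoken HTTPS and DNS...
profile = build_profile([(443, "tcp"), (53, "udp")])

# ...suddenly opens a telnet connection.
print(flag_deviations(profile, [(443, "tcp"), (23, "tcp")]))  # → [(23, 'tcp')]
```

Production profiling systems learn many more behavioral features per device, but the principle is the same: the profile defines expected behavior, and deviations become candidates for the anomaly detection described next.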

Automated anomaly detection: Using machine learning to rapidly identify known bad behaviors is a great use case for security. After first profiling devices and understanding regular activities, machine learning knows what’s normal and what’s not.

Zero-day detection: With traditional security, a bad action has to be seen at least once for it to be identified as a bad action. That’s the way that legacy signature-based malware detection works. Machine learning can intelligently identify previously unknown forms of malware and attacks to help protect organizations from potential zero-day attacks.

Insights at scale: With data and applications in many different locations, being able to identify trends across large volumes of devices is just not humanly possible. Machine learning can do what humans cannot, enabling automation for insights at scale.

Policy recommendations: The process of building security policies is often a very manual effort that has no shortage of challenges. With an understanding of what devices are present and what is normal behavior, machine learning can help to provide policy recommendations for security devices, including firewalls. Instead of having to manually navigate around different conflicting access control lists for different devices and network segments, machine learning can make specific recommendations that work in an automated approach.

With more devices and threats coming online every day, and human security resources in scarce supply, only machine learning can sort complicated situations and scenarios at scale to enable organizations to meet the challenge of cybersecurity now and in the years to come.

Learn more about machine learning in cybersecurity here.

About Dr. May Wang:

Dr. May Wang is the CTO of IoT Security at Palo Alto Networks and the co-founder, Chief Technology Officer (CTO), and board member of Zingbox, which was acquired by Palo Alto Networks in 2019 for its Internet of Things (IoT) security solutions.


In my last column, I outlined some of the cybersecurity issues around user authentication for verification of consumer and business accounts.

Among other things, I advocated that in this remote/hybrid work era, CISOs must protect their company’s access to data by having a cyber-attack plan ready to implement, understanding the new tools and tactics that cyber thieves use, and being aware of newer AI-based technologies that can lessen cybersecurity risks. But first and foremost, I stressed that to better protect their organizations, CISOs needed to adopt (if they hadn’t done so already) some of the evolving identity and access management technologies being offered by a crop of emerging companies. 

Responses from other industry professionals generally agreed that there are issues around authentication, including multi-factor authentication (MFA), but some asked, “Isn’t FIDO supposed to eliminate the risks from all that? Didn’t the FIDO Alliance just recently announce new UX guidelines to speed up MFA adoption with FIDO security keys?” Well, yes, but there is more that tech pros can do. I’ll explain more below.  


FIDO as an industry initiative was set up a decade ago to standardize strong authentication technologies and reduce reliance on passwords. It’s basically a stronger set of security authentication measures, in essence, a better security ‘handshake’ between the device and a third-party service. Companies in the alliance include board-level members like Apple, Amazon, Meta, Microsoft, Google, and other tech heavy hitters. Collectively, they are seeking to solve problems caused by users needing to create, maintain, and remember multiple usernames and passwords.

While these initiatives are great, they are only solving an authentication problem between the device and the end service. FIDO provides seamless and secure authentication to a service from a browser, your phone, or an app. But the reality is this is a device authentication, not a human one. There’s still a step on the front end, where the user has to authenticate themselves with the device, and this can be compromised. 

Identity and access – the user authentication challenge 

For example, using my phone’s face recognition access, my kids can hold my phone up to my face, and boom, they have access. All of the added protection provided by FIDO just got wiped out. My kids could have used (and abused) my accounts. Thankfully, I’ve raised them right. Or at least I hope so! 

In addition, someone could create a fake identity representing me on their device. From that point forward the third-party service thinks that I am the user because the device or browser has been authenticated, even though it is really a hacker who has hijacked my identity to set up the device. 

Obviously, there’s still a need for a layer of continuous authentication and user identity management to help protect against these exploits. This is about identifying the user versus the machine on an ongoing basis, not just at set-up or log-in. How can we do a better job of identifying who our real users are, while also eliminating former users (employees and contractors) from the ranks of those who have access to some of the most critical of systems? 

This is where I think that some of the newer products emerging from the start-up world will be very beneficial to protecting our organizations. 

Man vs. machine 

Solving the human user identity and authentication issue is just part of the problem. A recent article in Security Affairs notes that “while people need usernames and passwords to identify themselves, machines also need to identify themselves to one another. But instead of usernames and passwords, machines use keys and certificates that serve as machine identities so they can connect and communicate securely.” These can also be compromised by hackers.

Managing the identity of devices used in cloud services, SaaS applications, and other systems is perhaps becoming an even bigger problem. Organizations often set up a new web service, create an identity for it and the IT assets associated with it, and once it’s up and running, IT staffers are likely not rushing to change or update security configurations on those systems. Once the initial dependencies are set up between devices, it gets that much harder to sever or update those complex relationships.  

However, good security practice dictates that those credentials be refreshed regularly, which can be a huge management problem. As a result, older, stale credentials become a softer target to attack.
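One concrete mitigation is a rotation policy that treats machine credentials as expiring assets. A minimal age check might look like the sketch below; the 90-day window is an arbitrary example, and real systems would pull issuance dates from a certificate or secrets store rather than hard-coded values:

```python
from datetime import datetime, timedelta, timezone

MAX_CREDENTIAL_AGE = timedelta(days=90)  # hypothetical rotation policy

def needs_rotation(issued_at, now=None):
    """Flag a machine credential whose age exceeds the rotation window."""
    now = now or datetime.now(timezone.utc)
    return now - issued_at > MAX_CREDENTIAL_AGE

check_time = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(needs_rotation(datetime(2023, 12, 1, tzinfo=timezone.utc), check_time))  # False: recently issued
print(needs_rotation(datetime(2023, 1, 1, tzinfo=timezone.utc), check_time))   # True: long overdue
```

Running a check like this continuously, rather than only at setup time, is what turns stale credentials from a standing liability into a routinely pruned inventory.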

Hackers are increasingly exploiting the credentials of machines, not humans, to launch their attacks. Just as they can fool humans, hackers can fool machines into handing over sensitive data. According to Security Affairs, given that machine identities are the least understood and most weakly protected parts of enterprise networks, it should come as no surprise that cybercriminals are aggressively exploiting them. From Stuxnet to SolarWinds, attackers are increasingly abusing unprotected machine identities to launch a variety of attacks. In fact, over the past four years, threats targeting weak machine identities have increased by 400%.

This is a big deal.  

The bigger picture 

Ultimately, as companies continue to expand their use of hybrid and multi-cloud digital services, the more human and machine entities there will be to manage.  

CIOs must lead IT operations teams to ensure management of the whole identity and access lifecycle for both humans and machines. This is likely to involve new AI-connected tools that seamlessly handle integration, detection, and automation. These tools can equally limit or extend access to certain functions for both human personnel and automated actions, improving security while bringing down costs by pruning unnecessary account licenses. 

In addition, these solutions will fill a void that today still creates major headaches around compliance and reporting. Building a full audit trail into your existing systems is a start. With automations already in place, IT staff can then better manage the governance.  

Ready or not, CIOs and CISOs need to adapt to the evolving identity and access management landscape to adopt a holistic strategy or risk security breaches, failed compliance, and costly fines. 
