New York-based insurance provider Travelers, with 30,000 employees and 2021 revenues of about $35 billion, is in the business of risk. Managing risk in all its facets requires many different approaches and tools, and Mano Mannoochahr, the company’s SVP and chief data & analytics officer, has a crow’s-nest view of the immediate and long-term work required to strengthen both the company’s culture and its service to customers.

“What’s unique about the [chief data officer] role is it sits at the cross-section of data, technology, and analytics,” he says. “And we recognized as a company that we needed to start thinking about how we leverage advancements in technology and tremendous amounts of data across our ecosystem, and tie it with machine learning technology and other things advancing the field of analytics. We needed to think about those disciplines together and make progress to maximize the benefit to our customers and our business overall.”

Another focus is finding and nurturing talent. It’s a pressing issue not unique to Travelers, but Mannoochahr recognizes that to deliver on the disciplines advancing analytics and to foster a healthier business, he and his team need to cast a wider net.

“We have a tremendous amount of capability already created helping our employees make the best decisions on our front lines,” he says. “But we have to bring in the right talent. This is kind of a team sport for us, so it’s not just data scientists but software engineers, data engineers, and even behavioral scientists to understand how we empathize and best leverage the experience that our frontline employees have, as well as position these capabilities in the best way so we can gain their trust and they can start to trust the data and the tool to make informed decisions. [The pandemic] slowed us down a little, as far as availability of talent, but I think we’ve doubled down on creating more opportunities for our existing talent, in helping them elevate their skills.”

Mannoochahr recently spoke to Maryfran Johnson, CEO of Maryfran Johnson Media and host of the IDG Tech(talk) podcast, about how the CDO coordinates data, technology, and analytics to not only capitalize on advancements in machine learning and AI in real time, but better manage talent and help foster a forward-thinking and ambitious culture.

Here are some edited excerpts of that conversation. Watch the full video below for more insights.

On the role of the Chief Data Officer:

Due to the nature of our business, Travelers has always used data analytics to assess and price risk. What’s unique about the role is it sits at the cross-section of data, technology, and analytics. And we recognized as a company that we needed to start thinking about how we leverage advancements in technology and tremendous amounts of data across our ecosystem, and tie it with machine learning technology and other things advancing the field of analytics. We needed to think about those disciplines together and make progress to maximize the benefit to our customers and our business overall. It’s a unique role and it’s been a great journey. Collectively, the scope spans about 1,600 data analytics professionals in the company and we work closely with our technology partners—more than 3,000 of them—that cover areas of software engineering, infrastructure, cybersecurity, and architecture, for instance.

On business transformation:

We perform around our current business and want to be able to deliver results. But at the same time, we’re thinking about the transformation of the business, because opportunities are endless as you start to marry data, technology, and analytics. So the next wave of transformation we’re driving really comes from the nexus of the infinite amount of data being generated, advancements in cloud computing and technology, and, of course, our ability to continue to expand our analytics expertise. We’ve always used these things in some form or fashion to appropriately price risk, set aside reserves for being able to pay out claims, and, of course, serve our customers, agents, and brokers. But what’s changed is a greater world of possibilities. On a yearly basis, we respond to about two million folks from our brokers and agents and process over a million claims. So if you put it all together, every one of those transactions or interactions can be reinvented through a lens of technology, AI, or machine learning. So we need to inform our frontline workers how to make the most of the information available to do their job better. It’s an opportunity to reimagine some of the work on the front line, and that’s what we’re getting excited about.

On having a data-first culture:

This is not about just the practitioners of this discipline or these capabilities. This is about lifting the rest of the more than 29,000 people in the organization and making them better, more informed employees by delivering training that elevates their capabilities. So we’ve been on a mission to raise the water mark for the entire organization. One of the things we’ve done is produce data culture knowledge map training, which is designed to help our broader operation understand that the data we create daily could be with us for decades to come, has a life beyond an employee’s own desk, and informs the many different ways data is used. We’ve put about 13,000 employees through this set of training, and it’s received great feedback from the broader organization. Plus, we’ve also started to focus on our business operation leaders, helping them understand how they can better utilize analytics and data, overcome biases from a management perspective, and continue validating their decisions so they make the best ones to run the business.

On sourcing talent:

We have a tremendous amount of capability already created, with over 1,000 models deployed in different parts of the business, helping our employees make the best decisions on our front lines. But opportunities lie ahead, so we have to ensure we bring in the right talent. And I would say this is kind of a team sport for us, so it’s not just data scientists but software engineers, data engineers, and even behavioral scientists, so we understand how to empathize with and best leverage the experience our frontline employees have, and position these tools and capabilities in the best way, so we can gain their trust and they can start to trust the data and the tools to make informed decisions. One of my goals, and one of our broader team’s, is to spread the message and help the talent out there understand a lot of the great, challenging problems we’re solving for the business, and how rewarding that work has been for us. But the challenge has only increased from a digitization perspective since COVID-19 hit, which created a lot of demand. It slowed us down a little, as far as availability of talent, but I think we’ve doubled down on creating more opportunities for our existing talent and on helping them elevate their skills.


The shift to e-learning has changed education for good. Students and educators now expect anytime, anywhere access to their learning environments and are increasingly demanding access to modern, cloud-based technologies that enable them to work flexibly, cut down their workloads, and reach their full academic potential.

This means that institutions need to take a holistic approach to education technology (EdTech), including platforms used for teaching and learning, to not only meet these demands but to address ever-present challenges such as student success, retention, accessibility, and educational integrity.

However, for the many institutions embarking on this digital transformation journey and looking to embrace EdTech more fully, the process can be daunting. Not only are IT leaders often faced with issues related to cost, infrastructure, and security, but some solutions can make it challenging for schools to deliver inclusive, consistent educational experiences to all of their students.

For example, some solutions may require an upheaval of existing tools and infrastructure, placing a strain on already-busy IT teams. Technology leaders are also looking to ensure the security of their schools’ digital ecosystem and that educators and students receive sufficient training in order to use these tools in the classroom.

Other EdTech solutions offer a one-size-fits-all approach to education, making it difficult for some students to keep up with online learning and for educators to adapt to pupils’ different needs. Similarly, while some solutions enable teachers and students to work and learn remotely, they struggle to adapt to hybrid teaching models.

Anthology’s learning management system (LMS), Blackboard Learn, takes a different approach. Designed to make the lives of educators and learners easier, Blackboard Learn creates experiences that are informed and personalised to support learning, teaching, and leading more effectively.

With students and teachers alike demanding more flexibility, Blackboard Learn can be used to replace or to supplement traditional face-to-face classes, enabling institutions to realise the full benefits of a hybrid environment while ensuring nobody is left behind. Personalised learning experiences, for example, empower students to learn on the go and in ways that best meet their individual needs, while helping educators deliver inclusive, consistent experiences for learners of all abilities.

It also helps students gain independence and become more autonomous. Real-time, data-driven insights let learners keep track of their own progress, identify next steps, and get the support they need when they need it. These insights also enable educators to identify disengaged or struggling learners sooner, helping to promote more positive outcomes for students, while Blackboard’s customisable feedback ensures all students are on track for assessment success.

Anthology’s LMS can make life easier for IT leaders, too. The SaaS application was built with security and privacy in mind and integrates smoothly with institutions’ existing tools and workflows. What’s more, because it runs on the Amazon Web Services (AWS) Cloud, institutions benefit from continuous delivery of smaller updates, which require zero downtime.

This also means that Anthology has the agility to develop capabilities and features quickly, such as its built-in accessibility and plagiarism tools. Because these features are out-of-the-box, institutions can save money while benefitting from a streamlined, scalable EdTech stack that can continue to evolve as they do.

With Blackboard Learn by Anthology, educators can rest assured they have the foundation of an EdTech ecosystem that equips all students and teachers with the flexibility to create more personalised learning experiences that support student success, while improving efficiency and setting their institution up for what’s to come in higher education.

For more insights into understanding student expectations, click here to read Anthology’s whitepaper.


Machine learning (ML) is a commonly used term across nearly every sector of IT today. And while ML has frequently been used to make sense of big data—to improve business performance and processes and help make predictions—it has also proven invaluable in other applications, including cybersecurity. This article explains why ML has risen to such importance in cybersecurity, shares some of the challenges of this particular application of the technology, and describes the future that machine learning enables.

Why Machine Learning Has Become Vital for Cybersecurity

The need for machine learning has to do with complexity. Many organizations today possess a growing number of Internet of Things (IoT) devices that aren’t all known or managed by IT. Data and applications no longer run only on-premises, as hybrid and multicloud are the new normal. And users are no longer mostly in the office, as remote work is widely accepted.

Not all that long ago, it was common for enterprises to rely on signature-based detection for malware, static firewall rules for network traffic and access control lists (ACLs) to define security policies. In a world with more devices, in more places than ever, the old ways of detecting potential security risks fail to keep up with the scale, scope and complexity.

Machine learning is all about training models to learn automatically from large amounts of data; from that learning, a system can identify trends, spot anomalies, make recommendations, and ultimately execute actions. Only machine learning can address the growing set of challenges organizations face in cybersecurity: scaling up security solutions, detecting unknown attacks, and detecting advanced attacks, including polymorphic malware. Advanced malware can change form to evade detection, which makes a traditional signature-based approach ineffective against such attacks. ML turns out to be the best way to combat them.
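To make the contrast concrete, here is a minimal, hypothetical sketch, not any vendor’s production pipeline: an exact-hash signature check fails the moment a polymorphic sample changes a single byte, while a model trained on structural features can still flag the variant. The file features and training rows below are invented purely for illustration.

```python
# Sketch: signature matching vs. a learned, feature-based detector.
import hashlib

from sklearn.ensemble import RandomForestClassifier

# Signature database: exact hashes of previously seen malicious files.
known_sample = b"MZ" + b"malicious-payload-v1"
KNOWN_BAD_HASHES = {hashlib.sha256(known_sample).hexdigest()}

def signature_detect(file_bytes: bytes) -> bool:
    """Classic signature check: flags exact hash matches only."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES

# A polymorphic variant mutates its payload, so its hash no longer matches.
variant_bytes = b"MZ" + b"malicious-payload-v2"
print(signature_detect(known_sample))   # True:  known sample is caught
print(signature_detect(variant_bytes))  # False: the variant evades the signature

# Feature-based model, trained on structural traits that tend to survive
# mutation. The features [entropy, num_imports, is_packed, size_kb] and the
# training rows are made up for this illustration.
X_train = [
    [7.9, 3, 1, 210],   # packed, high-entropy samples labeled malicious (1)
    [7.8, 5, 1, 190],
    [4.1, 60, 0, 800],  # ordinary binaries labeled benign (0)
    [3.9, 45, 0, 650],
]
y_train = [1, 1, 0, 0]
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# The unseen variant has a new hash but familiar structural traits.
print(clf.predict([[7.7, 4, 1, 205]])[0])  # 1: flagged despite no signature match
```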

What Makes Machine Learning Different in Cybersecurity

Machine learning is well understood and widely deployed across many areas. Among the most popular are image processing for recognition and natural language processing (NLP) to help understand what a human or a piece of text is saying.

Cybersecurity differs from these use cases in some important respects, and leveraging machine learning in it carries its own challenges and requirements. We will discuss three challenges unique to applying ML to cybersecurity, followed by three challenges that are common to all applications of ML but more severe in cybersecurity.

Three Unique Challenges for Applying ML to Cybersecurity

Challenge 1: The much higher accuracy requirements. For example, if an image-processing system mistakes a dog for a cat, that might be annoying but likely has no life-or-death impact. If a machine learning system mistakes a fraudulent data packet for a legitimate one, and that packet leads to an attack against a hospital and its devices, the impact of the miscategorization can be severe.

Every day, organizations see large volumes of data packets traverse their firewalls. Even if only 0.1% of that data is miscategorized by machine learning, huge amounts of normal traffic can be wrongly blocked: at a billion packets a day, a 0.1% false-positive rate means a million legitimate packets blocked daily, which would severely impact the business. It’s understandable that in the early days of machine learning, some organizations were concerned that the models wouldn’t be as accurate as human security researchers. It takes time, and it takes huge amounts of data, to train a machine learning model up to the accuracy of a really skilled human. Humans, however, don’t scale, and they are among the scarcest resources in IT today. We rely on ML to scale up cybersecurity solutions efficiently. ML can also help us detect unknown attacks that are hard for humans to spot, because ML can build up baseline behaviors and flag any abnormalities that deviate from them.

Challenge 2: The access to large amounts of training data, especially labeled data. Machine learning requires a large amount of data to make models and predictions more accurate, and gaining malware samples is much harder than acquiring data in image processing and NLP. There is not enough attack data, and much security-relevant data is sensitive and unavailable because of privacy concerns.

Challenge 3: The ground truth. Unlike with images, the ground truth in cybersecurity might not always be available or fixed. The cybersecurity landscape is dynamic and changing all the time. No single malware database can claim to cover all the malware in the world, and more malware is being generated at every moment. What ground truth should we compare against in order to measure our accuracy?

Three ML Challenges Made More Severe in Cybersecurity

There are other challenges that are common for ML in all sectors but more severe for ML in cybersecurity.

Challenge 1: Explainability of machine learning models. Having a comprehensive understanding of the machine learning results is critical to our ability to take proper action.

Challenge 2: Talent scarcity. We have to combine domain knowledge with ML expertise for ML to be effective in any area. ML and security are each short of talent on their own; it is even harder to find experts who know both. That’s why we found it critical to have ML data scientists work together with security researchers, even though they don’t speak the same language and have different methodologies, ways of thinking, and approaches. It is very important for them to learn to work with each other, and collaboration between these two groups is the key to successfully applying ML to cybersecurity.

Challenge 3: ML security. Because of the critical role cybersecurity plays in each business, it is all the more important to make sure the ML we use in cybersecurity is itself secure. There has been research in this area in academia, and we are glad to see, and contribute to, the industry movement toward securing ML models and data. Palo Alto Networks is driving innovation and doing everything possible to make sure our ML is secure.

The goal of machine learning is to make security more efficient and scalable, saving labor and preventing unknown attacks. It’s hard to scale manual labor up to billions of devices, but machine learning can easily do that, and that is the kind of scale organizations truly need to protect themselves in the escalating threat landscape. ML is also critical for detecting unknown attacks against critical infrastructure, where we can’t afford even one attack that could mean life or death.

How Machine Learning Enables the Future of Cybersecurity

Machine learning supports modern cybersecurity solutions in a number of different ways. Individually, each one is valuable, and together they are game-changing for maintaining a strong security posture in a dynamic threat landscape.

Identification and profiling: With new devices getting connected to enterprise networks all the time, it’s not easy for an IT organization to be aware of them all. Machine learning can be used to identify and profile devices on a network. That profile can determine the different features and behaviors of a given device.

Automated anomaly detection: Using machine learning to rapidly identify known bad behaviors is a great use case for security. After first profiling devices and understanding regular activities, machine learning knows what’s normal and what’s not.
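As an illustration of this profile-then-detect idea, building on the device profiling described above, the sketch below fits an anomaly detector to a baseline of normal device traffic and flags deviations. The feature set (packets per minute, distinct destinations, mean payload bytes) and the numbers are hypothetical; a real system would learn from far richer telemetry.

```python
# Sketch: learn a device-class baseline, then flag behavior that deviates.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Profiling phase: traffic features observed from one device class during
# normal operation: [packets/min, distinct destinations, mean payload bytes].
baseline = rng.normal(loc=[30, 3, 400], scale=[5, 1, 50], size=(500, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: one consistent with the baseline, one device that has
# suddenly started spraying connections at many hosts.
new = np.array([
    [29, 3, 390],    # matches the learned profile
    [250, 120, 60],  # burst of short connections to many destinations
])
print(detector.predict(new))  # [ 1 -1 ]: the second sample is flagged
```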

Zero-day detection: With traditional security, a bad action has to be seen at least once for it to be identified as a bad action. That’s the way that legacy signature-based malware detection works. Machine learning can intelligently identify previously unknown forms of malware and attacks to help protect organizations from potential zero-day attacks.

Insights at scale: With data and applications in many different locations, identifying trends across large volumes of devices is just not humanly possible. Machine learning can do what humans cannot, enabling automation for insights at scale.

Policy recommendations: The process of building security policies is often a very manual effort with no shortage of challenges. With an understanding of what devices are present and what behavior is normal, machine learning can help provide policy recommendations for security devices, including firewalls. Instead of someone manually navigating different, conflicting access control lists for different devices and network segments, machine learning can make specific recommendations in an automated way.
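The sketch below illustrates the recommendation idea with invented device profiles and thresholds: once devices are profiled, known-good flows observed per profile can be aggregated into candidate allowlist rules, leaving rare, unexpected flows for human review.

```python
# Sketch: turning observed per-profile flows into candidate firewall rules.
from collections import Counter

# (device_profile, destination_port) pairs drawn from profiled traffic.
observed_flows = [
    ("ip_camera", 554), ("ip_camera", 554), ("ip_camera", 443),
    ("ip_camera", 554), ("hvac_controller", 47808),
    ("hvac_controller", 47808), ("ip_camera", 23),  # one-off telnet flow
]

MIN_SUPPORT = 2  # only recommend rules for behavior seen repeatedly

for (profile, port), n in sorted(Counter(observed_flows).items()):
    if n >= MIN_SUPPORT:
        print(f"recommend ALLOW {profile} -> tcp/{port} (seen {n}x)")
    else:
        print(f"hold {profile} -> tcp/{port} for review (seen {n}x)")
```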

With more devices and threats coming online every day, and human security resources in scarce supply, only machine learning can sort complicated situations and scenarios at scale to enable organizations to meet the challenge of cybersecurity now and in the years to come.

Learn more about machine learning in cybersecurity here.

About Dr. May Wang:

Dr. May Wang is the CTO of IoT Security at Palo Alto Networks and the co-founder, chief technology officer (CTO), and a board member of Zingbox, which was acquired by Palo Alto Networks in 2019 for its Internet of Things (IoT) security solutions.


Technology is hardly the only industry experiencing hiring challenges at the moment, but resignations in tech still rank among the highest across all industries, with a 4.5% increase in resignations in 2021 compared with 2020, according to Harvard Business Review.

For the most part, these employees aren’t leaving the industry altogether; they’re moving to companies that can offer them what they want. Flexible schedules and work-life balance? Absolutely. Higher salaries? Of course. But one of the primary reasons people in tech, particularly developers, switch or consider switching roles is that they want more opportunities to learn. Developers don’t want to quit: they want to face new challenges, acquire new skills, and find new ways to solve problems.

Ensuring access to learning and growth opportunities is part of the mandate for tech leaders looking to attract and retain the best people. A culture of continuous learning that encourages developers to upskill and reskill will also give your employees every opportunity to deliver more value to your organization.

Read on to learn how and why expanding access to learning helps you build higher-performing teams and a more inherently resilient organization.

Developers want more learning opportunities — and leadership should listen

Giving developers opportunities to learn has a major, positive impact on hiring, retention, and team performance. According to a Stack Overflow pulse survey, more than 50% of developers would consider leaving a job because it didn’t offer enough chances for learning and growth, while a similar percentage would stick with a role because it did offer these opportunities. And 50% of developers report that access to learning opportunities contributes to their happiness at work.

Yet most developers feel they don’t get enough time at work to devote to learning. Via a Twitter poll, Stack Overflow found that, when asked how much time they get at work to learn, nearly half of developers (46%) said “hardly any or none.” Considering that more than 50% of developers would consider leaving a job if it didn’t offer enough learning time, it’s clear that one way to help solve hiring and retention challenges is to give employees more chances to pick up new skills and evolve existing ones.

How can tech leaders and managers solve for this? One key is to create an environment where employees feel psychologically safe investing time in learning and asking for more time when they need it. High-pressure environments tend to emphasize wasted time (“How much time did you waste doing that?”) instead of invested time (“I invested 10 hours this week in learning this”). In this context, plenty of employees are afraid to ask about devoting work time to learning.

Company leadership and team managers can make this easier by consistently communicating the value of learning and modeling a top-down commitment to continuous learning. Executives and senior leaders can share their knowledge with employees through fireside chats and AMAs to underscore the importance of this culture shift. Managers should take the same approach with their teams. You can’t expect your more junior employees to invest time in learning if you haven’t made it clear, at every level of your organization, that learning matters.

Expanding learning opportunities improves team performance and organizational resiliency

Elevating the importance of learning helps sustain performance and competency in your engineering teams. But it does more than improve retention or team-level performance: it also builds organizational resiliency.

Some of your employees are always going to leave: to seek new adventures, to combat burnout or boredom, to make more money. Leadership no longer has the luxury of hiring for a specific skill and then considering that area covered forever. Technology and technology companies are changing too fast for that. Retaining talent is certainly important, but ultimately leaders should be focused on creating organizations that are resilient rather than fragile. The loss of one or two key individuals shouldn’t impede the progress of multiple teams or disrupt the organization as a whole.

There’s nothing you can do to completely eliminate turnover, but you can take steps to make your organization more resilient when turnover inevitably occurs:

Ensure that your teams don’t break when people leave. Incorporating more opportunities to learn into your developers’ working lives helps offset the knowledge and productivity losses that can happen when employees move on, taking their expertise with them. How many times have you heard a variation of this exchange: “How does this system/tool work?” “I don’t know; go ask [expert].” But what happens when that expert leaves? Resilient teams and organizations don’t stumble over the loss of a few key people.

Give employees access to the learning opportunities they want. As we’ve said, developers prize roles that allow them to learn on the job. Access to learning opportunities is a major factor they weigh when deciding whether to leave a current job or accept a new one. Expanding learning opportunities for developers makes individual employees happier and more valuable to the organization while increasing organizational resiliency.

Avoid asking your high-performers to do all the teaching. Implicitly or explicitly asking your strongest team members to serve as sources of truth and wisdom for your entire team is a bad idea. It sets your experts up for unhappiness and burnout, factors likely to push them out the door. Create a system where both new and seasoned employees can self-serve information so they can unstick themselves when they get stuck.

Four steps to prioritize learning and attract/retain high-performance teams

When it comes to learning, there are four major steps you can take to attract and retain the best talent and increase organizational resiliency.

1. Surface subject matter experts

Your team has questions? Chances are, someone at your company has answers. There are experts (and potential experts) throughout your organization whose knowledge can eliminate roadblocks and improve processes. Your challenge is to uncover these experts — and plant the seeds for future experts by giving your employees time to learn new skills and investigate new solutions.

Lower the barrier to entry by making it fast, simple, and intuitive for people to contribute to your knowledge platform. Keep in mind that creating asynchronous paths for your employees to find and connect with experts enables knowledge sharing without creating additional distractions or an undue burden for those experts.

How Stack Overflow for Teams surfaces subject matter experts:

Spotlights subject matter experts (SMEs) across teams and departments to connect people with questions to people with answers
Enables upskilling and reskilling by allowing teams and individuals to learn from one another
Asynchronous communication allows employees to ask and answer questions without disrupting their established workflows
Q&A format lowers barriers to contribution and incentivizes users to explore and contribute to knowledge resources

2. Capture and preserve knowledge

Establishing practices to capture and preserve information is essential for making learning scale. The goal is to convert individual learnings and experiences into institutional knowledge that informs best practices so that everyone, and the organization as a whole, can benefit. That knowledge should be easily discoverable and its original context preserved for future knowledge-seekers. To capture and preserve knowledge effectively, you also need to make it easy for users to engage with your knowledge platform.

How Stack Overflow for Teams captures and preserves knowledge:

Collects knowledge continuously to preserve information and context without disrupting developers’ workflows
Makes knowledge searchable, so employees can self-serve answers to their questions and find solutions others have already worked out
Compared with technical documentation, Q&A format requires a shorter time investment for both people with questions and people with answers

3. Make information centralized and accessible

The good news is that nobody at your company has to know everything. They just need to know where to find it. After all, knowledge is only valuable if people can locate it when they need it. That’s why knowledge resources should be easy to find, retrieve, and share across teams.

This is particularly critical as your organization scales: new hires can teach themselves the ropes without requiring extensive, synchronous communication with more seasoned employees who already have plenty of responsibilities and find themselves answering the same questions over and over again.

How Stack Overflow for Teams makes information centralized and accessible:

Makes information easy to locate, access, and share
Speeds up onboarding and shortens time-to-value for new hires
Allows users to make meaningful contributions to knowledge resources without investing huge amounts of time or interrupting their flow state

4. Keep knowledge healthy and resilient

Knowledge isn’t immune to its own kind of tech debt. The major problem with static documentation is that the instant you hit Save, your content has started its steady slide toward being out of date. Like code, regardless of its scale, information must be continually maintained in order to deliver its full value.

Keeping content healthy — that is, fresh, accurate, and up-to-date — is essential. When your knowledge base is outdated or incomplete, employees start to lose trust in your knowledge. 

Once trust starts eroding, people stop contributing to your knowledge platform, and it grows even more outdated. Since SMEs are often largely responsible for ensuring that content is complete, properly edited, and consistently updated, keeping content healthy can be yet another heavy burden on these individuals. That’s why a crowdsourced platform that encourages the community to curate, update, and improve content is so valuable.

How Stack Overflow for Teams keeps knowledge healthy and resilient:

Our Content Health feature intelligently surfaces knowledge that might be outdated, inaccurate, or untrustworthy, encouraging more engagement and ensuring higher-quality knowledge resources
Content is curated, updated, and maintained by the community, reducing the burden on SMEs
The platform automatically spotlights the most valuable, relevant information as employees vote on the best answers, thereby increasing user confidence in your knowledge

Resiliency requires learning

You can’t build a resilient organization without putting learning at the center of how your teams operate. Not only is offering access to learning and growth opportunities a requirement for attracting and retaining top talent, but fostering a culture of continuous learning protects against knowledge loss, keeps individuals and teams working productively, and encourages employees to develop skills that will make them even more valuable to your organization.

To learn more about Stack Overflow for Teams, visit us here.


For the healthcare sector, siloed data is a major bottleneck in the way of innovative use cases such as drug discovery, clinical trials, and predictive healthcare. Aster DM Healthcare, an Indian healthcare institution, has now found a solution to this problem, one that could enable several cutting-edge use cases.

A single patient generates nearly 80MB of data annually through imaging and electronic medical records. RBC Capital Markets projects that the annual growth rate of healthcare data will reach 36% by 2025. “Genomic data alone is predicted to be 2 to 40 exabytes by 2025, eclipsing the amount of data acquired by all other technological platforms,” it says.

Although AI-enabled solutions in areas such as medical imaging are helping to address pressing challenges such as staffing shortages and aging populations, accessing silos of relevant data spread across various hospitals, geographies, and other health systems, while complying with regulatory policies, is a massive challenge.

Dr Harsha Rajaram, COO at Aster Telehealth, India & GCC

“In a distributed learning setup, data from different hospitals must be brought together into a centralised data repository for model training, raising a lot of concerns about data privacy. Hospitals are sceptical about participating in such initiatives, fearing loss of control over patient data, even though they see immense value in it,” says Dr Harsha Rajaram, COO at Aster Telehealth, India & GCC. Its parent firm, Aster DM Healthcare, is a conglomerate with hospitals, clinics, pharmacies, and healthcare consultancy services in its portfolio.

To overcome these challenges, Aster Innovation and Research Centre, the innovation hub of Aster DM Healthcare, has deployed its Secure Federated Learning Platform (SFLP) that securely and rapidly enables access to anonymised and structured health data for research and collaboration.

Federated learning is a method of training AI algorithms on data stored at multiple decentralised sources, without moving that data. The SFLP allows access to diverse data sources without compromising data privacy, because the data remains at the source while model training draws on multiple data sources.
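As a concrete illustration of the concept, here is a minimal federated-averaging sketch in plain PyTorch, with synthetic stand-ins for two hospitals’ private datasets. It is only a sketch of the general technique; Aster’s production platform is built on OpenFL, with director/aggregator and envoy/collaborator nodes as described below, and adds hardware-backed security.

```python
# Minimal federated-averaging (FedAvg) sketch: each "hospital" trains locally
# on data that never leaves it; only model weights travel to the aggregator.
import copy

import torch
from torch import nn

def local_train(global_model: nn.Module, data: torch.Tensor,
                labels: torch.Tensor) -> dict:
    """One site's round: train a private copy, return only the weights."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(5):
        opt.zero_grad()
        loss = loss_fn(model(data).squeeze(1), labels)
        loss.backward()
        opt.step()
    return model.state_dict()

def federated_average(states: list) -> dict:
    """Aggregator step: average the sites' weights parameter by parameter."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

global_model = nn.Linear(10, 1)  # stand-in for a real imaging model

# Synthetic stand-ins for two hospitals' private datasets.
sites = [(torch.randn(64, 10), torch.randint(0, 2, (64,)).float())
         for _ in range(2)]

for round_num in range(3):  # each round: local training, then averaging
    states = [local_train(global_model, x, y) for x, y in sites]
    global_model.load_state_dict(federated_average(states))
```

Because only the weight updates cross the network, each site’s images and records never leave its own infrastructure.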

“The platform marks a paradigm shift by getting the compute to the data rather than getting the data to the compute,” says Dr Lalit Gupta, consultant AI scientist-innovation at Aster Digital Health.

“Federated technology provided us a platform through which we can unlock the immense potential data provides to draw better insights into clinical, operational, and business challenges and tap on newer opportunities without the fear of losing control of our data. It will allow data scientists from multiple organisations to perform AI training without sharing raw data. By gaining access to larger data sets, they can develop more accurate AI models. It will also ensure data compliance and governance,” COO Rajaram says.

The building blocks of SFLP

Before deploying the platform, Aster conducted a capability demonstration, or proof of concept, of the platform using hospital data from the Bengaluru and Vijayawada clusters of Aster Hospital.

“The platform comprised a two-node collaboration with machines physically located in Bengaluru and Vijayawada. The director/aggregator was in Bengaluru, and the two envoys/collaborators were distributed between Bengaluru and Vijayawada. The software setup included Ubuntu 20.04.02 with kernel version 5.4.0-65-generic, the OpenFL Python library for collaboration, the PyTorch Python library for developing deep learning models, and an Nvidia Quadro RTX 6000 GPU,” says Gupta.

Dr Lalit Gupta, consultant AI scientist-innovation at Aster Digital Health

“The Aster IT team helped to install and set up the three servers, enabled ports, installed the operating system and necessary drivers, and maintained the servers. The IT team also helped to fetch the data from PACS and HIS, which was required for federated learning experiments,” he says. PACS refers to picture archiving and communication system, a medical imaging technology used to store and transmit electronic images and reports. An HIS or health information system is designed to manage healthcare data.

As part of the capability demonstration, more than 125,000 chest X-ray images, including 18,573 images from more than 30,000 unique patients in Bengaluru, were used to train a CheXNet AI model, developed in Python, to detect abnormalities in X-rays. The additional 18,573 Bengaluru images provided a 3% accuracy boost, thanks to real-world data that was otherwise not available for training the AI model.

The platform can accommodate any analytical tool and places no restrictions on the size of the data. “We shall decide on the size of data based on the use case. In the case of our capability demonstration experiments, we used a chest X-ray image database of around 30GB,” says Rajaram.

It took Aster about eight months, including four months for the capability demonstration, to deploy the system. The platform went live in June 2022. “We are in our early days, with hardware and software deployed at only two hospitals currently. We intend to increase these deployments to multiple hospitals and look forward to other providers joining hands to leverage the ecosystem,” says Rajaram.

Addressing new data security challenges

While federated learning as a methodology is a well-acknowledged approach to addressing data privacy challenges, it also brings additional security risks, as the data and AI model assets become more exposed to possible hacking. Hence, it is essential to provide security capabilities to go with the privacy.

A set of security-related instructions built into the servers’ central processing units provides the required hardware-based memory encryption, isolating specific application code and data in memory. “The platform combines federated learning with security guarantees enabled by its hardware. This helps to protect the data and AI model in storage, when transmitted over the network, and during execution of federated learning training jobs. The security features in the platform provide confidentiality, integrity, and attestation capabilities that prevent stealing or reverse-engineering of the data distribution,” says Rajaram.

“Annotation was already in our PACS system. We used its API for data extraction. Though anonymisation was not required since it was within our network, for the pilot we did anonymise the data from the back end,” he says.


Companies typically face three big problems in managing their skills base: normal learning approaches require too much time to scale up relevant knowledge; hiring for new skills is expensive and also too slow; and the skills new hires bring are rarely shared properly.

Businesses of all types have fought to solve these problems. Some conduct ever more advanced offsite or onsite seminars and training – but these are costly, take time, and don’t adapt fast enough to incoming needs of the business and teams. Online training is often perceived as a hassle and participants can become disengaged. Other companies try to jump-start knowledge by bringing in consultants, but this risks only temporarily plugging the gaps.

The reality is that most of these efforts involve throwing money at only the immediate problem. Few budgets can meet the continuous need for up-to-the-minute learning and training, particularly in fast-evolving tech areas such as programming languages, software development, containerization, and cloud computing.

A fresh approach is needed

A handful of companies have found a solution. They’re adding community-driven learning to their existing training approaches. They recognize the wealth of knowledge held by individuals in their teams, and create an agile, natural process to share this knowledge via hands-on workshops. This is a logical progression from existing efforts to connect staff for social bonding and business collaboration.

In practice, what these companies do is create an open, well-managed community of trainers and trainees from within their staff base. Trainees (any employee) feed into a wish list of the specific skills and areas that they want to learn. Trainers (who are staff members with regular, non-training roles) offer lessons on skills or knowledge that they excel in. It is a system open to everyone, with managers, who understand the incoming strategic requirements of the business, helping to prioritize topics and identify potential trainers.

To succeed in this approach, businesses need good leadership and appropriate time allocation. It starts with the Chief Technology or Chief Information Officer, who must demonstrate the importance the company places on tech innovation by actively enabling employees to spend 10 to 20% of their time learning or training others. Once a learning initiative has begun and is nurtured and adapted, it often grows quickly as staff see others taking part.

The results we’re seeing from community learning at GfK

There have been some powerful results for companies running community-driven learning. At GfK, we provide consumer, market, and brand intelligence, with powerful predictive analytics. Since we began our own community-driven learning initiatives three years ago, we’ve witnessed compelling improvements. Our teams can initiate targeted, in-house training whenever necessary, with zero red tape. This has delivered significant growth in innovation. We’re attracting and retaining top talent, and there are marked improvements in our speed of adaptability.

For example: We swapped initial hackathons for two-day learning events, run five times a year, called “we.innovate”. Our tech teams have full access to these staff-delivered interactive lessons and workshops. The skills covered are shaped by a combination of staff requests and the specific strategic needs of the business. Among the 40 or so topics on the list, we’ve already covered Kubernetes, basic and advanced usage of Git software to track code changes, domain-driven design approaches to software development, cloud computing, cyber security, test-driven development, and much else besides.

Hundreds of our staff have participated in our community learning, and we constantly encourage people to step up as trainers to keep things fresh and relevant. We measure progress by monitoring engagement levels and the average level of expertise per individual.

As we have experienced, this is a self-accelerating process. The scale of participation grows fast, meaning the results quickly become transformative at company level. Innovation is the currency of the future, and we are growing ours by drawing out our employees’ substantial individual expertise and distributing it as widely as possible.

To find out more about our innovation, visit gfk.com/careers
