The technology industry is made up of just 26% women, compared to a nearly equal split of 49% across the total workforce. Most notably, that number has barely budged over the past 30 years, hovering around the same percentage and dipping slightly in recent years.

But the lack of women in tech is a deeper issue that stems all the way back to childhood — with gender stereotypes that have historically, and inaccurately, suggested that women and girls are less skilled at math, science, and technology. Unfortunately, that persistent bias has grown into a self-fulfilling prophecy over the years, creating a systemic issue where girls and women aren’t well represented in STEM, and therefore don’t feel empowered or encouraged to pursue it as a career path.

“For a certain period of time in your life when you’re in middle school, your interest in STEM as a girl or as a non-binary learner can be impacted by the lack of representation that you see. You can have a spark, or you can have an interest, but if you don’t see yourself represented, you will not necessarily start taking those courses for high school, which will in fact impact your ability to participate in post-secondary, which will impact your ability to get a career in STEM later in your life,” says Rebecca Hazell, interim executive director of Hackergal.

To close the gender gap for women in STEM careers, girls need to be encouraged to maintain an interest in STEM during elementary, middle, and high school. And that’s the core of Hackergal’s mission — to create opportunities for young girls to engage with STEM education and to consider STEM careers as a potential option.

Fostering the talent pipeline in middle school

Hackergal works with middle school- and high school-aged girls, as well as nonbinary students, directly through educators and school districts. Learners connect with Hackergal through classrooms, community centers, homeschooling programs, summer school, hackathon events, and coding clubs across Canada. And each year, Hackergal hosts a hackathon at the end of the school year for kids participating in coding clubs.

The hackathons and coding clubs are targeted at grades six to nine, while the Hackergal Ambassador program is a highly competitive program for high schoolers who have aged out of Hackergal’s middle school coding and hackathon programs.

Hackergal uses a “teach the teacher” model, in which Hackergal connects with teachers, school boards, and school districts across Canada to directly train educators and provide them with information on the Hackergal Hub, Hazell says. The Hub enables teachers to bring a full coding curriculum to students that they can easily integrate into their lesson plans.

“Whether [kids learn] during classroom time, or it’s an extracurricular, they create that safe space where the girls can learn, make mistakes, and raise their hand with confidence and feel comfortable with that,” says Hazell, who adds that many educators express nervousness about implementing a coding club, especially when they have no coding experience themselves. But Hackergal’s approach aims to empower any educator to expand their students’ access to coding education, regardless of the teacher’s own programming experience.

Utilizing a platform called Lynx, which originated in Canada and is developed in English and French, Hackergal provides educational programming across the country for students and teachers. The team at Hackergal has been intentional about making its curriculum available to students and teachers in any situation — whether they’re homeschooled or reside in rural areas of Canada, and regardless of language.

“We know that there are certain populations who need our programming a lot and they need the support. They need the community, they need the connection, and the competence-building for their youth. And we’re more than happy to keep growing our program in the interest of serving them better,” says Hazell.

Empowering a future generation of workers

Hackergal’s current generation of learners is highly motivated to have a social impact in their work, says Hazell, adding that this is reflected in each year’s hackathon theme. Last year’s participants, for example, worked around the theme “coding together for our planet,” with a focus on sustainability and environmental issues, such as addressing pollution or developing innovative energy solutions.

“We are very connected to social impact as an organization. It guides everything that we do,” Hazell says. “Research shows that girls specifically are more connected to tech and STEM learning if there is a social impact that’s aligned with that.”

Encouraging students’ passion for social progress is part of Hackergal’s commitment, given that as a generation Hackergal learners will face “some of the biggest problems that this world has ever encountered” and will be among those responsible for finding solutions, Hazell says.

“The people who are using these skills that we’re training them on now are going to have careers that are directly involved in coming up with solutions, and trying to innovate, to make sure our planet is okay,” she adds.

The program also helps its learners establish impressive resumes right out of high school, with some Hackergal students starting up companies by grade 11. That motivation and commitment will help them to become top talent for organizations in the future.

“They’re very motivated in that respect, the teenagers who are in our program, and they have a lot to offer and see the bigger picture. They’re thinking about what they can do and how it will impact the world going forward and what they can do to positively impact the world,” says Hazell.

Getting involved

For companies that want to work with Hackergal, it’s something of a “boutique” experience, says Laurel Maule, development manager at Hackergal. Because the organization doesn’t have a home base for students or a main operations center where companies can donate time or resources, corporate sponsors and donors typically work directly with Hackergal to support the organization’s specific needs.

As more organizations focus on DEI, they’re turning to organizations like Hackergal to help build the talent pipeline as early as possible. For these organizations, it can also be an early branding opportunity, as they can put their company name in front of the future workforce.

Maule says that organizations often reach out to ask how they can help expand the talent pipeline. Beyond financial donations, some volunteer an IT executive to speak at a hackathon or coding event, or to write a blog or record a video that might inspire the young learners. Or they might invite ambassador students to do a specialized coding camp at their offices or offer mentorship and advice to older students who are thinking about their careers.

For the learners, it’s an opportunity to start fostering a network early on. They’ll have experiences with a variety of organizations, professional connections throughout the industry, and unique guidance from technology leaders, all before they graduate high school.

“The sky’s kind of the limit on how CIOs want to be engaged and how companies and employees want to be engaged. Each partnership, organization, or company that we work with brings its own special set of skills. We work closely with them to figure out how we can utilize and build that long-term partnership to support these girls throughout their learning process,” says Maule.

A sustainable support model

By keeping overhead low, and by working with government funding and directly with school districts and boards, Hackergal has been able to maintain a free program that enables students to learn, no matter their circumstances.

“We work in a way that doesn’t draw down too much on our resources and allows us to have that creativity and that programming. And we are interested in growing and learning from what we do and trying to challenge ourselves to be as innovative as the kids need us to be, because that’s what we’re trying to share with them. We want to make sure that we’re providing the kind of programming that challenges, that keeps them excited,” says Hazell.

And those efforts are working, as girls are gaining confidence through the program. According to a survey of the latest hackathon’s participants, 97% said they felt more confident in their coding and digital skills after the hackathon, 96% said they were more interested in writing code, and 100% said they felt more knowledgeable.

“You really can’t get better statistics in that sense, especially from a survey that you put out to kids in that age group. It was fantastic to see that feedback, and I think we’re going to keep trying to meet that high satisfaction rate amongst our learners,” says Hazell.

Future of Hackergal

For the future, Hackergal is working on developing a full mentorship program for learners that will involve “more interactive, longer-term mentorship programs,” to further support students, says Hazell.

The organization also seeks to continue offering the program for free, as many of the students who need this programming the most are the ones who can’t afford it, or who don’t have access to it, making equitable access a key to Hackergal’s mission.

“I don’t think that we could say with conviction that we were serving those who need us most if we were charging for the resources that we are delivering,” says Hazell.

Hackergal is also working to increase sponsorship opportunities. Last year, for the first time, Hackergal launched a scholarship program, awarding two ambassadors who had graduated from the ambassador program a $5,000 scholarship for tuition or other expenses, generously provided by Royal Bank of Canada.

Organizations seeking to build their own talent pipeline through coding and STEM camps are also looking to Hackergal for advice on how to start, and how to continue that support beyond just one or two events.

“I hear from some of our speakers, and they always say without fail, ‘I wish this program had been around when I was younger,’” says Hazell. Even if girls don’t end up in tech careers, the key is “feeling encouraged to try something that’s maybe a little bit scary or challenging,” and finding that motivation to “push through barriers and to keep going, feeling supported by a community,” says Hazell.

“Being able to partner with Hackergal — it’s kind of like you’re doing it for your younger self, especially if you’re a woman in leadership in tech,” Hazell says. “Partnering with Hackergal allows them to fulfill that wish or that deep-seated feeling of wanting to connect with that kid. And seeing that excitement, and some of the photos we have from our experiences, really makes me emotional because you see these kids, and they’re so excited to be a part of that community and that energy is special and it can have a bigger impact.”

Diversity and Inclusion

Facing the possibility of an economic recession, one of the world’s leading professional services companies felt the urgency to improve its grasp on spend management – the practice of fully understanding and managing supplier relations and company purchasing.

With 738,000 employees and $3.8 billion in services contracts, it was crucial for Accenture to not only identify every dollar being spent but also assess whether the organization was fully exploiting each expenditure.

But a sense of frustration pervaded the company, as procurement teams complained about limited visibility into contract terms and challenges tracking statement of work (SOW) agreements, the pacts specifying goals and deadlines expected of external workers.

The ability to generate SOW contracts and effectively manage services spend varied by region, since each relied on different processes and documentation requirements.

Inadequate services spend visibility also increased exposure to local legal and regulatory risks.

Likewise, customers were unsatisfied with a procurement process that was disjointed and inflexible when quick changes were needed.  

Improvements were needed and the deadline was tight.

“Procurement functions require a lot of time and effort working with suppliers to negotiate the best contract with the best terms,” said Patricia Miller, Accenture’s interim Chief Procurement Officer (CPO), “but if we are not able to compare the delivered service against that agreement in a systematic way, how can we assure that the hard-earned negotiated terms were applied?” 

To resolve this quandary, Accenture launched a campaign to build a vigorous, dynamic procurement function to unlock more value by providing extraordinary visibility into services spend.

The global standard at lightning speed

Based in Dublin, Ireland, Accenture specializes in digital, cloud, and security technology strategies, consulting, and operations, serving more than 40 industries in more than 120 countries.

Now, as it conceptualized a new platform to effectively manage services spend, it was forced to change its deployment system.

Previously, deployment planning was laborious, requiring substantial time and investment.  The lengthy process slowed feedback on solution design, as well as delivery times on changes.

Yet, Accenture had a dependable, long-term partner in enterprise resource planning (ERP) software pioneer SAP, first adopting the company’s solutions in 2004.

As it faced its latest challenge, Accenture chose SAP Fieldglass, a vendor management system for services procurement and external workforce organization, to provide reporting and analytics.

In addition to implementing a global standard template – rather than a variety of country-specific prototypes – the solution would be customized to meet local invoicing, legal, and regulatory requirements.

From submission to payment, not only would turnaround time be reduced, but collaboration and communication with suppliers were about to reach unprecedented levels.

Meeting changing markets and business demands

Deployment began in 2020 with the first of many country-by-country rollouts.

Although the typical technology deployment had taken an average of one year per nation, the expedited timeline enabled 14 countries to begin using the solution within 12 months.

A global management team was also formed to support the effort.

Given the importance of the implementation, constant feedback was needed, and the enhanced technology amplified the level of dialogue, streamlining both testing activities and the ability to deliver required changes.

Today, Accenture’s procurement arm is better equipped to meet changing market and business demands than ever before.

For the first time, Accenture has a heightened understanding of the “hidden” workforce associated with its service business. Since external workers may not always fit traditional profiles, users can mine contract information to link specific workers to their individual skill sets.

Explained Jane M. Kennedy, Global External Management Director for Accenture, “Today, we have much-improved…visibility for management [due to] an online solution that aligns to each worker’s type of engagement.”

Suppliers noted the ease of transitioning to SAP Fieldglass, and the pace at which entire companies were able to adopt the platform.

Currently, 2,000 suppliers have implemented the system, while $1 billion in services spend is managed through the function each year, resulting in a more transparent supply chain and significant cost savings.

That includes reduced fees for document storage in regions where procurement practices were primarily paper-based.

Users report 99% greater accuracy, as well as 7% error reduction per 10,000 SOWs.

For creating a global standard procurement process through its development of a novel solution, Accenture was distinguished as a finalist at the 2023 SAP Innovation Awards, a yearly ceremony honoring organizations using SAP technologies to improve business and society. You can read their pitch deck to see what they accomplished to earn this honor.

Digital Transformation

SAP has appointed a new global head of artificial intelligence, Walter Sun, after the previous post-holder quit to found her own AI startup.

For the past 18 years, Sun worked at Microsoft, most recently as VP of AI for its business and applications platform group. Sun has a PhD from MIT and continued to publish academic research papers during his time at Microsoft, in addition to teaching at Seattle University and the University of Washington.

As part of Microsoft’s development team, Sun created Bing Predicts, the inference engine that provides the “favored to win” forecasts beneath search results for sporting fixtures and attempted to predict the 2016 US presidential election winner. (Spoiler alert: it failed.)

More usefully for enterprises, he also helped develop Dynamics 365 AI for Market Insights, a feature for Microsoft’s ERP and CRM platform that scans search data to provide enterprises with information about emerging trends in social interest and sentiment around their brands. Most recently, he was involved in the introduction of Dynamics 365 Copilot, which draws on OpenAI’s GPT-4 generative AI model to, among other things, help marketers write engaging sales pitches. In a recent blog post, Sun described how Microsoft researchers conducted experiments to compare the performance of different AI models for use in Dynamics 365. His colleagues also studied how to write the most effective prompts for soliciting useful responses from generative AI systems.

Sun replaces Feiyu Xu as SAP’s global head of AI. She joined the company in 2020, after a three-year stint in a similar role at Lenovo. Prior to that, she had worked for two decades at the German Research Center for Artificial Intelligence, DFKI.

In the three years Xu led SAP’s AI initiatives, the company introduced AI technologies to many of its products, including tools for supply chain planning, expense management, customer experience, and online commerce. In May 2023, around the time Xu announced her intention to leave the company, SAP said it would embed IBM’s Watson AI technology into its products.

SAP’s AI product team

The entire AI unit that previously reported to Xu will now report to Sun, an SAP representative said. Sun’s team will include two VPs of AI technology, Sebastian Wieczorek and Ulf Brackmann; a CTO, Johannes Hoffart, and a global AI product manager, Nadine Hoffmann.

Sun will report directly to Philipp Herzig, SAP’s head of cross-product engineering and experience, who reports to SAP’s executive board member for product engineering, Thomas Saueressig.

SAP couldn’t say whether Sun will have a seat on the company’s AI Ethics Steering Committee as his predecessor, Xu, did. For now, the only representative of the AI team on the committee is Wieczorek, the VP of AI technology. The other eight committee members hold senior posts with responsibility for marketing, data protection, government affairs, legal, diversity, customer data, quality, and sustainability.

As for Xu, after leaving SAP, she co-founded Nyonic, a Berlin-based startup that aims to build industry-focused, multilingual AI models that meet European ethical and legal standards. Xu is Nyonic’s chief innovation officer, and her co-founders include serial AI entrepreneur Han Dong as CEO in Shanghai, NLP expert Johannes Otterbach as CTO, computational linguist Hans Uszkoreit as chief science officer, and Vanessa Cann, a board member of the German AI Association, as CEO for Europe. The company is hiring engineers in Berlin and Shanghai.

Enterprise Applications, SAP

Until recently, software-defined networking (SDN) technologies have been limited to use in data centers — not manufacturing floors.

But as part of Intel’s expansive plans to upgrade and build a new generation of chip factories in line with its Integrated Device Manufacturing (IDM) 2.0 blueprint, unveiled in 2021, the Santa Clara, Calif.-based semiconductor giant opted to implement SDN within its chip-making facilities for the scalability, availability, and security benefits it delivers.

“Our concept was to use data center technologies and bring them to the manufacturing floor,” says Rob Colby, project lead. “We’ve had to swap the [networking infrastructure] that exists, which is classic Ethernet, and put in SDN. I’ve upgraded a whole factory from one code version to another code version without downtime for factory tools.”

Aside from zero downtime, moving to Cisco’s Application Centric Infrastructure (ACI) enabled Intel to solve the increasingly complex security challenges associated with new forms of connectivity, ongoing threats, and software vulnerabilities. The two companies met for more than a year to plan and implement, for Intel’s manufacturing process, security and automation technology that had previously been used only in data centers.

“This is revolutionary for us in the manufacturing space,” Colby says, noting that the cost savings from uninterrupted production, without ever taking the factory offline, are a major financial benefit that keeps on giving.

That ability to upgrade the networking infrastructure without downtime applies to downloading security patches and integrating tools into the production environment alike, Colby adds.  

“Picture a tool being the size of a house. One of our most recent tools is a $100 million tool, and landing a tool of that size involves a lot of complexity, after which I have to connect it so it can communicate with other systems within our infrastructure,” Colby says. “[Having SDN in place] makes landing tools faster and the quality increases. We’re also able to protect it at the level we need to be protecting it without missing something in the policy.”

Bringing SDN to the factory floor

The project, which earned Intel a 2023 US CIO 100 Award for IT innovation and leadership, has also enabled the chipmaker to perform network deployments faster with 85% less headcount.

Colby says it took a couple of years for the partners to build the blueprint and begin rolling out the solution to existing factories, including rigorous offline testing before beginning.

The migration required no retraining of chip designers in the clean room but some training for those in the manufacturing facilities. “We really went above and beyond to make it as seamless as possible for them,” Colby says. “We’ve recently been testing being able to migrate them over to ACI on the factory floor without any downtime. That will accelerate our migration for the rest of the factory floor.”

The collaboration with Cisco enables ACI to be deployed for factory floor process tools, embedded controllers, and new technologies such as IoT devices being introduced into the factory environment, according to Intel.

It was “clear that we needed to move to an infrastructure that better supported automation, offered more flexible and dynamic security capabilities, and could reduce the overall impact when planned or unplanned changes occur,” Intel wrote in a white paper about its switch to SDN. “The network industry has been trending toward SDN over the last decade, and Intel Manufacturing has been deploying Cisco Application Centric Infrastructure (ACI) in factory on-premises data centers since 2018, gaining experience in the systems and allowing for more market maturity.”

Moving ACI to the manufacturing factories was the next step, and Colby cited Sanjay Krishen and Joe Sartini, both Intel regional managers, as instrumental in bringing SDN to Intel’s manufacturing floor.

The broad view of SDN in manufacturing

There are thousands of semiconductor companies globally, with manufacturing concentrated largely in Taiwan. Yet the US government’s CHIPS and Science Act of 2022 has incentivized more semiconductor manufacturing on US soil, and it is taking root.

“The use of cellular and WiFi connectivity on the factory floor has enabled these manufacturers to gain improved visibility, performance, output, and even maintenance,” says IDC analyst Paul Hughes.

“For any industry, software-defined networking brings additional scale and on-demand connectivity to what are now connected machines (industrial IoT),” Hughes says, adding that this also provides improved access to the cloud for data management, storage, analytics, and decision-making. “SDN allows networks to scale up securely when manufacturing activity scales and ensures that all the data generated by and used by machines and tools on the factory floor can move quickly across the network.”

As more semiconductor manufacturing springs up in the US, the use of SDN also “becomes one of the key steps in digital transformation where, in this case, a semiconductor manufacturer can collect, manage, and use data holistically from the factory floor to beyond the network edge,” says Hughes, whose most recent survey, IDC’s 2023 Future of Connectedness Sentiment, shows that 41% of manufacturers believe that the flexibility to add/change bandwidth capacity in near real-time is a top reason for SDN/SD-WAN investment.

The survey also showed that 31% of manufacturers say optimized WAN traffic for latency, jitter, and packet loss is another top reason for SDN/SD-WAN investment and is considered very important for managing factory floor equipment in real-time.

Intel has deployed SDN in roughly 15% of its factories to date and will continue to migrate existing Ethernet-based factories to SDN. For new implementations, Intel has chosen to use open source Ansible playbooks and scripts from GitHub to accelerate its move to SDN.
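To give a flavor of the kind of template-driven automation such playbooks provide, here is a minimal Python sketch. Everything in it is illustrative: the device names, VLAN numbers, and configuration template are hypothetical, not Intel's actual playbooks or switch configs.

```python
# Hypothetical sketch of template-driven switch configuration, the kind of
# repeatable rollout that Ansible playbooks automate. Device names, VLAN
# numbers, and the template itself are invented for illustration.

CONFIG_TEMPLATE = """hostname {hostname}
vlan {vlan}
  name factory-tools
interface {uplink}
  switchport access vlan {vlan}
"""

def render_config(hostname: str, vlan: int, uplink: str) -> str:
    """Fill the shared template for one switch."""
    return CONFIG_TEMPLATE.format(hostname=hostname, vlan=vlan, uplink=uplink)

# Applying one template across a fleet yields consistent, reviewable configs,
# rather than hand-edited, region-specific variants.
fleet = [("leaf-01", 110, "Ethernet1/1"), ("leaf-02", 110, "Ethernet1/2")]
configs = {name: render_config(name, vlan, port) for name, vlan, port in fleet}
```

The consistency is the point: when every factory is built from the same template, a rollout becomes a data change rather than a manual engineering effort.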

Intel certified Cisco’s ACI solution in time to deploy in high-volume factories built in Ireland and the US in 2022 and for more planned in Arizona, Ohio, New Mexico, Israel, Malaysia, Italy, and Germany in the coming years, according to the company.

Intel’s core partner on the SDN project is confident the benefits will continue to be sizable, even for a company of Intel’s size.

“The biggest benefit is that SDN helped Intel complete new factory network builds with 85% less headcount and weeks faster through the use of automated scripts,” says Carlos Rojas, a sales and business developer who worked on the project. “Automation and SDN enable better scalability and consistency of security and policy controls, and the ability to deploy micro-segmentation, improving Intel’s security posture and reducing attack surfaces.”

CIO 100, Manufacturing Industry, Networking, SDN

Nvidia’s transformation from an accelerator of video games to an enabler of artificial intelligence (AI) and the industrial metaverse didn’t happen overnight — but the leap in its stock market value to over a trillion dollars did.

It was when Nvidia reported strong results for the three months to April 30, 2023, and forecast its sales could jump by 50% in the following fiscal quarter, that its stock market valuation soared, catapulting it into the exclusive trillion-dollar club alongside well-known tech giants Alphabet, Amazon, Apple, and Microsoft. The once-niche chipmaker, now a Wall Street darling, was becoming a household name.

Investor exuberance waned later that week, however, dropping the chip designer out of the trillion-dollar club in short order, just as former members Meta and Tesla had been before it. But it was soon back in, and in mid-June, investment bank Morgan Stanley forecast Nvidia’s value could rise another 15% before the year was out.

By late August, Nvidia had more than justified its earlier optimism, reporting a quarter-on-quarter increase in revenue of 88% for the three months to July 30, driven by record sales of data center products of over $10 billion, with strong demand from AWS, Google, Meta, Microsoft, and Oracle. Its stock price, too, continued to climb, bumping up against the $500 level Morgan Stanley forecast.

Unlike most of its trillion-dollar tech cohorts, Nvidia has less consumer brand awareness to go on, making its Wall Street leap more mysterious to Main Street. How Nvidia got here and where it’s going next sheds light on how the company has achieved that valuation — a story that owes a lot to the rising importance of specialty chips in business, and accelerating interest in the promise of generative AI.

Graphics driver

Nvidia started out in 1993 as a fabless semiconductor firm designing graphics accelerator chips for PCs. Its founders spotted that generating 3D graphics in video games — then a fast-growing market — placed highly repetitive, math-intensive demands on PC central processing units (CPUs). They realized those calculations could be performed more rapidly in parallel by a dedicated chip rather than in series by the CPU, an insight that led to the creation of the first Nvidia GeForce graphic cards.

For many years, graphics drove Nvidia’s business; even 30 years on, its sales of graphics cards for gaming, including the GeForce line, still make it the biggest vendor of discrete graphics cards in the world. (Intel makes more graphics chips, though, because most of its CPUs ship with the company’s own integrated graphics silicon.)

Over the years, other uses for the parallel-processing capabilities of Nvidia’s graphical processing units (GPUs) emerged, solving problems with a similar matrix arithmetic structure to 3D-graphics modelling.

Still, software developers seeking to leverage graphics chips for non-graphical applications had to wrangle their calculations into a form that could be sent to the GPU as a series of instructions for either Microsoft’s DirectX graphics API or the open-source OpenGL (Open Graphics Library).

Then in 2006 Nvidia introduced a new GPU architecture, CUDA, that could be programmed directly in C to accelerate mathematical processing, simplifying its use in parallel computing. One of the first applications for CUDA was in oil and gas exploration, processing the mountains of data from geological surveys.
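The programming shift CUDA made possible, expressing a computation as one data-parallel operation rather than a serial loop, can be illustrated in Python with NumPy. This is an illustrative sketch of the pattern, not CUDA code:

```python
import numpy as np

def scale_serial(data, factor):
    """Element-by-element loop: the serial, CPU-style approach."""
    out = np.empty_like(data)
    for i in range(len(data)):
        out[i] = data[i] * factor
    return out

def scale_parallel(data, factor):
    """One whole-array operation: the same work expressed as a single
    data-parallel step, the pattern a GPU kernel runs across thousands
    of threads at once."""
    return data * factor

# e.g., scaling a large batch of survey samples
samples = np.arange(100_000, dtype=np.float64)
assert np.array_equal(scale_serial(samples, 2.5), scale_parallel(samples, 2.5))
```

Both functions compute the same result; the second form is what maps naturally onto thousands of GPU threads, which is why workloads like seismic-survey processing saw such large speedups.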

The market for using GPUs as general-purpose processors (GPGPUs) really opened up in 2009, when OpenGL publisher Khronos Group released Open Computing Language (OpenCL).

Soon, hyperscalers such as AWS added GPUs to some of their compute instances, making scalable GPGPU capacity available on demand, thereby lowering the barrier of entry to compute-intensive workloads for enterprises everywhere.

AI, crypto mining, and the metaverse

One of the biggest drivers of demand for Nvidia’s chips in recent years has been AI, or, more specifically, the need to perform trillions of repetitive calculations to train machine learning (ML) models. Some of those models are truly gargantuan: OpenAI’s GPT-4 is said to have over 1 trillion parameters. Nvidia was an early supporter of OpenAI, even building a special compute module based on its H100 processors to accelerate the training of the large language models (LLMs) the company was developing.

Another unexpected source of demand for the company’s chips has been cryptocurrency mining, the calculations for which can be performed faster and in a more energy-efficient manner on a GPU than on a CPU. Demand for GPUs for cryptocurrency mining meant that graphics cards were in short supply for years, making GPU manufacturers like Nvidia similar to pick-axe retailers during the California Gold Rush.

Although Nvidia’s first chips were used to enhance 3D gaming, the manufacturing industry is also interested in 3D simulations, and its pockets are deeper. Going beyond the basic rendering and accelerating code libraries of OpenGL and OpenCL, Nvidia has developed a software platform called Omniverse — a metaverse for industry used to create and view digital twins of products or even entire production lines in real-time. The resulting imagery can be used for marketing or collaborating on new designs and manufacturing processes.

Efforts to stay in the $1t club

Nvidia is driving forward on many fronts. On the hardware side, it continues to sell GPUs for PCs and some gaming consoles; supplies computational accelerators to server manufacturers, hyperscalers, and supercomputer manufacturers; and makes chips for self-driving cars. It’s also in the service business, operating its own cloud infrastructure for pharmaceutical firms, the manufacturing industry, and others. And it’s a software vendor, developing generic libraries of code that anyone can use to accelerate calculations on Nvidia hardware, as well as more specific tools such as its cuLitho package to optimize the lithography stage in semiconductor manufacturing.

But interest in the latest AI tools such as ChatGPT (developed on Nvidia hardware), among others, is driving a new wave of demand for Nvidia hardware, and prompting the company to develop new software to help enterprises develop and train the LLMs on which generative AI is based.

In the last few months the company has also partnered with software vendors including Adobe, Snowflake, ServiceNow, Hugging Face, and VMware, to ensure the AI elements of their enterprise software are optimized for its chips.

“Because of our scale and velocity, we’re able to sustain this really complex stack of software and hardware, networking and compute across all these different usage models and computing environments,” CEO Jensen Huang said during a call on August 23 to discuss the latest earnings.

Nvidia is also pitching AI Foundations, its cloud-based generative AI service, as a one-stop shop for enterprises that might lack resources to build, tune, and run custom LLMs trained on their own data to perform tasks specific to their industry. The move, announced in March, may be a savvy one, given rising business interest in generative AI, and it pits the company in direct competition with hyperscalers that also rely on Nvidia’s chips.

Nvidia AI Foundations models include NeMo, a cloud-native framework for building and customizing large language models; Picasso, which generates images, video, and 3D applications; and BioNeMo, which works with molecular structures. That makes generative AI particularly interesting for drug development, where bringing a new drug to market can take up to 15 years; Nvidia says its hardware, software, and services can cut early-stage drug discovery from months to weeks. Amgen and AstraZeneca are among the pharmaceutical firms testing the waters, and with US pharmaceutical firms alone spending over $100 billion a year on R&D, more than three times Nvidia’s revenue, the potential upside is clear.

Pharmaceutical development is moving faster, but the road to widespread adoption in another of Nvidia’s target markets is less clear: self-driving cars have been “just around the corner” for years, and testing and approving them for use on the open road is proving even more complex than getting approval for a new drug.

Nvidia gets two bites at this market. One is building and running the virtual worlds in which self-driving algorithms are tested without putting anyone at risk. The other is the cars themselves. If the algorithms make it out of the virtual world and onto the roads, cars will need chips from Nvidia and others to process real-time imagery and perform myriad calculations needed to keep them on course. This is the smallest market segment Nvidia breaks out in its quarterly results: just $253 million, or 2% of overall sales, in the three months to July 30, 2023. But it’s a segment that’s been more than doubling each year.

When it reported its results for the three months to April 30, Nvidia made an ambitious forecast: that its revenue for the following fiscal quarter, ending July 30, would be over 50% higher. It went on to beat that figure by a wide margin, reporting revenue of $13.5 billion. Gaming hardware sales were also up 22% year on year and 11% quarter on quarter, which would be impressive for most consumer electronics companies but lags far behind the recent growth in Nvidia’s biggest market: data centers. The proportion of its overall revenue coming from gaming has shrunk from over one-third in the three months to April 30 to just under one-fifth in the period to July 30. Nevertheless, Nvidia still sees opportunity ahead, as less than half of its installed base has upgraded to graphics cards with the GeForce RTX technology it introduced in 2018, CFO Colette Kress said during the call.

Huang and Kress both talked up how clearly Nvidia can see future demand for its consumer and data center products, well into next year.

“The world is transitioning from general-purpose computing to accelerated computing,” Huang said. With around $250 billion in capital expenditure on data centers every year, according to Huang, the potential market for Nvidia is enormous as that transition plays out.

“Demand is tremendous,” he said, adding that the company is significantly expanding its production capacity to boost supply for the rest of this year and into next.

Nevertheless, Kress was more reserved in her projections for the three months to October 30, saying she expects revenue of between $15.7 billion and $16.3 billion, or quarter-on-quarter growth between 16% and 21%.
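Those growth percentages follow directly from the guidance range measured against the $13.5 billion just reported, as a quick check shows:

```python
# Sanity-check the quarter-on-quarter growth implied by Nvidia's
# guidance range against the $13.5B just reported (all figures in $B).
reported = 13.5
guidance_low, guidance_high = 15.7, 16.3

growth_low = (guidance_low / reported - 1) * 100
growth_high = (guidance_high / reported - 1) * 100
print(f"{growth_low:.0f}% to {growth_high:.0f}%")  # 16% to 21%
```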

All eyes will be on the company’s next earnings announcement, on November 21.

Artificial Intelligence, C Language, Cryptocurrency, GPUs, Manufacturing Systems, Nvidia, Software Deployment, Software Development

Many companies today are rapidly adopting new technologies and tools to improve overall efficiency, enhance customer and client experiences, and support key business transformation initiatives. However, these efforts, while necessary, bring growing pains for the workforce.

As our global technologies transform, so must our teams. What we have discovered in implementing emerging technology at U.S. Bank over the years is that effectively deploying and making use of new tools requires a skilled and diverse workforce and a technology team with a strong engineering culture to support it.

Banking on technology and people

The largest technology investment for U.S. Bank came in 2022 when we announced Microsoft Azure as our primary cloud service provider. This move accelerated our ongoing technology transformation, part of which includes migrating more than two-thirds of our application footprint to the cloud by 2025. Harnessing the power of cloud is just one of many ways that technology is enabling our organization to bring products and services to our clients faster, while enhancing our operations’ scalability, resiliency, stability, and security.

The technology transformation at U.S. Bank is also focused on adopting a more holistic approach to both external and internal talent pipelines. Diversity is a key component of our team building because true innovation and problem-solving come from people with different perspectives. To attract new, diverse talent to our team, we supplement traditional recruitment methods with proactive techniques that build our company’s reputation as a leader in technology and give back to our community.

For example, we’re positioning some of our top subject matter experts at relevant conferences and councils to share lessons learned from our transformation journey and we’re engaging with educational programs, like Girls Who Code, Summit Academy, and Minneapolis Community and Technical College to both develop and recruit diverse talent.

Our top workforce priority, however, is retaining our current team and equipping them with the skills they’ll need today and in the future. Because technology changes so quickly, we have adopted a continuous learning mindset where our teams embed learning into their everyday responsibilities and see it as an investment in themselves. To do that, we created a strategy that focuses on four key areas: an employee’s time, a personal plan, effective learning tools, and ways to apply what is learned.

1. Time: Establishing a flexible learning environment

We created an environment and performance goals that encourage our technology teams to regularly dedicate time to continuous learning. Each member of my leadership team operates a different type of technology team with different priorities, work schedules, and deadlines, so they are empowered to decide how to create the time and space for their employees to achieve their learning goals. Some have opted to block all employees’ calendars during certain times of the month, and others leave it to their individual manager-employee relationships to determine what works best. We’ve found that, by empowering each team to make these decisions, our teammates are more likely to complete their learning goals.

2. Plan: Growing skillsets and knowledge

Just investing the time doesn’t necessarily mean our teams will develop the right skills. So, we created a program we call “Grow Your Knowledge,” where managers and employees have ongoing skills-related discussions to better understand employees’ current skills, skill interests, and potential skill gaps. This helps them collaboratively create a personalized development plan. We’re also able to use the information to help us measure impact and provide insights on new trainings we may need to meet a common skill gap.

3. Tools: Learning paths and programs

We assembled a cross-functional team of external consultants, HR representatives, learning and development experts, and technical professionals to develop the Tech Academy — a well-curated, one-stop shop for modern tech learning at U.S. Bank. This resource is designed to help our teams learn the specific technical, functional, leadership, and power skills needed to drive current initiatives. Employees can take advantage of persona-aligned learning paths, targeted skill development programs, and experiential learning. We even developed a Modern Technology Leadership Development Program for managers to help them better understand how to support their teams through this transformation.

4. Application: Putting experiential learning into practice

Providing experiential opportunities where employees can further build their skills by practicing them is an essential part of our strategy. Right now, we offer programs such as certification festivals, hackathons, code-a-thons, bootcamps, and other communities of practice for our teammates to hone their newly acquired skills in psychologically and technologically safe, yet productive settings.

Our certification festival, called CERT-FEST, is our most successful experiential learning program so far. We leverage our own teammates to train others in a cohort-learning environment for eight weeks. To date, our employees have obtained several thousand Microsoft Azure certifications. Hackathons and code-a-thons take that certification to the next level by allowing our technology teammates to partner with the business in a friendly, competitive environment. The winning teams at this event build solutions for new products or services that meet a real business or client need.

Learn today for the needs of tomorrow

Since we’ve started this continuous learning journey with our teams, we’re seeing higher employee engagement, an increase in our team’s reported skills and certifications, and a stronger technology-to-business connection across U.S. Bank. These efforts have also shifted our employee culture to acknowledge that working in technology means you will always be learning and growing.

Finding new, more effective ways to address the ever-shifting needs of our customers will always be a priority. But in a continuous learning environment the question we now always ask is, “What do I need to know today, to learn today, to do my job better tomorrow?” This has been the driving force behind our success in growing, retaining, and motivating our technology workforce.

Financial Services Industry, IT Training 

Developers are hired for their coding skills, but often spend too much time on information-finding, setup tasks, and manual processes. To combat wasted time and effort, Discover® Financial Services championed a few initiatives to help developers get back to what they do best: developing. The result? More than 100,000 hours of developer toil have been automated or eliminated.

“A happy developer is one who’s writing code,” said Joe Mills, Director of Transformation Strategy and Automation at Discover. “So, we strive to create an inspiring culture and an exciting place to build your career. We want it to be easy to deliver value with the skillsets you have and to harness opportunities to refine your craft.”

Streamlining development through tools, knowledge, community

DevWorx is a program that simplifies the developer experience, streamlines work, and frees up time to innovate. Specifically, DevWorx is an online hub where developers across Discover can access prescriptive guidance for repetitive setup or deployment tasks, developer environments, self-service or automation tools, and a community of other developers to collaborate with.

“It’s basically a developer-driven community where we remove barriers to getting work done, focus on efficiency, and really enjoy coding as opposed to it feeling like a slog,” said Jonathan Stoyko, senior manager of strategic projects.

Developers can use DevWorx to standardize duplicate processes and reduce manual tasks. “If there’s a code structure that has to be reused every time you’re creating an application, that structure can be standardized as a template,” said Stoyko. “And we can store it in a common location so everybody has access to it and can contribute to it.”
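The template idea Stoyko describes can be sketched with Python’s standard `string.Template`. The fields, file contents, and service names below are hypothetical illustrations, not Discover’s actual scaffolding:

```python
from string import Template

# Hypothetical shared scaffold: a boilerplate structure every new
# application reuses, stored once and filled in per project.
SERVICE_TEMPLATE = Template("""\
# ${service_name} - generated from the shared team template
app:
  name: ${service_name}
  owner: ${team}
  health_check: /health
""")

def scaffold(service_name: str, team: str) -> str:
    """Render the shared template for a new application."""
    return SERVICE_TEMPLATE.substitute(service_name=service_name, team=team)

print(scaffold("payments-api", "core-payments"))
```

Storing one parameterized template in a common location, rather than copying boilerplate between repositories, is the pattern the quote describes: everyone consumes the same structure and can contribute improvements back to it.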

Increasing productivity with step-by-step tutorials

Golden Paths are a key element of DevWorx, providing step-by-step tutorials for accomplishing specific development tasks within Discover. From making submissions to gathering approvals and filling out prerequisite forms, Golden Paths cover the entire production lifecycle.

“If someone gets dropped into a new team, they can start coding within minutes and skip months of playing catchup,” said Andrew Duckett, senior principal application engineer and architect. “With Golden Paths, these processes are all codified and readily accessible.”

Developers are encouraged to contribute to existing paths and build new ones based on their own experiences.

Duckett continues: “We believe that it’s better to let the engineering community determine what works best for them, not to put a bunch of people in an ivory tower and dictate what is right. These developers are hired to innovate and solve problems, so we let them do that.”

Reducing manual tasks through automation

Automating manual tasks and repetitive processes is crucial for increasing developer efficiency. “Employing automation for tasks that many engineers face throughout their SDLC helps to shift focus towards human value-add activities. This also increases overall delivery throughput, with higher confidence in our development lifecycle, and produces consistent processes across teams that would otherwise be handled one-off and uniquely,” said Mills.

Developers can engage a team of automation experts to assess certain processes and tasks and help uncover automation opportunities. The team uses a hub-and-spoke model to scale their efforts across development teams at Discover and can help teams with robotic process automation, business automation, or code automation.

Reducing friction through consistent development practices

In addition to these initiatives, engineers at Discover adhere to a set of practices, internally called CraftWorx, that define and direct the agile development process. Aligning engineers on these practices reduces friction because every team follows the same process.

“If you’re trying to solve a problem and you think, ‘where’s the answer?’ CraftWorx aims to be that answer,” said Colin Petford, director of technology capability enablement at Discover. “It’s also constantly evolving along with our craft. It will never be finished because technology doesn’t sit still.”

Learn how Discover developers are using automation, Golden Paths, CraftWorx, and more.

Digital Transformation

Enterprise resource planning (ERP) software vendor IFS has agreed to acquire Falkonry, the developer of an AI-based time-series data analytics tool, to boost its enterprise asset management (EAM) services portfolio.

IFS has an eye on the growing number of connected machines in factories, and will add Falkonry’s self-learning Time Series AI Suite, which can help enterprises manage and maintain manufacturing equipment, to its existing enterprise simulation and AI-based scheduling and optimization capabilities.  

EAM can be considered a subset of ERP software, providing tools and applications to manage the lifecycle of physical assets in an enterprise, in order to maximize their value. The global EAM market is expected to grow at a compound annual growth rate (CAGR) of 8.7% to reach $5.5 billion by 2026, from $3.3 billion in 2020, according to research from MarketsandMarkets.
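As a quick sanity check, compounding the 2020 base at the cited rate lands close to the 2026 projection:

```python
# Verify the EAM market projection: $3.3B in 2020 growing at an
# 8.7% CAGR for six years should land near the cited $5.5B for 2026.
base_2020 = 3.3      # $B
cagr = 0.087
years = 6
projected_2026 = base_2020 * (1 + cagr) ** years
# Prints about $5.4B; close to the cited $5.5B, since the source's
# base and rate are themselves rounded figures.
print(f"${projected_2026:.1f}B")
```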

Cupertino-headquartered Falkonry, which was founded in 2012 by CEO Nikunj Mehta, has customers across North America, South America, and Europe, including the US Navy and Air Force, Ternium, North American Stainless, and Harbour Energy, among others. It has raised $13.3 million in funding from investors including Zetta Venture Partners, SparkLabs Accelerator, Polaris Partners, Presidio Ventures, Basis Set Ventures, Fortive, and Next47. IFS expects to complete the acquisition of Falkonry by the fourth quarter of 2023. In June, it announced the acquisition of Poka — a connected worker software provider — to boost overall factory productivity. And last year it scooped up Netherlands-based Ultimo to help meet demand for cloud-based enterprise asset management technology.

Asset Management Software, Enterprise Applications, Mergers and Acquisitions

When organizations began to fully embrace both the work-from-anywhere (WFA) user model and multi-cloud strategies, IT leadership quickly realized that traditional networks lack the flexibility needed to support modern digital transformation initiatives. 

Legacy network shortcomings led to the rapid growth of software-defined wide area networking (SD-WAN). This next-generation technology enables a more agile network and provides high-performance access to cloud applications for users on-premises and off-premises. It eliminates the need for backhauling—routing remote traffic through the data center before accessing the internet—enabling direct access to critical cloud services. The benefits are many and include:

Better application performance: SD-WAN can prioritize business-critical traffic and real-time services like Voice over Internet Protocol (VoIP), steering it over the most efficient route and incorporating advanced WAN remediation. Having several options for moving traffic helps reduce packet loss and latency from heavy traffic. 

Increased agility: Organizations can easily scale to whatever sites and traffic levels they experience, eliminating the need to plan network upgrades months or years in advance.

Cost savings and ROI: SD-WAN routes traffic efficiently over multiple channels—including existing MPLS circuits and the public internet via LTE, broadband, and 5G. This maximizes the usage of available WAN capacity and eliminates the need for new MPLS bandwidth, cutting costs. It also optimizes operations and reduces network outages, freeing IT staff for higher-value work.
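The application-aware steering described above amounts to a per-class path choice: latency-sensitive traffic prefers the lowest-latency healthy link, while bulk traffic takes the cheapest one. The link names, metrics, and thresholds below are illustrative, not any vendor’s actual algorithm:

```python
# Illustrative SD-WAN path selection: pick a link per traffic class
# based on measured link health. Links and thresholds are hypothetical.
LINKS = [
    {"name": "mpls",      "latency_ms": 18, "loss_pct": 0.1, "cost": 3},
    {"name": "broadband", "latency_ms": 35, "loss_pct": 0.5, "cost": 1},
    {"name": "lte",       "latency_ms": 60, "loss_pct": 1.2, "cost": 2},
]

def pick_link(traffic_class: str, max_loss_pct: float = 1.0) -> str:
    """Route VoIP over the lowest-latency usable link; bulk over the cheapest."""
    usable = [l for l in LINKS if l["loss_pct"] <= max_loss_pct]
    key = "latency_ms" if traffic_class == "voip" else "cost"
    return min(usable, key=lambda l: l[key])["name"]

print(pick_link("voip"))  # mpls: lowest latency among healthy links
print(pick_link("bulk"))  # broadband: cheapest healthy link
```

Real SD-WAN controllers apply this kind of policy continuously, re-measuring loss and latency and re-steering flows as link conditions change.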

Common SD-WAN misconceptions
Like all technologies, SD-WAN has evolved since it was introduced several years ago. Though it has been around for some time, many in the industry have not embraced it and don’t fully appreciate its capabilities because of some misconceptions. Below are four common myths about SD-WAN:

SD-WAN causes latency: This is not true. Secure SD-WAN eliminates the need to backhaul traffic for security checks, because inspection can happen closer to the remote location where the traffic originates. This reduces latency.

SD-WAN = uCPE + SSE: Universal customer premises equipment (uCPE) that is stitched together with different virtual machines and security service edge (SSE) components that come from different vendors creates complexity. An integrated solution like SD-WAN from a single vendor is much more efficient. They are not equally effective solutions.

Networking and security solutions should be procured separately: Secure SD-WAN isn’t only about networking. A vendor with strong capabilities and deep experience in both networking and security is going to provide IT organizations with better solutions. 

SASE will replace SD-WAN: This won’t happen because SD-WAN is a foundational element of SASE, a comprehensive solution that securely connects the hybrid workforce. The right SD-WAN needs to be in place for a smooth, seamless transition to SASE that meets the organization’s needs.

The future of SD-WAN
With the rise of distributed applications and the continued increase in network traffic, SD-WAN will continue to grow in importance as a critical part of corporate network infrastructure. In fact, it’s becoming foundational to the distributed, hybrid network because it gives IT teams the visibility to anticipate network issues, automate remediation, enforce consistent security policies, and centrally control access to applications and resources. 

It is also a critical step on the journey to SD-Branch, software-defined secure networking for branch environments, and SASE. Because it plays such an important role in these technologies, SD-WAN needs to be fully autonomous to provide scalable and resilient architecture.

Additionally, the role of AI and automation will continue to grow, especially within Secure SD-WAN solutions that leverage these tools to detect, contain, and respond to emerging threats. Finally, ensuring optimal user experience should be top of mind for CIOs because if users run into challenges accessing resources they need to work, they may find workarounds that will expose the network to threats. 

Embrace SD-WAN for more nimble infrastructure
SD-WAN is an excellent network security solution that has not been fully embraced by all because of some lingering misconceptions. It’s past time for IT leadership to bust the myths and move forward. The future demands that organizations have more agile and secure networks and provide their WFA users with seamless access to data and applications on the cloud.

Fortinet’s Secure SD-WAN solution provides Secure SD-WAN on-premises and in the cloud while offering FortiGuard AI-Powered Security Services to protect sensitive traffic. 


Pentagon Federal Credit Union (PenFed), the second-largest credit union in the US, is looking to generative AI to transform how it interacts with its customers. Its vision? To create a new, cost-effective channel that helps meet members’ needs — and learns as it does so, to the benefit of members and the credit union itself.

“What’s happened in our business over the years is every channel is expensive and it doesn’t ever replace another channel. It’s just additive,” says Joseph Thomas, PenFed EVP and CIO, who notes that today 80% of PenFed’s interactions are digital, 15% are via call center, and 5% still rely on physical branches. “But we realized that with AI, we could add another channel of engagement but very cost effectively. We could add chat with a bot-enabled interaction to solve the early, simpler questions.”

Even with more than 2.9 million members, as a credit union PenFed doesn’t have the resources of a traditional bank. It doesn’t have an innovation lab or center of excellence to help it develop new technologies. But it does have more than eight years of experience leveraging supervised ML to support credit risk modeling and decision making. And in that time, it also adopted Salesforce.

“Salesforce is not just a CRM for us,” Thomas explains. “Salesforce is a digital platform, and it already had capabilities with Einstein as part of the platform, so we could cheaply and efficiently get into AI-enabled chatbots.”

The AI journey

The credit union started its new service strategy by deploying an Einstein-powered chatbot internally to support its IT service desk. The bot, which leveraged PenFed’s body of knowledge articles to assist end-users with tasks such as password resets, proved its effectiveness immediately and now handles about 25% of common internal service requests, freeing up service desk staff to focus on more complex tasks.

Once Thomas’s team developed experience with the platform, it began rolling out bots externally to the credit union’s members. Today, bots handle nearly 40,000 sessions per month, providing loan application status, product and servicing information, and technical support.

“We wanted to use AI internally before we unleashed it on the members,” Thomas says, adding that, with Einstein packaged with Salesforce, PenFed was able to conduct those internal experiments and later offer the new channel to its members at no extra cost.

PenFed now resolves 20% of cases on first contact with Einstein bots, with a 223% increase in chat and chatbot activity over the past year, Thomas says. The chat channel has also taken pressure off PenFed’s call center, which has reduced its average speed to answer by a minute, to less than 60 seconds, even as PenFed’s membership has increased by 31%.

But it is phase three of PenFed’s AI journey that Thomas is particularly excited about: Using generative AI for an assistant that can interact more naturally than a traditional chatbot while gathering data for insights that can lead to more personalized interactions.

“I don’t normally get hyped up on technology; I’m much more practical,” Thomas says, adding that his primary focus is always delivering value. “But what I’m seeing with generative AI is the missing ingredient to the world of digital, to the world of data.”

For years, CIOs have invested in data initiatives — data science, business intelligence, analytics — and they’ve also invested in digital channels, Thomas explains. But generative AI offers the potential to “snap data and digital together” to help institutions like PenFed go “from the digital credit union to the cognitive credit union,” he says.

Thomas offers up an example to illustrate his point. Today PenFed members can use the credit union’s digital channel to, say, change a CD from automatic to manual renewal. With gen AI in the mix, even as the bot helps a member perform this task, it can seek to understand the meaning behind it. In this case, the member may be shifting to manual renewal in order to facilitate moving their investments to a new account with another financial institution once the current CD matures.

“They’re going to take their money to [the other institution] because [the other institution] has got a better rate,” Thomas says. “Let’s say ours is 4.5% and theirs is 4.75%. In today’s world, we’re missing the digital forensics that members leave behind with the digital transaction.”

With generative AI, that insight could trigger the system to deliver the member a personalized offer of, say, 4.7% via the member’s channel of preference. The member gets a personalized experience, and the business could target members likely to churn rather than creating a marketing campaign that offers a 4.75% rate to 500,000 members.

“Now you get this hyper-personalized business transaction that benefits both parties,” Thomas says. “That’s just a small example. I think the combinations are endless.”
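The retention logic in Thomas’s example can be sketched in a few lines. The rates come from his hypothetical; the function, field names, and undercut margin are illustrative, not PenFed’s actual system:

```python
from typing import Optional

# Illustrative sketch of the retention offer in Thomas's example:
# counter a competitor's rate only for members showing churn signals,
# instead of raising the rate for every member. Values are hypothetical.
OUR_RATE = 4.50          # current CD rate, %
COMPETITOR_RATE = 4.75   # rate the member appears to be moving toward, %

def personalized_offer(churn_risk: bool, margin: float = 0.05) -> Optional[float]:
    """Return a targeted counter-rate for at-risk members, else no offer."""
    if not churn_risk:
        return None
    # Undercut the competitor slightly rather than matching outright.
    return round(COMPETITOR_RATE - margin, 2)

print(personalized_offer(churn_risk=True))   # 4.7: targeted retention offer
print(personalized_offer(churn_risk=False))  # None: no blanket rate increase
```

The business value is in the conditional: only members whose digital behavior signals churn get the richer rate, rather than a campaign that pays 4.75% to 500,000 members.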

The copilot approach

As with its previous phase, PenFed is starting to use gen AI as a “copilot” for the credit union’s internal employee support line before the team extends the technology to its members. The next step will likely be a copilot for call center representatives dealing with member calls.

The credit union is using Einstein GPT on the Salesforce Financial Services Cloud because that’s where its knowledge articles sit. It is in the process of standing up Salesforce Data Cloud, which will act as the connection to other data sources.

“Data Cloud is going to be the zero ETL capability,” Thomas says. “It will get real-time data from Salesforce clouds and from our Snowflake environment.”

As Thomas sees it, that combination of real-time data and AI insights will further transform PenFed’s customer experience to an intelligent, mutually beneficial one for both the credit union and its members.

Digital Transformation, Financial Services Industry, Generative AI