Home

Business Consultant. Works in Digital Legal Management (since 2007), Knowledge Management (KM, since 1999), Contract Lifecycle Management (CLM, since 2006), Digital Contract Management (since 2009), and SaaS B2B (since 1998). Signer of the Agile Manifesto (2006). Founder of EuroCloud Italy (2010) and GM of Cloud for Europe (2016). Has published internet content on a range of subjects since 1996. Owner and founder of the brand B|KM for SaaS B2B production. Co-founder of AltonaSpain (2021) and Noticias Altona (2022), both in the audit/compliance sector for the Spanish market. Works in Spain, Italy, and the Netherlands. In his private life he’s a writer, art curator, and amateur music composer.


Recent articles
The technology industry is made up of just 26% women, compared to a nearly equal split at 49% across the total workforce. Most notably, that number has barely moved over the past 30 years, hovering around the same percentage and dipping slightly in recent years. But the lack of women in tech is a deeper issue that stems all the way back to childhood — with gender stereotypes that have historically, and inaccurately, suggested that women and girls are less skilled at math, science, and technology. Unfortunately, that persistent bias has grown into a self-fulfilling prophecy over the years, creating a systemic issue where girls and women aren’t well represented in STEM, and therefore don’t feel empowered or encouraged to pursue it as a career path.

“For a certain period of time in your life when you’re in middle school, your interest in STEM as a girl or as a non-binary learner can be impacted by the lack of representation that you see. You can have a spark, or you can have an interest, but if you don’t see yourself represented, you will not necessarily start taking those courses for high school, which will in fact impact your ability to participate in post-secondary, which will impact your ability to get a career in STEM later in your life,” says Rebecca Hazell, interim executive director of Hackergal.

To close the gender gap for women in STEM careers, girls need to be encouraged to maintain an interest in STEM during elementary, middle, and high school. And that’s the core of Hackergal’s mission — to create opportunities for young girls to engage with STEM education and to consider STEM careers as a potential option.

Fostering the talent pipeline in middle school

Hackergal works with middle school- and high school-aged girls, as well as nonbinary students, directly through educators and school districts. Learners connect with Hackergal through classrooms, community centers, homeschooling programs, summer school, hackathon events, and coding clubs across Canada. And each year, Hackergal hosts a hackathon event at the end of the school year for kids participating in coding clubs. The hackathons and coding clubs are targeted at grades six to nine, while the Hackergal Ambassador program is a highly competitive program for high schoolers who have aged out of Hackergal’s middle school coding and hackathon programs.

Hackergal uses a “teach the teacher” model, in which Hackergal connects with teachers, school boards, and school districts across Canada to directly train educators and provide them with information on the Hackergal Hub, Hazell says. The Hub enables teachers to bring a full coding curriculum to students that they can easily integrate into their lesson plans. “Whether during classroom time, or it’s an extracurricular, they create that safe space where the girls can learn, make mistakes, and raise their hand with confidence and feel comfortable with that,” says Hazell, who adds that a lot of educators express nervousness about implementing a coding club, especially when they have no experience with coding themselves. But Hackergal’s approach aims to empower any educator to expand their students’ access to coding education, regardless of the teacher’s own programming experience. Utilizing a platform called Lynx, which originated in Canada and is developed in English and French, Hackergal provides educational programming across the country for students and teachers.
The team at Hackergal has been intentional about making its curriculum available to students and teachers in any situation — whether they’re homeschooled or reside in rural areas of Canada, and regardless of language. “We know that there are certain populations who need our programming a lot and they need the support. They need the community, they need the connection, and the competence-building for their youth. And we’re more than happy to keep growing our program in the interest of serving them better,” says Hazell.

Empowering a future generation of workers

Hackergal’s current generation of learners is highly motivated to have a social impact in their work, says Hazell, adding that this is reflected in each year’s hackathon theme. Last year’s participants, for example, worked around the theme “coding together for our planet,” with a focus on sustainability and environmental issues, such as addressing pollution or developing innovative energy solutions. “We are very connected to social impact as an organization. It guides everything that we do,” Hazell says. “Research shows that girls specifically are more connected to tech and STEM learning if there is a social impact that’s aligned with that.”

Encouraging students’ passion for social progress is part of Hackergal’s commitment, given that as a generation Hackergal learners will face “some of the biggest problems that this world has ever encountered” and will be among those responsible for finding solutions, Hazell says. “The people who are using these skills that we’re training them on now are going to have careers that are directly involved in coming up with solutions, and trying to innovate, to make sure our planet is okay,” she adds. The program also helps its learners establish impressive resumes right out of high school, with some Hackergal students starting up companies by grade 11. That motivation and commitment will help them become top talent for organizations in the future. “They’re very motivated in that respect, the teenagers who are in our program, and they have a lot to offer and see the bigger picture. They’re thinking about what they can do and how it will impact the world going forward and what they can do to positively impact the world,” says Hazell.

Getting involved

For companies that want to work with Hackergal, it’s something of a “boutique” experience, says Laurel Maule, development manager at Hackergal. Because the organization doesn’t have a home base for students or a main operations center where companies can donate time or resources, corporate sponsors and donors typically work directly with Hackergal to support the organization’s specific needs. As more organizations focus on DEI, they’re turning to organizations like Hackergal to help build the talent pipeline as early as possible. For these organizations, it can also be an early branding opportunity, as they can put their company name in front of the future workforce. Maule says that organizations often reach out to ask how they can help expand the talent pipeline. Beyond financial donations, some volunteer an IT executive to speak at a hackathon or coding event, or to write a blog or record a video that might inspire the young learners. Or they might invite ambassador students to do a specialized coding camp at their offices or offer mentorship and advice to older students who are thinking about their careers. For the learners, it’s an opportunity to start fostering a network early on.
They’ll have experiences with a variety of organizations, professional connections throughout the industry, and unique guidance from technology leaders, all before they graduate high school. “The sky’s kind of the limit on how CIOs want to be engaged and how companies and employees want to be engaged. Each partnership, organization, or company that we work with brings its own special set of skills. We work closely with them to figure out how we can utilize and build that long-term partnership to support these girls throughout their learning process,” says Maule.

A sustainable support model

By keeping its resource needs low, and by working with government funding and directly with school districts and boards, Hackergal has been able to maintain a free program that enables students to learn, no matter their circumstances. “We work in a way that doesn’t draw down too much on our resources and allows us to have that creativity and that programming. And we are interested in growing and learning from what we do and trying to challenge ourselves to be as innovative as the kids need us to be, because that’s what we’re trying to share with them. We want to make sure that we’re providing the kind of programming that challenges, that keeps them excited,” says Hazell.

And those efforts are working, as girls are gaining confidence through the program. According to a survey of the latest hackathon’s participants, 97% said they felt more confident in their coding and digital skills after the hackathon, 96% said they were more interested in writing code, and 100% said they felt more knowledgeable. “You really can’t get better statistics in that sense, especially from a survey that you put out to kids that age group. It was fantastic to see that feedback, and I think we’re going to keep trying to meet that high satisfaction rate amongst our learners,” says Hazell.

Future of Hackergal

For the future, Hackergal is working on developing a full mentorship program for investors that will involve “more interactive, longer-term mentorship programs,” to further support students, says Hazell. The organization also seeks to continue offering the program for free, as many of the students who need this programming the most are the ones who can’t afford it, or who don’t have access to it, making equitable access a key to Hackergal’s mission. “I don’t think that we could say with conviction that we were serving those who need us most if we were charging for the resources that we are delivering,” says Hazell. Hackergal is also working to increase sponsorship opportunities. Last year, for the first time, Hackergal launched a scholarship program, awarding two ambassadors who had graduated from the ambassador program a $5,000 scholarship for tuition or other expenses, generously provided by Royal Bank of Canada. Organizations seeking to build their own talent pipeline through coding and STEM camps are also looking to Hackergal for advice on how to start, and how to continue that support beyond just one or two events. “I hear from some of our speakers, and they always say without fail, ‘I wish this program had been around when I was younger,’” says Hazell. Even if girls don’t end up in tech careers, the key is “feeling encouraged to try something that’s maybe a little bit scary or challenging,” and finding that motivation to “push through barriers and to keep going, feeling supported by a community,” says Hazell.
“Being able to partner with Hackergal — it’s kind of like you’re doing it for your younger self, especially if you’re a woman in leadership in tech,” Hazell says. “Partnering with Hackergal allows them to fulfill that wish or that deep-seated feeling of wanting to connect with that kid. And seeing that excitement, and some of the photos we have from our experiences, really makes me emotional because you see these kids, and they’re so excited to be a part of that community and that energy is special and it can have a bigger impact.”
Facing the possibility of an economic recession, one of the world’s leading professional services companies felt the urgency to improve its grasp on spend management – the practice of fully understanding and managing supplier relations and company purchasing. With 738,000 employees and $3.8 billion in services contracts, Accenture needed not only to identify every dollar being spent but also to assess whether the organization was getting full value from each expenditure.

But a sense of frustration pervaded the company, as procurement teams complained about limited visibility into contract terms and challenges tracking statement of work (SOW) agreements, the pacts specifying the goals and deadlines expected of external workers. The capacity to generate SOW contracts and effectively manage services spend varied by region, since each region relied on different processes and documentation requirements. Inadequate services spend visibility also increased exposure to local legal and regulatory risks. Likewise, customers were dissatisfied with a procurement process that was disjointed and inflexible when quick changes were needed.

Improvements were needed and the deadline was tight. “Procurement functions require a lot of time and effort working with suppliers to negotiate the best contract with the best terms,” said Patricia Miller, Accenture’s interim Chief Procurement Officer (CPO), “but if we are not able to compare the delivered service against that agreement in a systematic way, how can we assure that the hard-earned negotiated terms were applied?” To resolve this quandary, Accenture launched a campaign to build a vigorous, dynamic procurement function to unlock more value by providing extraordinary visibility into services spend.

The global standard at lightning speed

Based in Dublin, Ireland, Accenture specializes in digital, cloud, and security technology strategies, consulting, and operations, serving more than 40 industries in more than 120 countries. Now, as it conceptualized a new platform to effectively manage services spend, it was forced to change its deployment system. Previously, deployment planning was laborious, requiring substantial time and investment. The lengthy process slowed feedback on solution design, as well as delivery times on changes.

Yet Accenture had a dependable, long-term partner in enterprise resource planning (ERP) software pioneer SAP, having first adopted the company’s solutions in 2004. As it faced its latest challenge, Accenture chose SAP Fieldglass, a vendor management system for services procurement and external workforce organization, to provide reporting and analytics. In addition to implementing a global standard template – rather than a variety of country-specific prototypes – the solution would be customized to meet local invoicing, legal, and regulatory requirements. From submission to payment, not only would turnaround time be reduced, but collaboration and communication with suppliers were about to reach unprecedented levels.

Meeting changing markets and business demands

The function was deployed in 2020 in the first of many country-by-country rollouts. Although the typical technology deployment had taken an average of one year per nation, the expedited timeline enabled 14 countries to begin using the solution within 12 months. A global management team was also formed to support the effort.
Given the importance of the implementation, constant feedback was needed, and the enhanced technology amplified the level of dialogue, streamlining both testing activities and the ability to deliver required changes. Today, Accenture’s procurement arm is better equipped to meet changing market and business demands than ever before. For the first time, Accenture has a heightened understanding of the “hidden” workforce associated with its service business. Since external workers may not always fit traditional profiles, users are able to cull contract information to link specific employees to their individual skill sets. Explained Jane M. Kennedy, Global External Management Director for Accenture, “Today, we have much-improved…visibility for management [due to] an online solution that aligns to each worker’s type of engagement.”

Suppliers noted the ease of transitioning to SAP Fieldglass, and the pace at which entire companies were able to adopt the platform. Currently, 2,000 suppliers have implemented the system, while $1 billion in services spend is managed through the function each year, resulting in a more transparent supply chain and significant cost savings. That includes reduced fees for document storage in regions where procurement practices were primarily paper-based. Users report 99% greater accuracy, as well as a 7% error reduction per 10,000 SOWs.

For creating a global standard procurement process through its development of a novel solution, Accenture was named a finalist at the 2023 SAP Innovation Awards, a yearly ceremony honoring organizations using SAP technologies to improve business and society. You can read the company’s pitch deck to see what it accomplished to earn this honor.
SAP has appointed a new global head of artificial intelligence, Walter Sun, after the previous post-holder quit to found her own AI startup. For the past 18 years, Sun worked at Microsoft, most recently as VP of AI for its business and applications platform group. Sun has a PhD from MIT and continued to publish academic research papers during his time at Microsoft, in addition to teaching at Seattle and Washington universities.

As part of Microsoft’s development team, Sun created Bing Predicts, the inference engine that provides the “favored to win” forecasts beneath search results for sporting fixtures and attempted to predict the 2016 US presidential election winner. (Spoiler alert: it failed.) More usefully for enterprises, he also helped develop Dynamics 365 AI for Market Insights, a feature for Microsoft’s ERP and CRM platform that scans search data to provide enterprises with information about emerging trends in social interest and sentiment around their brands. Most recently, he was involved in the introduction of Dynamics 365 Copilot, which draws on OpenAI’s GPT-4 generative AI model to, among other things, help marketers write engaging sales pitches. In a recent blog post, Sun described how Microsoft researchers conducted experiments to compare the performance of different AI models for use in Dynamics 365. His colleagues also studied how to write the most effective prompts for soliciting useful responses from generative AI systems.

Sun replaces Feiyu Xu as SAP’s global head of AI. She joined the company in 2020, after a three-year stint in a similar role at Lenovo. Prior to that, she had worked for two decades at the German Research Center for Artificial Intelligence, DFKI. In the three years Xu led SAP’s AI initiatives, the company introduced AI technologies to many of its products, including tools for supply chain planning, expense management, customer experience, and online commerce. In May 2023, around the time Xu announced her intention to leave the company, SAP said it would embed IBM’s Watson AI technology into its products.

SAP’s AI product team

The entire AI unit that previously reported to Xu will now report to Sun, an SAP representative said. Sun’s team will include two VPs of AI technology, Sebastian Wieczorek and Ulf Brackmann; a CTO, Johannes Hoffart; and a global AI product manager, Nadine Hoffmann. Sun will report directly to Philipp Herzig, SAP’s head of cross-product engineering and experience, who reports to SAP’s executive board member for product engineering, Thomas Saueressig. SAP couldn’t say whether Sun will have a seat on the company’s AI Ethics Steering Committee as his predecessor, Xu, did. For now, the only representative of the AI team on the committee is Wieczorek, the VP of AI technology. The other eight committee members hold senior posts with responsibility for marketing, data protection, government affairs, legal, diversity, customer data, quality, and sustainability.

As for Xu, after leaving SAP, she co-founded Nyonic, a Berlin-based startup that aims to build industry-focused, multilingual AI models that meet European ethical and legal standards. Xu is Nyonic’s chief innovation officer, and her co-founders include serial AI entrepreneur Han Dong as CEO in Shanghai, NLP expert Johannes Otterbach as CTO, computational linguist Hans Uszkoreit as chief science officer, and Vanessa Cann, a board member of the German AI Association, as CEO for Europe. The company is hiring engineers in Berlin and Shanghai.
Until recently, software-defined networking (SDN) technologies have been limited to use in data centers — not manufacturing floors. But as part of Intel’s expansive plans to upgrade and build a new generation of chip factories in line with its Integrated Device Manufacturing (IDM) 2.0 blueprint, unveiled in 2021, the Santa Clara, Calif.-based semiconductor giant opted to implement SDN within its chip-making facilities for the scalability, availability, and security benefits it delivers.

“Our concept was to use data center technologies and bring them to the manufacturing floor,” says Rob Colby, project lead. “We’ve had to swap the [network] that exists, which is classic Ethernet, and put in SDN. I’ve upgraded a whole factory from one code version to another code version without downtime for factory tools.”

Aside from zero downtime, moving to Cisco’s Application Centric Infrastructure (ACI) enabled Intel to solve the increasingly complex security challenges associated with new forms of connectivity, ongoing threats, and software vulnerabilities. The two companies met for more than a year to plan and implement, for Intel’s manufacturing process, security and automation technology that had been used only in data centers. “This is revolutionary for us in the manufacturing space,” Colby says, noting that the cost savings from not taking the factory offline, along with uninterrupted production, are a major financial benefit that keeps on giving.

That ability to upgrade the networking infrastructure without downtime applies to downloading security patches and integrating tools into the production environment alike, Colby adds. “Picture a tool being the size of a house. One of our most recent tools is a $100 million tool, and landing a tool of that size involves a lot of complexity, after which I have to connect it so it can communicate with other systems within our infrastructure,” Colby says. “[SDN] makes landing tools faster and the quality increases. We’re also able to protect it at the level we need to be protecting it without missing something in the policy.”

Bringing SDN to the factory floor

The project, which earned Intel a 2023 US CIO 100 Award for IT innovation and leadership, has also enabled the chipmaker to perform network deployments faster with 85% less headcount. Colby says it took a couple of years for the partners to build the blueprint and begin rolling out the solution to existing factories, including rigorous offline testing beforehand. The migration required no retraining of chip designers in the clean room but some training for those in the manufacturing facilities. “We really went above and beyond to make it as seamless as possible for them,” Colby says. “We’ve recently been testing being able to migrate them over to ACI on the factory floor without any downtime. That will accelerate our migration for the rest of the factory floor.”

The collaboration with Cisco enables ACI to be deployed for factory floor process tools, embedded controllers, and new technologies such as IoT devices being introduced into the factory environment, according to Intel. It was “clear that we needed to move to an infrastructure that better supported automation, offered more flexible and dynamic security capabilities, and could reduce the overall impact when planned or unplanned changes occur,” Intel wrote in a white paper about its switch to SDN.
“The network industry has been trending toward SDN over the last decade, and Intel Manufacturing has been deploying Cisco Application Centric Infrastructure (ACI) in factory on-premises data centers since 2018, gaining experience in the systems and allowing for more market maturity.” Moving ACI to the manufacturing factories was the next step, and Colby cited Sanjay Krishen and Joe Sartini, both Intel regional managers, as instrumental in bringing SDN to Intel’s manufacturing floor.

The broad view of SDN in manufacturing

There are thousands of semiconductor companies globally, with manufacturing heavily concentrated in Taiwan. Yet the US government’s CHIPS and Science Act of 2022 has incentivized more semiconductor manufacturing on US soil, and it is taking root. “The use of cellular and WiFi connectivity on the factory floor has enabled these manufacturers to gain improved visibility, performance, output, and even maintenance,” says IDC analyst Paul Hughes.

“For any industry, software-defined networking brings additional scale and on-demand connectivity to what are now connected machines (industrial IoT),” Hughes says, adding that this also provides improved access to the cloud for data management, storage, analytics, and decision-making. “SDN allows networks to scale up securely when manufacturing activity scales and ensures that all the data generated by and used by machines and tools on the factory floor can move quickly across the network.”

As more semiconductor manufacturing springs up in the US, the use of SDN also “becomes one of the key steps in digital transformation where, in this case, a semiconductor manufacturer can collect, manage, and use data holistically from the factory floor to beyond the network edge,” says Hughes, whose most recent survey, IDC’s 2023 Future of Connectedness Sentiment, shows that 41% of manufacturers believe the flexibility to add or change bandwidth capacity in near real time is a top reason for SDN/SD-WAN investment. The survey also showed that 31% of manufacturers say optimized WAN traffic for latency, jitter, and packet loss is another top reason for SDN/SD-WAN investment and is considered very important for managing factory floor equipment in real time.

Intel has deployed SDN in roughly 15% of its factories to date and will continue to migrate existing Ethernet-based factories to SDN. For new implementations, Intel has chosen to use open source Ansible playbooks and scripts from GitHub to accelerate its move to SDN. Intel certified Cisco’s ACI solution in time to deploy in high-volume factories built in Ireland and the US in 2022, and for more planned in Arizona, Ohio, New Mexico, Israel, Malaysia, Italy, and Germany in the coming years, according to the company.

Intel’s core partner on the SDN project is confident the gains will continue to be sizable — even for a company of Intel’s size. “The biggest benefit is that SDN helped Intel complete new factory network builds with 85% less headcount and weeks faster through the use of automated scripts,” says Carlos Rojas, a sales and business developer who worked on the project. “Automation and SDN enable better scalability and consistency of security and policy controls, and the ability to deploy micro-segmentation, improving Intel’s security posture and reducing attack surfaces.”
Nvidia’s transformation from an accelerator of video games to an enabler of artificial intelligence (AI) and the industrial metaverse didn’t happen overnight — but the leap in its stock market value to over a trillion dollars did. It was when Nvidia reported strong results for the three months to April 30, 2023, and forecast its sales could jump by 50% in the following fiscal quarter, that its stock market valuation soared, catapulting it into the exclusive trillion-dollar club alongside well-known tech giants Alphabet, Amazon, Apple, and Microsoft. The once-niche chipmaker, now a Wall Street darling, was becoming a household name.

Investor exuberance waned later that week, however, dropping the chip designer out of the trillion-dollar club in short order, just as former members Meta and Tesla had before it. But it was soon back in, and in mid-June, investment bank Morgan Stanley forecast Nvidia’s value could continue to rise another 15% before the year was out. By late August, Nvidia had more than justified its earlier optimism, reporting a quarter-on-quarter increase in revenue of 88% for the three months to July 30, driven by record sales of data center products of over $10 billion, with strong demand from AWS, Google, Meta, Microsoft, and Oracle. Its stock price, too, continued to climb, bumping up against the $500 level Morgan Stanley forecast.

Unlike most of its trillion-dollar tech cohorts, Nvidia has less consumer brand awareness to go on, making its Wall Street leap more mysterious to Main Street. How Nvidia got here and where it’s going next sheds light on how the company has achieved that valuation — a story that owes a lot to the rising importance of specialty chips in business, and accelerating interest in the promise of generative AI.

Graphics driver

Nvidia started out in 1993 as a fabless semiconductor firm designing graphics accelerator chips for PCs. Its founders spotted that generating 3D graphics in video games — then a fast-growing market — placed highly repetitive, math-intensive demands on PC central processing units (CPUs). They realized those calculations could be performed more rapidly in parallel by a dedicated chip rather than in series by the CPU, an insight that led to the creation of the first Nvidia GeForce graphics cards.

For many years, graphics drove Nvidia’s business; even 30 years on, its sales of graphics cards for gaming, including the GeForce line, still make it the biggest vendor of discrete graphics cards in the world. (Intel makes more graphics chips, though, because most of its CPUs ship with the company’s own integrated graphics silicon.) Over the years, other uses for the parallel-processing capabilities of Nvidia’s graphical processing units (GPUs) emerged, solving problems with a similar matrix arithmetic structure to 3D-graphics modelling. Still, software developers seeking to leverage graphics chips for non-graphical applications had to wrangle their calculations into a form that could be sent to the GPU as a series of instructions for either Microsoft’s DirectX graphics API or the open-source OpenGL (Open Graphics Library). Then in 2006 Nvidia introduced a new GPU architecture, CUDA, that could be programmed directly in C to accelerate mathematical processing, simplifying its use in parallel computing. One of the first applications for CUDA was in oil and gas exploration, processing the mountains of data from geological surveys.
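To give a sense of what that C-level programmability looks like in practice, here is a minimal, illustrative CUDA sketch (not taken from Nvidia or this article; the kernel and variable names are our own) that adds two large arrays element by element, with each GPU thread handling one element instead of the CPU looping over them all in sequence:

    // Illustrative only: element-wise vector addition in CUDA C.
    // Each GPU thread computes one output element, so the additions run in
    // parallel across thousands of threads rather than serially on the CPU.
    #include <cstdio>
    #include <cstdlib>
    #include <cuda_runtime.h>

    __global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;                  // about one million elements
        const size_t bytes = n * sizeof(float);

        // Host (CPU) buffers.
        float *ha = (float *)malloc(bytes);
        float *hb = (float *)malloc(bytes);
        float *hc = (float *)malloc(bytes);
        for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

        // Device (GPU) buffers, plus copies of the input data.
        float *da, *db, *dc;
        cudaMalloc((void **)&da, bytes);
        cudaMalloc((void **)&db, bytes);
        cudaMalloc((void **)&dc, bytes);
        cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
        cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

        // Launch enough blocks of 256 threads to cover every element.
        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        vectorAdd<<<blocks, threads>>>(da, db, dc, n);

        // Copy the result back and spot-check it.
        cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
        printf("c[0] = %.1f\n", hc[0]);         // expect 3.0

        cudaFree(da); cudaFree(db); cudaFree(dc);
        free(ha); free(hb); free(hc);
        return 0;
    }

Compiled with Nvidia's nvcc toolchain (for example, nvcc vector_add.cu), a kernel like this is the simplest form of the pattern CUDA made broadly accessible: the same arithmetic applied across millions of data points at once, which is the shape of the workloads, from seismic data processing to neural network training, that the article describes.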
The market for using GPUs as general-purpose processors (GPGPUs) really opened up in 2009, when OpenGL publisher Khronos Group released Open Computing Language (OpenCL). Soon, hyperscalers such as AWS added GPUs to some of their compute instances, making scalable GPGPU capacity available on demand, thereby lowering the barrier of entry to compute-intensive workloads for enterprises everywhere.

AI, crypto mining, and the metaverse

One of the biggest drivers of demand for Nvidia’s chips in recent years has been AI, or, more specifically, the need to perform trillions of repetitive calculations to train machine learning (ML) models. Some of those models are truly gargantuan: OpenAI’s GPT-4 is said to have over 1 trillion parameters. Nvidia was an early supporter of OpenAI, even building a special compute module based on its H100 processors to accelerate the training of the large language models (LLMs) the company was developing.

Another unexpected source of demand for the company’s chips has been cryptocurrency mining, the calculations for which can be performed faster and in a more energy-efficient manner on a GPU than on a CPU. Demand for GPUs for cryptocurrency mining meant that graphics cards were in short supply for years, making GPU manufacturers like Nvidia similar to pick-axe retailers during the California Gold Rush.

Although Nvidia’s first chips were used to enhance 3D gaming, the manufacturing industry is also interested in 3D simulations, and its pockets are deeper. Going beyond the basic rendering and accelerating code libraries of OpenGL and OpenCL, Nvidia has developed a software platform called Omniverse — a metaverse for industry used to create and view digital twins of products or even entire production lines in real time. The resulting imagery can be used for marketing or collaborating on new designs and manufacturing processes.

Efforts to stay in the $1t club

Nvidia is driving forward on many fronts. On the hardware side, it continues to sell GPUs for PCs and some gaming consoles; supplies computational accelerators to server manufacturers, hyperscalers, and supercomputer manufacturers; and makes chips for self-driving cars. It’s also in the service business, operating its own cloud infrastructure for pharmaceutical firms, the manufacturing industry, and others. And it’s a software vendor, developing generic libraries of code that anyone can use to accelerate calculations on Nvidia hardware, as well as more specific tools such as its cuLitho package to optimize the lithography stage in semiconductor manufacturing.

But interest in the latest AI tools such as ChatGPT (developed on Nvidia hardware), among others, is driving a new wave of demand for Nvidia hardware, and prompting the company to develop new software to help enterprises develop and train the LLMs on which generative AI is based. In the last few months the company has also partnered with software vendors including Adobe, Snowflake, ServiceNow, Hugging Face, and VMware, to ensure the AI elements of their enterprise software are optimized for its chips. “Because of our scale and velocity, we’re able to sustain this really complex stack of software and hardware, networking and compute across all these different usage models and computing environments,” CEO Jensen Huang said during a call on August 23 to discuss the latest earnings.
Nvidia is also pitching AI Foundations, its cloud-based generative AI service, as a one-stop shop for enterprises that might lack the resources to build, tune, and run custom LLMs trained on their own data to perform tasks specific to their industry. The move, announced in March, may be a savvy one, given rising business interest in generative AI, and it pits the company in direct competition with the hyperscalers that also rely on Nvidia’s chips. Nvidia AI Foundations models include NeMo, a cloud-native enterprise framework; Picasso, an AI capable of generating images, video, and 3D applications; and BioNeMo, which deals in molecular structures, making generative AI particularly interesting for accelerating drug development, where it can take up to 15 years to bring a new drug to market. Nvidia says its hardware, software, and services can cut early-stage drug discovery from months to weeks. Amgen and AstraZeneca are among the pharmaceutical firms testing the waters, and with US pharmaceutical firms alone spending over $100 billion a year on R&D, more than three times Nvidia’s revenue, the potential upside is clear.

Pharmaceutical development is moving faster, but the road toward widespread adoption of another of Nvidia’s target markets is less clear: self-driving cars have been “just around the corner” for years, but testing and getting approval for use on the open road is proving even more complex than getting approval for a new drug. Nvidia gets two bites at this market. One is building and running the virtual worlds in which self-driving algorithms are tested without putting anyone at risk. The other is the cars themselves. If the algorithms make it out of the virtual world and onto the roads, cars will need chips from Nvidia and others to process real-time imagery and perform the myriad calculations needed to keep them on course. This is the smallest market segment Nvidia breaks out in its quarterly results: just $253 million, or 2% of overall sales, in the three months to July 30, 2023. But it’s a segment that’s been more than doubling each year.

When it reported its results for the three months to April 30, Nvidia made an ambitious forecast: that its revenue for the following fiscal quarter, ending July 30, would be over 50% higher — and it went on to beat that figure by a wide margin, reporting revenue of $13.5 billion. Gaming hardware sales were also up 22% year on year, and 11% quarter on quarter, growth that would be impressive for most consumer electronics companies but lags far behind the recent growth in Nvidia’s biggest market — data centers. The proportion of its overall revenue coming from gaming has shrunk from over one-third in the three months to April 30 to just under one-fifth in the period to July 30. Nevertheless, Nvidia still sees opportunity ahead, as less than half of its installed base has upgraded to graphics cards with the GeForce RTX technology it introduced in 2018, CFO Colette Kress said during the call.

Huang and Kress both talked up how clearly Nvidia can see future demand for its consumer and data center products, well into next year. “The world is transitioning from general-purpose computing to accelerated computing,” Huang said. With around $250 billion in capital expenditure on data centers every year, according to Huang, the potential market for Nvidia is enormous as that transition plays out. “Demand is tremendous,” he said, adding that the company is significantly expanding its production capacity to boost supply for the rest of this year and into next.
Nevertheless, Kress was more reserved in her projections for the three months to October 30, saying she expects revenue of between $15.7 billion and $16.3 billion, or quarter-on-quarter growth between 16% and 21%. All eyes will be on the company’s next earnings announcement, on November 21.