Until recently, software-defined networking (SDN) technologies have been limited to use in data centers — not manufacturing floors.

But as part of Intel’s expansive plans to upgrade and build a new generation of chip factories in line with its Integrated Device Manufacturing (IDM) 2.0 blueprint, unveiled in 2021, the Santa Clara, Calif.-based semiconductor giant opted to implement SDN within its chip-making facilities for the scalability, availability, and security benefits it delivers.

“Our concept was to use data center technologies and bring them to the manufacturing floor,” says Rob Colby, project lead. “We’ve had to swap the [networking infrastructure] that exists, which is classic Ethernet, and put in SDN. I’ve upgraded a whole factory from one code version to another code version without downtime for factory tools.”

Aside from zero downtime, moving to Cisco’s Application Centric Infrastructure (ACI) enabled Intel to solve the increasingly complex security challenges associated with new forms of connectivity, ongoing threats, and software vulnerabilities. The two companies spent more than a year planning and implementing, for Intel’s manufacturing process, security and automation technology that had previously been used only in data centers.

“This is revolutionary for us in the manufacturing space,” Colby says, noting that the cost savings from keeping the factory online, and the uninterrupted production that comes with it, are a major financial benefit that keeps on giving.

That ability to upgrade the networking infrastructure without downtime applies to downloading security patches and integrating tools into the production environment alike, Colby adds.  

“Picture a tool being the size of a house. One of our most recent tools is a $100 million tool, and landing a tool of that size involves a lot of complexity, after which I have to connect it so it can communicate with other systems within our infrastructure,” Colby says. “[Having SDN in place] makes landing tools faster and the quality increases. We’re also able to protect it at the level we need to be protecting it without missing something in the policy.”

Bringing SDN to the factory floor

The project, which earned Intel a 2023 US CIO 100 Award for IT innovation and leadership, has also enabled the chipmaker to perform network deployments faster with 85% less headcount.

Colby says it took a couple of years for the partners to build the blueprint and begin rolling out the solution to existing factories, including rigorous offline testing before beginning.

The migration required no retraining of chip designers in the clean room but some training for those in the manufacturing facilities. “We really went above and beyond to make it as seamless as possible for them,” Colby says. “We’ve recently been testing being able to migrate them over to ACI on the factory floor without any downtime. That will accelerate our migration for the rest of the factory floor.”

The collaboration with Cisco enables ACI to be deployed for factory floor process tools, embedded controllers, and new technologies such as IoT devices being introduced into the factory environment, according to Intel.

It was “clear that we needed to move to an infrastructure that better supported automation, offered more flexible and dynamic security capabilities, and could reduce the overall impact when planned or unplanned changes occur,” Intel wrote in a white paper about its switch to SDN. “The network industry has been trending toward SDN over the last decade, and Intel Manufacturing has been deploying Cisco Application Centric Infrastructure (ACI) in factory on-premises data centers since 2018, gaining experience in the systems and allowing for more market maturity.”

Moving ACI to the manufacturing factories was the next step, and Colby cited Sanjay Krishen and Joe Sartini, both Intel regional managers, as instrumental in bringing SDN to Intel’s manufacturing floor.

The broad view of SDN in manufacturing

There are thousands of semiconductor companies globally, with manufacturing capacity heavily concentrated in Taiwan. Yet the US CHIPS and Science Act of 2022 has incentivized more semiconductor manufacturing on US soil, and it is taking root.

“The use of cellular and WiFi connectivity on the factory floor has enabled these manufacturers to gain improved visibility, performance, output, and even maintenance,” says IDC analyst Paul Hughes.

“For any industry, software-defined networking brings additional scale and on-demand connectivity to what are now connected machines (industrial IoT),” Hughes says, adding that this also provides improved access to the cloud for data management, storage, analytics, and decision-making. “SDN allows networks to scale up securely when manufacturing activity scales and ensures that all the data generated by and used by machines and tools on the factory floor can move quickly across the network.”

As more semiconductor manufacturing springs up in the US, the use of SDN also “becomes one of the key steps in digital transformation where, in this case, a semiconductor manufacturer can collect, manage, and use data holistically from the factory floor to beyond the network edge,” says Hughes. His most recent survey, IDC’s 2023 Future of Connectedness Sentiment, shows that 41% of manufacturers believe that the flexibility to add or change bandwidth capacity in near real-time is a top reason for SDN/SD-WAN investment.

The survey also showed that 31% of manufacturers say optimized WAN traffic for latency, jitter, and packet loss is another top reason for SDN/SD-WAN investment and is considered very important for managing factory floor equipment in real-time.

Intel has deployed SDN in roughly 15% of its factories to date and will continue to migrate existing Ethernet-based factories to SDN. For new implementations, Intel has chosen to use open source Ansible playbooks and scripts from GitHub to accelerate its move to SDN.
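Automation of this kind typically drives the SDN controller’s REST API with rendered policy objects. As a hedged illustration — the object names follow Cisco ACI’s fvTenant/fvAp/fvAEPg conventions, but the tenant and endpoint-group names are invented, and this is a sketch rather than Intel’s actual tooling — a script might build the JSON payload for a factory-floor tenant like this:

```python
import json

def build_tenant_payload(tenant: str, app_profile: str, epgs: list[str]) -> dict:
    """Render an ACI-style tenant/application-profile/EPG hierarchy as a
    JSON-serializable payload. Object class names (fvTenant, fvAp, fvAEPg)
    follow Cisco ACI conventions; the rest is illustrative, not a complete
    APIC request."""
    return {
        "fvTenant": {
            "attributes": {"name": tenant},
            "children": [
                {
                    "fvAp": {
                        "attributes": {"name": app_profile},
                        "children": [
                            {"fvAEPg": {"attributes": {"name": epg}}}
                            for epg in epgs
                        ],
                    }
                }
            ],
        }
    }

# Hypothetical tenant for factory tools, with one EPG per tool class:
payload = build_tenant_payload("factory-floor", "litho-tools", ["scanners", "metrology"])
print(json.dumps(payload, indent=2))
```

Rendering policy as data like this is what lets playbooks apply the same segmentation consistently across many factories, rather than configuring each switch by hand.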

Intel certified Cisco’s ACI solution in time to deploy in high-volume factories built in Ireland and the US in 2022 and for more planned in Arizona, Ohio, New Mexico, Israel, Malaysia, Italy, and Germany in the coming years, according to the company.

Intel’s core partner on the SDN project is confident the benefits will remain sizable, even for a company of Intel’s scale.

“The biggest benefit is that SDN helped Intel complete new factory network builds with 85% less headcount and weeks faster through the use of automated scripts,” says Carlos Rojas, a sales and business developer who worked on the project. “Automation and SDN enable better scalability and consistency of security and policy controls, and the ability to deploy micro-segmentation, improving Intel’s security posture and reducing attack surfaces.”


The mainframe may seem like a relic of a day gone by, but truth be told, it’s still integral. According to the Rocket Software Survey Report 2022: The State of the Mainframe, four out of five IT professionals see the mainframe as critical to business success. At the same time, innovation and modernization are imperative for business survival.

When deciding which modernization path to take, some companies choose to scrap their mainframe, a costly endeavor that increases the risk of downtime and sacrifices its powerful benefits. With that in mind, what can businesses do to modernize their applications effectively?

Tap into open-source software

Mainframe-dependent businesses often think that open source is just for cloud-based products – but that assumption is incorrect. By introducing open-source software to mainframe infrastructure, companies will improve product development, speed time to market, and open the mainframe to new developers who will drive mainframe innovation.

Open-source software accelerates IBM Z® application development and delivery through modern tools that drive automation and integration to and from the mainframe. But success hinges on development support: without it, open-source software can create security and compliance risks and be difficult to maintain. Waiting on vulnerability fixes from the open-source community can expose an organization to a multitude of threats.

Another benefit of open-source software is that it provides the next generation of IT professionals – who may be unfamiliar with the mainframe – with familiar languages and tools that make it easy for them to manage the mainframe in much the same way they work with other platforms.

One of those technologies, Zowe, bridges the gap between modern applications and the mainframe. Introduced four years ago by the Linux Foundation’s Open Mainframe Project, Zowe and other open-source technologies like it provide organizations with the responsiveness and adaptability they need to implement advanced tools and practices that balance developers’ desire to work with the latest technology and organizational need for security and support.

Through DevOps and application development, businesses can bring the accessibility of open source to the mainframe while ensuring the compliance and security of their system’s data. Because of the development of open DevOps/AppDev solutions, businesses can deliver applications to market faster, at lower cost, and with less risk.

Modernizing in place

Many legacy systems, mainframe and distributed, lack connectivity and interoperability with today’s cloud platforms and applications not because of a lack of capability, but because of a lack of effort. Enterprises are at a crossroads for how to invest in their future infrastructure support and have a handful of options:

Operate as-is. This option may not include net-new investments but positions a business for failure against competitors.

Re-platform or “rip and replace” existing technology. While it addresses the modernization issue, it does so in a costly, disruptive, and time-consuming manner that forces businesses to throw away expensive technology investments.

Modernize in place. This makes it possible to embrace increasingly mature tools and technologies – from mainframe data virtualization and API development to hierarchical storage management (HSM) and continuous integration and continuous delivery (CI/CD) – that bring mainframe systems up to today’s IT infrastructure expectations.

Rocket Software data reveals that more than half of IT leaders favor modernizing in place – and less than 30 percent and 20 percent favor “operate as-is” and “re-platforming”, respectively. DevOps innovations (e.g., interoperability and integration, storage, automation, and performance and capacity management needs) have empowered IT leaders to see modernizing in place as a more cost-effective, less disruptive path to a hybrid cloud future.

Bet on hybrid cloud infrastructure

Modernizing in place to drive a hybrid cloud strategy presents the best path for enterprise businesses that need to meet the evolving needs of the customer and implement an efficient, sustainable IT infrastructure. The investment in cloud solutions bridges the skills gap and attracts new talent while not throwing away the investment in existing systems. Integrating automation tools and artificial intelligence capabilities in a hybrid model eliminates many manual processes, ultimately reducing workloads and improving productivity. The flexibility of a modernized hybrid environment also enables a continuously optimized operational environment with the help of DevOps and CI/CD testing.

The mainframe has stood the test of time – and it will continue to do so for the benefits it provides that the cloud does not. As you evolve your strategy, think about how best to leverage past technology investments with the modern app dev tools delivered through the cloud today to innovate on your mainframe technology.

Learn more about how Rocket Software and its solutions can help you modernize.


Economic instability and uncertainty are the leading causes of technology budget decreases, according to the IDG/Foundry 2022 annual State of the CIO survey. Despite the pressure to cut budgets, data remains the key factor in business success – especially during economic uncertainty. According to the Harvard Business Review, data-driven companies have better financial performance, are more likely to survive, and are more innovative.[1]

So how do companies find this balance and create a cost-effective data stack that can deliver real value to their business? A new survey from Databricks, Fivetran, and Foundry of more than 400 senior IT decision-makers in data analytics/AI roles at large global companies finds that 96% of respondents report negative business effects due to integration challenges. However, many IT and business leaders are discovering that modernizing their data stack overcomes those integration hurdles, providing the basis for a unified and cost-effective data architecture.

Building a performant & cost-effective data stack 

The Databricks, Fivetran, and Foundry report points the way for four investment priorities for data leaders: 

1. Automated data movement. A data pipeline is critical to the modern data infrastructure. Data pipelines ingest and move data from popular enterprise SaaS applications and operational and analytic workloads to cloud-based destinations like data lakehouses. As the volume, variety, and velocity of data grow, businesses need fully managed, secure, and scalable data pipelines that can automatically adapt as schemas and APIs change while continuously delivering high-quality, fresh data. Modernizing analytic environments with an automated data movement solution reduces operational risk, ensures high performance, and simplifies ongoing management of data integration. 

2. A single system of insight. A data lakehouse incorporates integration tools that automate ELT to enable data movement to a central location in near real time. By combining both structured and unstructured data and eliminating separate silos, a single system of insight like the data lakehouse enables data teams to handle all data types and workloads. This unified approach dramatically simplifies the data architecture and combines the best features of a data warehouse and a data lake. It enables improved data management, security, and governance in a single data architecture to increase efficiency and innovation. Last, it supports all major data and AI workloads, making data more accessible for decision-making.

A unified data architecture results in a data-driven organization that gains BI, analytics, and AI/ML insights at speeds comparable to those of a data warehouse, an important differentiator for tomorrow’s winning companies. 

3. Designed for AI/ML from the ground up. AI/ML is gaining momentum, as more than 80% of organizations are using or exploring the use of AI to stay competitive. “AI remains a foundational investment in digital transformation projects and programs,” says Carl W. Olofson, research vice president with IDC, who predicts worldwide AI spending will exceed $221B by 2025.[2] Despite that commitment, becoming a data-driven company fueled by BI analytics and AI insights is proving to be beyond the reach of many organizations that find themselves stymied by integration and complexity challenges. The data lakehouse solves this by providing a single solution for all major data workloads from streaming analytics to BI, data science, and AI. It empowers data science and machine learning teams to access, prepare, and explore data at scale.

4. Solving the data quality issue. Data quality tools (59%) stand out as the most important technology for modernizing the data stack, according to IT leaders in the survey. Why is data quality so important? Traditionally, business intelligence (BI) systems enabled queries of structured data in data warehouses for insights. Data lakes, meanwhile, contained unstructured data that was retained for the purposes of AI and machine learning (ML). However, maintaining siloed systems, or attempting to integrate them through complex workarounds, is difficult and costly. In a data lakehouse, metadata layers on top of open file formats increase data quality, while query engine advances improve speed and performance. This serves the needs of both BI analytics and AI/ML workloads, helping assure the accuracy, reliability, relevance, completeness, and consistency of data. 
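The kinds of checks such data quality tools run can be sketched in a few lines. The record fields and rules below are hypothetical, chosen only to illustrate completeness, validity, and consistency validation on a batch of records:

```python
from datetime import date

def validate_orders(records: list[dict]) -> list[str]:
    """Return a list of data-quality violations found in a batch of order
    records. Field names and rules are illustrative placeholders."""
    errors = []
    for i, rec in enumerate(records):
        if not rec.get("order_id"):                        # completeness
            errors.append(f"row {i}: missing order_id")
        if rec.get("amount", 0) < 0:                       # validity
            errors.append(f"row {i}: negative amount")
        if rec.get("ship_date") and rec.get("order_date") \
                and rec["ship_date"] < rec["order_date"]:  # consistency
            errors.append(f"row {i}: shipped before ordered")
    return errors

batch = [
    {"order_id": "A1", "amount": 120.0,
     "order_date": date(2023, 3, 1), "ship_date": date(2023, 3, 4)},
    {"order_id": "",   "amount": -5.0,
     "order_date": date(2023, 3, 2), "ship_date": date(2023, 3, 1)},
]
print(validate_orders(batch))
```

Running checks like these at the metadata layer, before bad rows land in downstream tables, is what keeps both BI dashboards and ML training data trustworthy.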

According to the Databricks, Fivetran, and Foundry report, nearly two-thirds of IT leaders are using a data lakehouse, and more than four out of five say they’re likely to consider implementing one. At a moment when cost pressure is calling into question open-ended investments in data warehouses and data lakes, savvy IT leaders are responding as they place a high priority on modernizing their data stack. 

Download the full report to discover exclusive insights from IT leaders into their data pain points, how they plan to address them, and what roles they expect cloud and data lakehouses to play in their data stack modernization.

[1] https://mitsloan.mit.edu/ideas-made-to-matter/why-data-driven-customers-are-future-competitive-strategy

[2] IDC’s Worldwide Artificial Intelligence Spending Guide, February 2022 (V1).


Efficient supply chain operations are increasingly vital to business success, and for many enterprises, IT is the answer.

With over 2,000 suppliers and 35,000 components, Kanpur-based Lohia Group was facing challenges in managing its vendors and streamlining its supply chain. The capital goods company, which has been in textiles and flexible packaging for more than three decades, is a major supplier of end-to-end machinery for the flexible woven (polypropylene and high-density polyethylene) packaging industry.

“In the absence of an integrated system, there was no control on vendor supply, which led to an increased and unbalanced inventory,” says Jagdip Kumar, CIO of Lohia. “There was also a mismatch between availability of stock and customer deliveries. At the warehouse level, we had no visibility with respect to what inventory we had and where it was located.”

Those issues were compounded by the fact that the lead time for certain components required to fulfill customer orders ranges from four to eight months. With such long component delivery cycles, client requirements often change. “The customer would want a different model of the machine, which required different components. As we used Excel and email, we were unable to quickly make course correction,” Kumar says. 

Moreover, roughly 35% of the components involved in each customer order are customized based on the customer’s specific requirements. Long lead times and a lack of visibility at the supplier’s end meant procurement planning for these components was challenging, he says, adding that, in the absence of any ability to forecast demand, Lohia was often saddled with unbalanced (either excess or insufficient) inventory.

The solution? Better IT.

Managing suppliers to enhance efficiency and customer experience

To manage its inventory and create a win-win situation for the company and its suppliers, Kumar opted to implement a vendor management solution.

“The solution was conceptualized with the goal of removing the manual effort required during the procurement process by automating most of the tasks of the company and the supplier while providing the updates that the former needed,” says Kumar.

“We roped in KPMG to develop the vendor portal for us on this SAP platform, which is developed on SAP BTP (Business Technology Platform), a business-centric, open, and unified platform for the entire SAP ecosystem,” he says.

The application was developed using SAP Fiori/UI5, while the backend was built with SAP OData/ABAP services. The cloud-based front end is integrated with Lohia’s ERP system, thereby providing all relevant information in real time. It took four months to implement the solution, which went live in September 2021.
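In an SAP OData integration like this, the front end typically reads backend data through entity-set URLs built from $filter and $select query options. The service path, entity set, and field names below are hypothetical, not Lohia’s actual schema, but they illustrate how such a query is assembled:

```python
from urllib.parse import quote

def odata_query(base_url: str, entity_set: str,
                filters: dict, select: list[str]) -> str:
    """Build an OData-style query URL from an equality filter map and a
    field projection. Names are illustrative placeholders."""
    filter_expr = " and ".join(f"{k} eq '{v}'" for k, v in filters.items())
    return (
        f"{base_url}/{entity_set}"
        f"?$filter={quote(filter_expr)}"
        f"&$select={','.join(select)}"
    )

# Hypothetical vendor-portal service fetching a supplier's delivery schedules:
url = odata_query(
    "https://erp.example.com/sap/opu/odata/sap/ZVENDOR_PORTAL_SRV",
    "DeliverySchedules",
    {"VendorId": "100234"},
    ["Material", "Quantity", "DeliveryDate"],
)
print(url)
```

Because the front end only ever sees these service endpoints, the ERP data model stays hidden behind a stable API, which is what makes real-time integration between the cloud portal and the on-premises ERP practical.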

With the new deployment, the company now knows the changes happening in real-time, be it the non-availability of material or a customer not making the payment or wanting to delay delivery of their ordered machine. “All these changes now get communicated to the vendors who prepone or postpone accordingly. Armed with complete visibility, we were able to reduce our inventory by 10%, which resulted in cost savings of around ₹ 200 million,” says Kumar.

The vendor portal has also automated several tasks such as schedule generation and gate entry, which have led to increases in productivity and efficiency.

“The schedules are now automatically generated through MRP [material requirement planning] giving visibility to our suppliers for the next three to four months, which helps them to plan their raw material requirements in advance and provide us timely material,” Kumar says. The result is a material shortage reduction of 15% and a 1.5X increase in productivity. “It has also helped us to give more firm commitments to our customers and our customers delivery has improved significantly, increasing customer trust,” he says.

“Earlier there was always a crowd at the gate as the entry of each truck took 10-15 minutes. The new solution automatically picks up the consignment details when the vendor ships it. At the gate, only the barcode is scanned, and the truck is allowed entry. With 100 trucks coming in every day, we now save 200-300 minutes of precious time daily,” he says.

Kumar’s in-house development team worked in tandem with KPMG to build custom capabilities on the platform, such as automatic scheduling and FIFO (first in, first out) inventory valuation.

To ensure suppliers would adopt the solution, Lohia deployed its own team at each vendor’s premises for two to three days to teach them how to use the portal.

“We showcased the benefits that they could gain over the next two to three months by using the solution,” Kumar says. “We have been able to onboard 200 suppliers, who provide 80% of the components, on this portal. We may touch 90-95% by the end of this year.”

Streamlining warehouse operations to enhance productivity

At the company’s central warehouse in Kanpur, Kumar faced traceability issues related to its spare parts business. Also, stock was spread across multiple locations and most processes were manual, leading to inefficient and inaccurate spare parts dispatches.

“There were instances when a customer asked for 100 parts, and we supplied only 90 parts. There were also cases wherein a customer had asked for two different parts in different quantities, and we dispatched the entire quantity comprising only one part,” says Kumar. “Then there was the issue of preference. As we take all the payment upfront from our customers, our preference is to supply the spare part on a ‘first come, first served’ basis. However, there could be another customer whose factory was down because he was awaiting a part. We could not prioritize that customer’s delivery over others.”

Another bottleneck was that the contract workers were not literate, leaving the company overly dependent on their experience.

To overcome these problems, and to integrate its supply chain logistics with its warehouse and distribution processes, Lohia partnered with KPMG to deploy the SAP EWM (Extended Warehouse Management) application in the cloud.

“We decided to optimize the warehouse processes with the usage of barcodes, QR codes, and Wi-Fi-enabled RF-based devices. There was also a need to synchronize warehouse activities through the integration of warehouse processes with tracking and traceability functions,” says Kumar. The implementation commenced on April 1, 2022, and went live on August 1, 2022.

To achieve traceability, Kumar barcoded Lohia’s entire stock. “We now get a list from the system on the dispatchable order and its sequence. Earlier there was a lot of time wastage, as we didn’t know which part was kept in which portion of the warehouse. Employees no longer take the zig-zag path as the new solution provides the complete path and the sequence in which they must go and pick up the material,” Kumar says.
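The routing logic behind that improvement can be sketched simply: sort pick tasks by storage-bin coordinates so each aisle is visited once instead of zig-zagged. The bin-location fields below are illustrative, not SAP EWM’s actual data model:

```python
def pick_sequence(tasks: list[dict]) -> list[dict]:
    """Order pick tasks by aisle, then bay, so a picker walks each aisle
    once in sequence. Field names are illustrative placeholders."""
    return sorted(tasks, key=lambda t: (t["aisle"], t["bay"]))

# Hypothetical spare-parts picks scattered across two aisles:
tasks = [
    {"part": "bearing", "aisle": 3, "bay": 7},
    {"part": "gearbox", "aisle": 1, "bay": 2},
    {"part": "spindle", "aisle": 3, "bay": 1},
    {"part": "belt",    "aisle": 1, "bay": 9},
]
route = pick_sequence(tasks)
print([t["part"] for t in route])
```

Once every bin is barcoded, the system knows each part’s coordinates, so it can emit the full pick list in walking order rather than leaving the route to the picker’s memory.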

Kumar also implemented aATP (Advanced Available-to-Promise), which provides a response to order fulfillment inquiries in sales and production planning. This feature within the EWM solution provides a check based on the present stock situation and any planned or anticipated stock receipts.

“The outcome was as per expectations. There was improved inventory visibility across the warehouse as well as for in-transit stock. The EWM dashboard helped warehouse supervisors maintain control over inbound, outbound, stock overview, resource management, and physical inventory,” says Kumar.

“Earlier one person used to complete only 30 to 32 parts in a day but after this implementation, the same person dispatches 47 to 48 parts in a day, which is a significant jump of 50% in productivity. The entire process has become 100% accurate with no wrong supply. If there is short supply, it is known to us in advance. There is also a 25% reduction in overall turnaround time in inbound and outbound processes,” he adds.


Every organization pursuing digital transformation needs to optimize IT from edge to cloud to move faster and speed time to innovation. But the devil’s in the details. Each proposed IT infrastructure purchase presents decision-makers with difficult questions. What’s the right infrastructure configuration to meet our service level agreements (SLAs)? Where should we modernize — on-premises or in the cloud? And how do we demonstrate ROI in order to proceed?

There are no easy, straightforward answers. Every organization is at a different stage in the transformation journey, and each one faces unique challenges. The conventional approach to IT purchasing decisions has been overwhelmingly manual: looking through spreadsheets, applying heuristics, and trying to understand all the complex dependencies of workloads on underlying infrastructure.

Partners and sellers are similarly constrained. They must provide a unique solution for each customer with little to no visibility into a prospect’s IT environment. This has created an IT infrastructure planning and buying process that is inaccurate, time-consuming, wasteful, and inherently risky from the perspective of meeting SLAs.

Smarter solutions make for smarter IT decisions

It’s time to discard legacy processes and reinvent IT procurement with a new approach that leverages the power of data-driven insights. For IT decision makers and their partners and sellers, a modern approach involves three essential steps to optimize procurement — and accelerate digital transformation:

1. Understand your VM needs

Before investing in infrastructure modernization, it’s critical to get a handle on your current workloads. After all, you must have a clear understanding of what you already have before deciding on what you need. To reach that understanding, enterprises, partners, and sellers should be able to collect and analyze fine-grained resource utilization data per virtual machine (VM) — and then leverage those insights to precisely determine the resources each VM needs to perform its job.

Why is this so important? VM admins often select from a menu of different-sized VM templates when they provision a workload. They typically do so without access to utilization data — which can lead to slowed performance from under-provisioning, or wasted capacity from over-provisioning if they choose an oversized template. It’s essential to right-size your infrastructure plan before proceeding.
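In practice, right-sizing from utilization data often means sizing to a high percentile of observed demand plus some headroom, rather than to the peak or the template default. A simplified sketch — the 95th-percentile rule, 20% headroom, and sample data are illustrative, not any vendor’s actual sizing logic:

```python
import math

def rightsize_vcpu(util_samples: list[float], percentile: float = 0.95,
                   headroom: float = 1.2) -> int:
    """Recommend a vCPU count from observed CPU demand (in vCPUs used):
    take the 95th-percentile demand and add 20% headroom. Both defaults
    are illustrative assumptions."""
    ordered = sorted(util_samples)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    return max(1, math.ceil(ordered[idx] * headroom))

# A VM provisioned with 8 vCPUs whose measured demand rarely exceeds 2 vCPUs:
samples = [0.6, 0.8, 1.1, 1.3, 0.9, 1.8, 2.0, 1.2, 0.7, 1.0]
recommended = rightsize_vcpu(samples)
print(recommended)  # far below the 8 vCPUs the template allocated
```

Using a percentile rather than the maximum keeps one transient spike from inflating the recommendation, while the headroom factor protects SLAs against demand slightly above what was observed.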

2. Model and price infrastructure with accuracy

Any infrastructure purchase requires a budget, or at least an understanding of how much money you intend to spend. To build that budget, an ideal IT procurement solution provides an overview of your inventory, including aggregate information on storage, compute, virtual resource allocation, and configuration details. It would also provide a simulator for on-premises IT that includes the ability to input your actual costs of storage, hosts, and memory. Bonus points for the ability to customize your estimate with depreciation term, as well as options for third-party licensing and hypervisor and environmental costs.

Taken together, these capabilities will tell you how much money you’re spending to meet your needs — and help you to avoid overpaying for infrastructure.

3. Optimize workloads across public and private clouds

Many IT decision makers wonder about the true cost of running particular applications in the public cloud versus keeping them on-premises. Public cloud costs often start out attractively low but can increase precipitously as usage and data volumes grow. As a result, it’s vital to have a clear understanding of cost before deciding where workloads will live. A complete cost estimate involves identifying the ideal configurations for compute, memory, storage, and network when moving apps and data to the cloud.

To do this, your organization and your partners and sellers need a procurement solution that can map their entire infrastructure against current pricing and configuration options from leading cloud providers. This enables you to make quick, easy, data-driven decisions about the costs of running applications in the cloud based on the actual resource needs of your VMs.

And, since you’ve already right-sized your infrastructure (step 1), you won’t have to worry about moving idle resources to the cloud and paying for capacity you don’t need.
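At its simplest, mapping right-sized VM requirements against a provider’s price list reduces to a best-fit search: find the cheapest instance type that still satisfies every dimension of the requirement. The instance names and hourly prices below are made-up placeholders, not real cloud pricing:

```python
# Hypothetical price list: (name, vCPUs, memory GiB, $/hour) -- not real pricing.
CATALOG = [
    ("small",   2,  8, 0.10),
    ("medium",  4, 16, 0.19),
    ("large",   8, 32, 0.38),
    ("xlarge", 16, 64, 0.77),
]

def cheapest_fit(vcpus: float, mem_gib: float):
    """Return the lowest-cost catalog entry that satisfies both the vCPU
    and memory requirement, or None if nothing in the catalog fits."""
    candidates = [t for t in CATALOG if t[1] >= vcpus and t[2] >= mem_gib]
    return min(candidates, key=lambda t: t[3]) if candidates else None

# A right-sized workload needing 3 vCPUs and 10 GiB lands on 'medium':
choice = cheapest_fit(3, 10)
print(choice)
```

A real procurement tool repeats this search across every VM and every provider’s current catalog, which is why feeding it right-sized requirements rather than template allocations changes the answer so dramatically.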

HPE leads the way in modern IT procurement

HPE has transformed the IT purchasing experience with a simple procurement solution delivered as a service: HPE CloudPhysics. Part of the HPE GreenLake edge-to-cloud platform, HPE CloudPhysics continuously monitors and analyzes your IT infrastructure, models that infrastructure as a virtual environment, and provides cost estimates of cloud migrations. Since it’s SaaS, there’s no hardware or software to deal with — and no future maintenance.

HPE CloudPhysics is powered by some of the most granular data capture in the industry, with over 200 metrics for VMs, hosts, data stores, and networks. With insights and visibility from HPE CloudPhysics, you and your sellers and partners can seamlessly collaborate to right-size infrastructure, optimize application workload placement, and lower costs. Installation takes just minutes, with insights generated in as little as 15 minutes.

Across industries, HPE CloudPhysics has already collected more than 200 trillion data samples from more than one million VM instances worldwide. With well over 4,500 infrastructure assessments completed, HPE CloudPhysics already has a proven record of significantly increasing the ROI of infrastructure investments.

This is the kind of game-changing solution you’re going to need to transform your planning and purchasing experience — and power your digital transformation.


About Jenna Colleran


Jenna Colleran is a Worldwide Product Marketing Manager at HPE. With over six years in the storage industry, Jenna has worked in primary storage and cloud storage, most recently in cloud data and infrastructure services. She holds a Bachelor of Arts degree from the University of Connecticut.
