By Liia Sarjakoski, Principal Product Marketing Manager, 5G Security, Palo Alto Networks

Governments, organizations, and businesses are readily embracing transformation at the edge of mobile networks. Mobile edge – with its distributed support for low latency, capacity for rapid delivery of massive amounts of data, and scalable cloud-native architectures – enables mission-critical industrial and logistics applications and creates richer experiences across remote working, education, retail, and entertainment. Bringing resources closer to the user enables a better user experience, serves mission-critical applications, and takes advantage of improved economics.

But mobile edge, including Multi-access Edge Computing (MEC), requires a new approach to cybersecurity. It is a new environment where the network, applications, and services are distributed not only geographically but also across organizational boundaries. Service providers’ 5G infrastructure and enterprise networks will be deeply intertwined. Further, the mobile edge will be highly adaptive, dynamically scaling to meet the demands of new applications and changing usage patterns.

Effective 5G edge security is best achieved through a platform approach that combines the protection of diverse mobile edge environments under one umbrella. A platform approach not only provides visibility for advanced, network-wide threat detection but also provides the necessary foundation for security automation. Automation is vital for security to keep up with the dynamically changing 5G environment.

We can think of 5G networks as comprising four types of edge environments. Effective edge security spans all of them.


Regional data centers — protect distributed core network with distributed security

Driven by the explosion of mobile data and the need to improve customer experience, service providers are distributing core network functions — e.g., the Session Management Function (SMF) and the Access and Mobility Management Function (AMF) — closer to users, into regional data centers. This improves user traffic latency and lets service providers optimize their transport architecture for cost savings.

As network functions — e.g., SMF and AMF — are brought to the edge of the network, securing them needs to take place there as well. Instead of providing protection at one to three national data centers, it now needs to be implemented at five to ten regional data centers. The key interfaces to protect are N2 and N4. Unprotected N2 interfaces can be vulnerable to Radio Access Network (RAN) based threats from gNodeB base stations (gNBs). Unprotected N4 interfaces can be vulnerable to Packet Forwarding Control Protocol (PFCP) threats between a distributed user plane function (UPF) — e.g., located in a MEC environment — and the core network.
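To make the N4 threat surface concrete, here is a minimal, illustrative sketch (not a description of any vendor’s product) that parses the fixed PFCP header carried over N4 on UDP port 8805 and flags message types outside an allowlist. The header layout follows 3GPP TS 29.244; the allowed-type set is a made-up example, and a real policy would be derived from the operator’s deployment.

```python
import struct

PFCP_PORT = 8805  # PFCP runs over UDP port 8805 (3GPP TS 29.244)

# Hypothetical allowlist for this example only; a real policy would be
# derived from the operator's deployment and traffic baseline.
ALLOWED_MESSAGE_TYPES = {
    1,   # Heartbeat Request
    2,   # Heartbeat Response
    50,  # Session Establishment Request
    51,  # Session Establishment Response
}

def inspect_pfcp(payload: bytes) -> str:
    """Inspect a raw PFCP message and return a verdict string.

    PFCP header: octet 1 = flags (version in the top 3 bits),
    octet 2 = message type, octets 3-4 = message length.
    """
    if len(payload) < 4:
        return "drop: truncated PFCP header"
    flags, msg_type, length = struct.unpack("!BBH", payload[:4])
    version = flags >> 5
    if version != 1:
        return f"drop: unsupported PFCP version {version}"
    if msg_type not in ALLOWED_MESSAGE_TYPES:
        return f"alert: unexpected PFCP message type {msg_type}"
    return f"allow: message type {msg_type}, length {length}"

# Example: a Heartbeat Request header (version 1, type 1, length 12)
print(inspect_pfcp(bytes([0x20, 0x01, 0x00, 0x0c])))
```

A production firewall would of course go far beyond header fields, but even this level of protocol awareness is what distinguishes telco-aware security from generic packet filtering.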

Additionally, SMF, AMF, and other network function workloads need protection in this typically cloud-native, container-based environment.

The key to protecting the regional data center environment is a cloud-native security platform that can automatically scale to changing traffic and topology demands. At the same time, many of the threats are telco-specific, and preventing them requires built-in support for telco protocols.

Public MEC — support user experience with cloud-native security

Public MEC is part of the public 5G network and typically serves consumer and IoT use cases. It integrates applications into the 5G network and brings them closer to the user. This improves the user experience while also optimizing cost by deploying resources where they are needed. Public MEC is built into the service provider’s network by using a distributed user plane function (UPF) to break out traffic directly to edge applications. Many service providers are partnering with cloud service providers (CSPs) to build these application environments, as the CSP platforms have become the standard.

As third-party applications become an integral part of 5G networks, protecting and monitoring the application workloads and protecting the UPF with microsegmentation help stop the lateral movement of attacks.

Edge applications are also integral to the 5G user experience. Smoothly working applications, such as video content, AR/VR, and gaming, improve service providers’ customer retention.

Securing the public MEC calls for a cloud-native, multi-cloud approach for cloud workloads and microsegmentation.

Private MEC — empower enterprises with full control over 5G traffic

Private MEC is deployed at an enterprise customer’s premises and is often set up alongside a private 5G or LTE network, serving mission-critical enterprise applications. It utilizes a local UPF to break out user plane traffic to the enterprise network. The traffic is further routed to a low-latency edge application or deeper into the enterprise network. A key driver for private MEC adoption is privacy of the traffic — an enterprise has full control of its 5G traffic, which never leaves its environment.

Private MEC carries the enterprise customer’s data. In today’s distributed world of eroded security perimeters, many enterprises rely on a Zero Trust approach to protect their users, applications, and infrastructure. A critical building block for implementing a Zero Trust Enterprise is the ability to enforce granular security policies and security services across all network segments — including 5G traffic. Service providers need to find ways to empower enterprise customers with full control over their 5G traffic.

At the same time, the service provider needs to securely expose interfaces from the customer’s premises to their core network — namely the N4 interface to SMF for PFCP signaling traffic originating from the private MEC.

Private MEC security requires a flexible approach that brings security to heterogeneous private MEC environments across appliance, virtual, and cloud form factors. Many enterprises will leverage partners for turnkey private MEC solutions, and they will require built-in security. Cloud service providers are also pursuing the private MEC market, where the ability to provide cloud-native security will be critical.

Mobile devices — most effectively protected with network-based security solutions

Accelerated by the rapid growth of IoT, the number of mobile devices is massive. The devices are heterogeneous, spanning a multitude of software and hardware platforms. The limited computing and battery capacity of these devices often forces device vendors to compromise on security capabilities, making mobile devices a soft target. Infected devices can compromise organizations’ business-critical data and disrupt mission-critical operations. They also pose a risk to the mobile network itself, especially in the case of massive, coordinated, botnet-originated DDoS attacks.

The combination of limited device resources, heterogeneous device types, and device vendors’ tight control of their platforms makes it difficult to implement device-based security solutions at scale. Network-based security, on the other hand, is a highly effective way to protect mobile devices at scale. When supported with granular visibility into user (SUPI) and device (PEI) level traffic flows, network-based security can see and stop advanced threats in real time. Organizations are able to protect their mobile devices across attack vectors, including vulnerability exploits, ransomware, malware, phishing, and data theft.
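As a rough illustration of how SUPI/PEI-level visibility can be used, the sketch below assumes flow records that have already been enriched with subscriber (SUPI) and device (PEI) identifiers. The record format, verdicts, and threshold are all hypothetical; the point is that per-device correlation lets the network quarantine repeatedly infected devices without any agent on the device.

```python
from collections import Counter

# Hypothetical enriched flow records: each flow carries the subscriber
# identity (SUPI), the device identity (PEI), and a threat verdict from
# network-based inspection. The values below are made up.
flows = [
    {"supi": "imsi-001010000000001", "pei": "imei-350000000000001", "verdict": "malware"},
    {"supi": "imsi-001010000000001", "pei": "imei-350000000000001", "verdict": "benign"},
    {"supi": "imsi-001010000000001", "pei": "imei-350000000000001", "verdict": "malware"},
    {"supi": "imsi-001010000000002", "pei": "imei-350000000000002", "verdict": "benign"},
]

def devices_to_quarantine(flows, threshold=2):
    """Return PEIs whose malicious-verdict count reaches the threshold."""
    hits = Counter(f["pei"] for f in flows if f["verdict"] == "malware")
    return sorted(pei for pei, n in hits.items() if n >= threshold)

print(devices_to_quarantine(flows))  # ['imei-350000000000001']
```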

Network-based security can be deployed as part of any of the edge environments or the service provider’s core network.

Staying on top of privacy in distributed 5G networks

Protecting private information is more important than ever. Handling of private information is heavily regulated, and breaches can result in public backlash. As the mobile core network becomes more distributed, service providers need to double down on protecting Customer Proprietary Network Information (CPNI), which is now often carried in the signaling traffic (e.g., N4) between MEC sites and regional and national data centers. Service providers often use encryption to protect CPNI.


Securing the 5G edge requires a Zero Trust approach that can scale across multiple different environments. The distributed 5G network no longer has a clear perimeter. Service providers’, enterprises’, and CSPs’ assets and workloads are intertwined. Only through visibility and control across the whole system can service providers and enterprises detect breaches and lateral movement, and stop kill chains.

The new mobile networks are complex, but securing them doesn’t need to be. The key to simple 5G edge security is a platform approach that protects the key 5G interfaces under a single umbrella — wherever they are distributed across private and public telco clouds and data centers.

Learn more about Palo Alto Networks 5G-Native Security for protecting 5G interfaces, user traffic, network function workloads, and more. Our ML-Powered NGFW for 5G provides deep visibility into all key 5G interfaces and can be deployed across data center (PA-Series), virtual (VM-Series), and container-based (CN-Series) environments. Our Prisma Cloud Compute provides cloud-native protection for container-based network function (CNF) workloads.

About Liia Sarjakoski:
Liia is the Principal Product Marketing Manager, 5G Security, at Palo Alto Networks.


The world has become far more complicated. For businesses, the need to balance employee safety, changed expectations about how and where we work, and a shifting threat landscape has transformed the very nature of how we use our computers. While users have always wanted safe, reliable, and high-performing PCs and notebooks, delivering this in the post-pandemic world poses an immense challenge. And with workplaces and teams distributed more widely than ever before, manageability faces a whole new set of obstacles.


Organisations need to ensure the computing platform they choose can deliver the performance they need while being as energy efficient as possible. The winner of a Grand Prix isn’t the fastest car. It’s the fastest car that stays in the race the longest. Performance is about more than the fastest CPU; it’s about ensuring you have the right processor, chipset, network and firmware all tuned to work together in harmony and at peak efficiency.

Great performance is about ensuring your computing platform ticks all those boxes.


If we think about that Grand Prix-winning car: as well as having a powerful motor and great fuel efficiency so it can race faster for longer, it is also equipped with a variety of equipment to keep the driver and those around them safe. Today’s threat environment moves faster and can impact an organisation more quickly than ever before. Adversaries are constantly changing how they attack and are exploiting newly discovered vulnerabilities.

New software patches, to thwart emerging threats and mitigate the risks of vulnerabilities, need to be deployed easily and quickly. Organisations also need to be able to protect their data, which demands the capability to remotely fix or wipe a device should it be lost or stolen.

The technology platform you choose needs built-in, multilayer hardware-based security above and below the operating system to help defend against attacks so IT teams can react quickly when a threat is detected without slowing users down, even when PCs are far from home. Security needs to be built into the technology platform by design and not bolted in as an afterthought.


The COVID pandemic has changed the nature of work. Teams are now more distributed than ever so IT teams can’t rely on physical access to systems in order to support them. Old-school remote access systems were difficult to deploy and only gave IT teams limited ability to diagnose and fix problems.

Today’s computing platforms enable IT teams to remotely log in to users’ laptops to fix most issues, even if an operating system fails. Technology management and support teams need a platform that allows them to remotely log in to the device, wipe it if necessary and reinstall the operating systems and applications. This is a game changer for remote support.

A powerful manageability platform gives full KVM (Keyboard, Video, Mouse) capability throughout the power cycle – including uninterrupted control of the desktop while an operating system loads. And it gives authorised support staff the ability to access and reconfigure the BIOS so every aspect of the user’s experience can be controlled and optimised.


A winning Formula One car is more than the sum of its individual parts and a great PC is more than just hardware. An optimised platform ensures all the parts of the system work together perfectly so it doesn’t let users down or make support harder.

That requires the computing platform to be rigorously tested. And, as well as offering benefits for users in their day-to-day work, a stable platform delivers smoother fleet management. With the cost of supporting a PC estimated at around $5,000 per year according to Gartner, building an easy-to-manage, stable fleet of computers on a well-designed and thoroughly tested computing platform can deliver great value to organisations.

Organisations looking for a platform that supports these four pillars should look for computers built on a foundation that delivers great performance and security, keeps users working, and supports them whenever they need the assistance of their IT team.

Whether you’re in education and need to support students on and off campus, or a large business with team members distributed across the world, the Intel vPro platform delivers the performance, security, manageability and stability organisations need to meet the demands of today.


April 14, 2022

Source: Tim Guido, Corporate Director, Performance Improvement, Sanmina | Manufacturing Tomorrow

Many industries such as the automotive, medical and semiconductor sectors must comply with third party standards to control processes, reduce risk and ensure quality during the manufacturing of products. Over the past few years, organizations have begun to embrace an even broader mindset towards risk-based thinking, motivated by the growing discipline of regulatory compliance and an increasing number of unexpected global events that have impacted their operations.

When manufacturers implement a new production line, they examine all of the possible risks and scenario-plan every reasonable action that could prevent or mitigate a risk if it materializes. Some call this business continuity, risk management, or disaster management. Nothing has brought these concerns more top of mind than the past few years of dealing with trade wars, the pandemic, extreme weather, and supply chain shortages.

Risk Management Checklist

Risk management is about the practical investment in preventative and mitigating measures that can be tapped during a crisis. There are four main areas to consider when building a risk management or business continuity program:

Risk Assessment. The first action to take is to put a stake in the ground in terms of what could go wrong at each plant, whether it happens to be a fire, earthquake, chemical spill, cyber attack or something else. This will vary for different regions. The possibility of an earthquake impacting operations in California is much higher than in Alabama. An act of terrorism may be more likely to happen in certain countries versus others.

Let’s say a manufacturer is setting up a new production line. The first step would be to complete a risk assessment form that spans different areas – Employee Health and Safety, Finance, HR, IT, Operations and Program Management. Based on the guidelines provided, the person completing the form identifies possible issues and potential impacts – this could be anything from production or shipment delays to impacts on employee health and safety. Then a threat rating is assigned between 1 and 5 for both occurrence and impact, with 5 being a critical situation that warrants the most attention.
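The threat-rating step above can be sketched in a few lines: each risk is scored as occurrence times impact on the 1-to-5 scales just described, and the highest-scoring risks are handled first. The risks and ratings below are illustrative, not drawn from a real assessment.

```python
# Illustrative risk register: occurrence and impact are rated 1-5,
# with 5 being the most likely / most severe. Values are made up.
risks = [
    {"area": "Operations", "risk": "earthquake",     "occurrence": 2, "impact": 5},
    {"area": "IT",         "risk": "cyber attack",   "occurrence": 4, "impact": 4},
    {"area": "EHS",        "risk": "chemical spill", "occurrence": 1, "impact": 3},
]

def rank_risks(risks):
    """Score each risk as occurrence x impact and sort highest first."""
    for r in risks:
        r["score"] = r["occurrence"] * r["impact"]
    return sorted(risks, key=lambda r: r["score"], reverse=True)

for r in rank_risks(risks):
    print(f'{r["risk"]:<15} score={r["score"]}')
```

In this toy register the cyber attack (4 × 4 = 16) outranks the earthquake (2 × 5 = 10), which is exactly the kind of counterintuitive result a structured rating is meant to surface.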

Then, preventative and mitigating measures are determined based on factors that could contribute to the adverse event. Are there inadequate controls, lack of monitoring or poor training that might add to a problem? Could these areas be improved to either prevent or lessen the potential impact? While an earthquake isn’t preventable, an organization could retrofit their building and upgrade IT systems to ensure that people are safe and can still perform their job duties if a temblor hits.

Incident Management & Business Recovery Planning. Building out all of the details for incident management and business recovery is essential, if not glamorous. A contact list needs to be created so that a key lead can contact all affected employees, customers, and suppliers during a disaster. Getting customers and suppliers in the loop early could enable them to become part of the solution. A call notification script should be drafted that provides consistent communications to impacted parties, and decisions need to be made about who gets told what in certain scenarios. Checklists and drills should also be included, such as how to safely clear employees from a facility.


Internal Audit Checks. Once the business recovery plan is drafted, it should be audited annually. This ensures that the right action plans are included and the correct project leaders and backup leads are identified and verified. Each section, such as advanced planning, revision histories and recovery priorities, must be evaluated as part of the audit to ensure that there’s a solid plan in place and that all participants are properly trained and on board with the approach.


Test Exercise. Every plant should run through a drill for their highest-priority emergencies to evaluate preparedness. They must be able to prove that there’s an IT data recovery capability and have a rough idea of what can be done for a customer within the scope of the test exercise. If work needs to be moved to another location, are they able to confirm the backup plant’s capacity and a timeline for the transfer? Do they understand the open orders that need to be transferred? How does the detailed recovery plan work in terms of getting operations back up and running? For each action, what would be considered a success, and how soon? A sample objective would be to get access to a site within one hour and have at least 80 percent of the team notified within the hour of a situation.
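The sample objective above lends itself to a simple pass/fail check. A minimal sketch, using made-up drill results and the thresholds from the text (site access within one hour, at least 80 percent of the team notified within the hour):

```python
def drill_passed(access_minutes, notified, team_size,
                 access_target_min=60, notify_target=0.80):
    """Return True if the drill met both sample objectives.

    access_minutes: minutes it took to gain access to the site.
    notified / team_size: how many of the team were reached in the hour.
    """
    access_ok = access_minutes <= access_target_min
    notify_ok = (notified / team_size) >= notify_target
    return access_ok and notify_ok

# Hypothetical drill results:
print(drill_passed(access_minutes=45, notified=17, team_size=20))  # True (85% notified)
print(drill_passed(access_minutes=45, notified=12, team_size=20))  # False (only 60% notified)
```

Tracking each objective separately, rather than a single pass/fail, is what makes the post-drill improvement step described next actionable.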

After running a drill, the team should evaluate its effectiveness, make improvements to the plan, and communicate them. If actions such as getting access to the site, notifying the team, understanding orders, getting alternate facility confirmation, and knowing the right customer contacts can all be demonstrated during the exercise, then the majority of functional activities are ready to go, even if an actual crisis requires some fine-tuning of processes. Just like the overall plan, the test runs should be performed at least once a year to verify their continued relevance.

Preventing Problems Before They Happen

At Sanmina, we are seeing increasingly robust expectations for risk management programs across the markets that we serve. Customers are more eager to get involved in understanding the details of these plans than ever before and are considering them an integral part of their manufacturing strategy.

It’s vitally important to understand potential risks, evaluate the scope and effectiveness of an action plan, and cultivate a living risk management process that is periodically reviewed and updated. It’s also critical to instill a preventative mindset within an organization’s culture, because it’s not always an intuitive thought process. While fixing a problem in the moment may be beneficial, it’s important to build a mindset that goes beyond corrective thinking to a proactive approach: identifying potential root causes in order to prevent or lessen problems before they occur.

The post Four Steps to Reducing Manufacturing Risk appeared first on Internet of Business.