TOGAF is a longstanding, popular, open enterprise architecture framework that is widely used by large businesses, government agencies, nongovernmental organizations, and defense agencies. Offered by The Open Group, TOGAF advises enterprises on how best to implement, deploy, manage, and maintain enterprise architecture.

The Open Group offers several options for those who want to be certified in TOGAF 9. Earning a certification is a great way to demonstrate to employers that you are qualified to work in an enterprise architecture environment using the TOGAF 9.2 Standard framework. TOGAF is designed to help organizations implement enterprise architecture management using a standardized framework that remains highly customizable to a company’s specific enterprise architecture needs.

Earlier this year, The Open Group announced the latest update to the TOGAF framework, releasing TOGAF Standard, 10th Edition. The update brought changes to the structure of the framework, making it easier to navigate and more accessible for companies to adopt and customize for their unique business needs. Currently, The Open Group offers certifications only for TOGAF 9, but there are plans to release new certifications that align with TOGAF Standard, 10th Edition. This article will be updated once the new certifications are announced in the coming months.

TOGAF 9 Foundation and TOGAF 9 Certified

The TOGAF 9 Foundation and TOGAF 9 Certified are the two main certifications for the TOGAF Standard, Version 9.2 offered by The Open Group. To earn your TOGAF 9 Foundation certification, you’ll need to pass the TOGAF 9 Part 1 exam. To earn the next level of certification, the TOGAF 9 Certified designation, you’ll need to pass the TOGAF 9 Part 2 exam.

You can opt to take each exam separately at different times, or you can take the TOGAF 9 Combined Part 1 and Part 2 exam to earn both certifications at once. There are no prerequisites for the TOGAF 9 Part 1 exam, but you will need to pass the first exam to qualify for the TOGAF 9 Part 2 examination.  

The TOGAF 9 Part 1 exam covers basic and core concepts of TOGAF, introduction to the Architecture Development Method (ADM), enterprise continuum and tools, ADM phases, ADM guidelines and templates, architecture governance, architecture viewpoints and stakeholders, building blocks of enterprise architecture, ADM deliverables, and TOGAF reference models.

The TOGAF 9 Part 2 exam has eight scenario-based questions that demonstrate your ability to apply your foundational knowledge from the first exam to real-world enterprise architecture situations. The eight questions are drawn from topics such as ADM phases, adapting the ADM, architecture content framework, TOGAF reference models, and the architecture capability framework.  

TOGAF Business Architecture Level 1

The Open Group offers the TOGAF Business Architecture Level 1 certification, which focuses on validating your knowledge and understanding of business modeling, business capabilities, TOGAF business scenarios, information mapping, and value streams.

Integrating Risk and Security Certification

The Open Group also offers the Integrating Risk and Security Certification, which validates that you understand several security and risk concepts as they apply to enterprise architecture. The certification covers important security and risk concepts as they relate to the TOGAF ADM, information security management, enterprise risk management, other IT security and risk standards, enterprise security architecture, and the importance of security and risk management in an organization. There are no prerequisites for the exam, but to earn the certification you will need to complete three hours of training through an accredited course and then pass the assessment. There is also a self-study option via an e-learning platform.

TOGAF certification training

The Open Group offers self-study material, including two study guides that cover TOGAF 9 Foundation as well as learning outcomes beyond the foundational level. Those who prefer prep courses can search through accredited offerings; some courses also include the examination at the end, depending on the program.

TOGAF certification cost

For the English TOGAF exams, the current rate is US$360 for Part 1, US$360 for Part 2, or US$550 for the Combined Part 1 and Part 2 exam. The English TOGAF Business Architecture Level 1 exam is priced at US$315. There is currently no pricing information available for the Integrating Risk and Security Certification.

It’s also important to note that exam pricing varies depending on where you’re located and the language of the exam. To see rates for other countries and languages, check The Open Group’s website.

TOGAF Role-Based Badges

The Open Group also offers TOGAF Role-Based Badges designed for IT professionals seeking to demonstrate enterprise architecture knowledge and skills. The Badges are digital and verified by “secure metadata” as a way for you to display achievements and awards online, and for organizations to easily verify certifications of potential candidates. They can also be used to identify various milestones as you work your way toward a full certification. Badges can be used in email signatures, on your personal website or resume, and on your social media accounts.

The Open Group offers three categories of Role-Based Badges for TOGAF 9.2: Enterprise Architecture, Enterprise Architecture Modeling, and Digital Enterprise Architecture. Under each category, there are two types of badges you can earn, Team Member or Practitioner. You’ll earn different badges depending on which certifications you complete or how far along you are in completing the TOGAF 9.2 Certified credential.


Sometimes — even in IT — slowing down can pay off big-time. For Wolverine Worldwide, COVID-19 proved the point.

While many companies accelerated their cloud migrations in response to the pandemic, the 140-year-old boot and shoe manufacturer halted many of its technology projects to focus on keeping the business afloat. That decision left the company a bit behind where it wanted to be but better positioned to succeed, thanks to the availability of more advanced cloud services and tools to ease its transformation to a hybrid cloud infrastructure, says Wolverine CIO Dee Slater.

“When the pandemic hit, we took a bit of a pause,” Slater says. “Now we are well under way and really focused on modernizing the way we work, streamlining and simplifying work across platforms and having actionable data right at our fingertips.”

The Rockford, Mich.-based company, best known for its boots and Hush Puppies, and more recently for its acquisitions of the Merrell, Sperry, Saucony, and Sweaty Betty brands, originally launched its cloud journey in 2019.

But when COVID hit, the company faced several crises as it tried to keep its business flowing, and the cloud transformation got pushed back. “We had some tough decisions to make,” Slater says. “There was no playbook for this pandemic. We were just starting on our data journey.”

Prioritizing the supply chain

One such crisis centered on Wolverine’s supply chain. As was the case for most manufacturers, supply chain issues quickly materialized for Wolverine in the early days of the pandemic, with lead times for shoes doubling, in part because getting materials across borders had become arduous. This was especially challenging for Slater, who is not only Wolverine’s CIO but also its senior vice president of supply chain and shared services.

“It’s one of those CIO-plus roles that people talk about,” says Slater, who has served as CIO since 2006. “The plus part of my role includes logistics, distribution, trade compliance, or the movement of our goods, our contact centers, and our project management office.”

Wolverine’s footwear is sold in 170 countries and is manufactured in Vietnam, Indonesia, Hong Kong, and China. The company also operates distribution centers in California, Michigan, and Kentucky, and in Ontario, Canada.

Issues surrounding Wolverine’s global manufacturing and distribution footprint became instantly business critical. Vietnam, for instance, was closed for two months during the pandemic, Slater says. To optimize business upon the country’s reopening, Wolverine IT built supply chain data models using Microsoft Power BI to prioritize which brands to manufacture first once factories resumed operation.

Wolverine, which Slater says relies on SAP and Microsoft for its core infrastructure, is now “well along the journey in supply chain data” using SAP SAC analytics but has yet to embark on other aspects of its digital transformation, such as building a data lake and embracing AI, she says. Slater’s current plan is to complete Wolverine’s hybrid cloud based on Microsoft Azure, an effort that is at the halfway mark.

Wolverine relies on seven data centers, two of which are run by third-party partners. The on-premises data center at its corporate headquarters connects to Azure and other public clouds, Slater says, adding that Wolverine has moved roughly 500 services from on-prem to the Azure cloud.

While the pandemic slowed down Wolverine’s hybrid cloud transformation, the abundance of new tools and programs now available to aid in migration is making the delay more palatable, she says. For example, Wolverine has signed on to Rise with SAP, a new SAP service that is minimizing the migration challenges of moving Wolverine’s on-premises SAP stack to Azure. The company is also using Azure Arc, a Microsoft cloud management tool that launched just months before the pandemic and now enables Wolverine to build applications that can run across data centers, edge, and multicloud environments.

Tools like Arc give Wolverine a “single pane of glass to manage its processes,” the CIO says. “When we talk about modernizing work at Wolverine, it does not happen with the flip of a switch. So we actually had to manage the combination of our on-premises legacy solutions, as we have our modern new ways of working in the cloud.”

Wolverine’s cloud push is largely about “getting our data in the cloud so we can connect in ways we have not done before, making that data even more powerful,” Slater says. To that end, the company plans to start creating a data lake in 2023, she says.

The manufacturer will also continue developing its SAP SAC analytics infrastructure and begin building machine learning models to generate insights and directives based on data that resides in the data lake, she says.

“Step one is streamlining and standardizing all the data so we have common process and practice that we can apply machine learning to, for example, and get rid of some of those mundane tasks as we build out our data lake,” Slater says. “We will then start applying AI to help inform, predict, and actually start making some of those decisions for us. We are not currently doing that.”

While the pandemic delay has increased the urge to move quickly now, Slater still wants to ensure new technologies, such as machine learning, are adopted in the appropriate manner. “We are not a software company. We’re a shoe company … buying a business process,” she says. “Keeping that in mind as we’re implementing it is critical.”

And if there’s anything Wolverine’s pandemic experience has reinforced, it’s that technology can be a driver of business, but in the end, business needs come first.

“It’s about prioritizing which business solutions go first,” the CIO adds. “This pandemic has been a blessing and a curse — a curse for the obvious reasons, but the rapid adoption of technology has a lot of people knocking on my door ready to use systems and data. And it’s about prioritizing who goes first and in what order.”

That’s quite a cultural shift from just a few years ago when even IT staff were wary of change. Slater recalls the negative reaction employees had when the company initially brought Microsoft in for a three-day cloud certification class in 2017 — a perception the CIO was able to smooth over before Wolverine’s cloud project started.

The IT team was apprehensive because many thought moving to the cloud would eliminate their jobs.

“But by the end of that session, everyone understood that this was the way of the future; this was going to allow us to scale up and scale down and be a great career path,” Slater says. “I’m happy to report that we’ve had minimal turnover because our team saw the vision. They’re sticking with us.”

In this way, the delay of the cloud migration during the pandemic actually helped with employee retention, the CIO maintains. It also helped the company attract additional talent, Slater says.

“Employees get excited to have a CEO who talks about technology and investing in technology and modernizing the work technologies,” she says. “Certainly, recruiting has its challenges, but I think we have an advantage versus our peers for both attracting and retaining employees” thanks to the company’s ongoing transformation, and the opportunities ahead.


The software supply chain is, as most of us know by now, both a blessing and a curse.

It’s an amazing, labyrinthine, complex (some would call it messy) network of components that, when it works as designed and intended, delivers the magical conveniences and advantages of modern life: Information and connections from around the world plus unlimited music, videos, and other entertainment, all in our pockets. Vehicles with lane assist and accident avoidance.

Home security systems. Smart traffic systems. And on and on.

But when one or more of those components has defects that can be exploited by criminals, it can be risky and dangerous. It puts the entire chain in jeopardy. You know — the weakest link syndrome. Software vulnerabilities can be exploited to disrupt the distribution of fuel or food. They can be leveraged to steal identities, empty bank accounts, loot intellectual property, spy on a nation, and even attack a nation.

So the security of every link in the software supply chain is important — important enough to have made it into a portion of President Joe Biden’s May 2021 executive order, “Improving the Nation’s Cybersecurity” (also known as EO 14028).

It’s also important enough to have been one of the primary topics of discussion at the 2022 RSA Conference in San Francisco. Among dozens of presentations on the topic at the conference was “Software supply chain: The challenges, risks, and strategies for success” by Tim Mackey, principal security strategist within the Synopsys Cybersecurity Research Center (CyRC).

Challenges and risks

The challenges and risks are abundant. For starters, too many organizations don’t always vet the software components they buy or pull from the internet. Mackey noted that while some companies do a thorough background check on vendors before they buy — covering everything from the executive team, financials, ethics, product quality, and other factors to generate a vendor risk-assessment score — that isn’t the norm.

“The rest of the world is coming through, effectively, an unmanaged procurement process,” he said. “In fact, developers love that they can just download anything from the internet and bring it into their code.”

While there may be some regulatory or compliance requirements on those developers, “they typically aren’t there from the security perspective,” Mackey said. “So once you’ve decided that, say, an Apache license is an appropriate thing to use within an organization, whether there are any unpatched CVEs [Common Vulnerabilities and Exposures] associated with anything with an Apache license, that’s somebody else’s problem. There’s a lot of things that fall into the category of somebody else’s problem.”

Then there’s the fact that the large majority of the software in use today — nearly 80% — is open source, as documented by the annual “Open Source Security and Risk Analysis” (OSSRA) report by the Synopsys CyRC.

Open source software is no more or less secure than commercial or proprietary software and is hugely popular for good reasons — it’s usually free and can be customized to do whatever a user wants, within certain licensing restrictions.

But, as Mackey noted, open source software is generally made by volunteer communities — sometimes very small communities — and those involved may eventually lose interest or be unable to maintain a project. That means if vulnerabilities get discovered, they won’t necessarily get fixed.

And even when patches are created to fix vulnerabilities, they don’t get “pushed” to users. Users must “pull” them from a repository. So if they don’t know they’re using a vulnerable component in their software supply chain, they won’t know they need to pull in a patch, leaving them exposed. The infamous Log4Shell group of vulnerabilities in the open source Apache logging library Log4j is one of the most recent examples of that.
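As a rough illustration of what that “pull” looks like in practice, here is a minimal Python sketch that asks the public OSV (Open Source Vulnerabilities) database whether known advisories exist for a specific package version. The endpoint and payload follow OSV’s published /v1/query API, and the Log4j coordinates are used purely as an example.

```python
import json
import urllib.request

# Query the public OSV database (https://osv.dev) for known vulnerabilities
# affecting a specific package version.
OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, ecosystem: str, version: str) -> list[str]:
    payload = json.dumps({
        "package": {"name": name, "ecosystem": ecosystem},
        "version": version,
    }).encode("utf-8")
    req = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        result = json.load(resp)
    # Each entry carries an identifier such as a CVE or GHSA ID.
    return [vuln["id"] for vuln in result.get("vulns", [])]

if __name__ == "__main__":
    # A Log4j release from before the Log4Shell fixes, purely as an example.
    ids = known_vulnerabilities(
        "org.apache.logging.log4j:log4j-core", "Maven", "2.14.1"
    )
    print(f"{len(ids)} known advisories:", ", ".join(ids) or "none")
```

Running a check like this against every resolved component, not just the ones a team consciously chose, is what turns “somebody else’s problem” into a visible work item.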

Keeping track isn’t enough

To manage that risk requires some serious effort. Simply keeping track of the components in a software product can get very complicated very quickly. Mackey told of a simple app he created that had eight declared “dependencies” — components necessary to make the app do what the developer wants it to do. But one of those eight had 15 dependencies of its own. And one of those 15 had another 30. By the time he got several levels deep, there were 133 — for just one relatively simple app.

Also, within those 133 dependencies were “multiple instances of code that had explicit end-of-life statements associated with them,” he said. That means those components were no longer going to be maintained or updated.
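The fan-out Mackey describes is simply the transitive closure of a dependency graph. The toy sketch below, built on an invented graph, shows how a handful of declared dependencies resolves into a much larger set, which is the inventory a supply chain program actually has to track.

```python
from collections import deque

# A made-up dependency graph: each package maps to the packages it pulls in.
DEPENDENCY_GRAPH = {
    "my-app": ["web-framework", "logger", "http-client"],
    "web-framework": ["template-engine", "router", "json-parser"],
    "http-client": ["tls-lib", "dns-resolver"],
    "template-engine": ["html-escaper"],
    "tls-lib": ["crypto-core"],
}

def resolved_dependencies(root: str) -> set[str]:
    """Breadth-first walk of the graph, collecting every transitive dependency."""
    seen: set[str] = set()
    queue = deque(DEPENDENCY_GRAPH.get(root, []))
    while queue:
        pkg = queue.popleft()
        if pkg in seen:
            continue
        seen.add(pkg)
        queue.extend(DEPENDENCY_GRAPH.get(pkg, []))
    return seen

if __name__ == "__main__":
    declared = DEPENDENCY_GRAPH["my-app"]
    resolved = resolved_dependencies("my-app")
    print(f"declared: {len(declared)}, resolved: {len(resolved)}")  # 3 vs 10
```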

And simply keeping track of components is not enough. There are other questions organizations should be asking themselves, according to Mackey. They include: Do you have secure development environments? Are you able to bring your supply chain back to integrity? Do you regularly test for vulnerabilities and remediate them?

“This is very detailed stuff,” he said, adding still more questions. Do you understand your code provenance and what the controls are? Are you providing a software Bill of Materials (SBOM) for every single product you’re creating? “I can all but guarantee that the majority of people on this [conference] show floor are not doing that today,” he said.

But if organizations want to sell software products to the U.S. government, these are things they need to start doing. “The contract clauses for the U.S. government are in the process of being rewritten,” he said. “That means any of you who are producing software that is going to be consumed by the government need to pay attention to this. And it’s a moving target — you may not be able to sell to the U.S. government the way that you’re used to doing it.”

Even SBOMs, while useful and necessary — and a hot topic in software supply chain security — are not enough, Mackey said.
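For readers who have never seen one, an SBOM is essentially a structured inventory of the components in a piece of software. The sketch below emits a deliberately stripped-down, CycloneDX-style JSON document for a couple of hypothetical components; SBOMs produced by real tooling carry far more detail, such as hashes, licenses, and dependency relationships.

```python
import json
import uuid
from datetime import datetime, timezone

# Hypothetical components; real SBOM tooling would discover these automatically.
COMPONENTS = [
    {"type": "library", "name": "log4j-core", "version": "2.17.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"},
    {"type": "library", "name": "jquery", "version": "3.6.0",
     "purl": "pkg:npm/jquery@3.6.0"},
]

def minimal_sbom(app_name: str, app_version: str) -> dict:
    """Build a stripped-down, CycloneDX-style SBOM document."""
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "serialNumber": f"urn:uuid:{uuid.uuid4()}",
        "version": 1,
        "metadata": {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "component": {"type": "application", "name": app_name,
                          "version": app_version},
        },
        "components": COMPONENTS,
    }

if __name__ == "__main__":
    print(json.dumps(minimal_sbom("example-app", "1.0.0"), indent=2))
```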

Coordinated efforts

“Supply chain risk management (SCRM) is really about a set of coordinated efforts within an organization to identify, monitor, and detect what’s going on. And it includes the software you create as well as acquire, because even though it might be free, it still needs to go through the same process,” he said.

Among those coordinated efforts is the need to deal with code components such as libraries within the supply chain that are deprecated — no longer being maintained. Mackey said developers who aren’t aware of that will frequently send “pull requests” asking when the next update on a library is coming.

And if there is a reply at all, it’s that the component is end-of-life, has been for some time, and that the only thing to do is move to another library.

“But what if everything depends on it?” he said. “This is a perfect example of the types of problems we’re going to run into as we start managing software supply chains.”

Another problem is that developers don’t even know about some dependencies they’re pulling into a software project, and whether those might have vulnerabilities.

“The OSSRA report found that the top framework with vulnerabilities last year was jQuery [a JavaScript library]. Nobody decides to use jQuery; it comes along for the ride,” he said, adding that the same is true of others, including Lodash (a JavaScript library) and Spring Framework (an application framework and inversion of control container for the Java platform). “They all come along for the ride,” he said. “They’re not part of any monitoring. They’re not getting patched because people simply don’t know about them.”
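One cheap way to surface those along-for-the-ride components is to diff what developers declared against what the build actually resolved. The sketch below does that with simple set arithmetic over hypothetical manifest and lockfile contents; in practice the two sets would be parsed from real build metadata such as a package.json and its lockfile, or a pom.xml and a resolved dependency report.

```python
# Hypothetical inputs: what developers declared vs. what the build resolved.
DECLARED = {"web-framework", "logger", "http-client"}
RESOLVED = {
    "web-framework", "logger", "http-client",
    "jquery", "lodash", "spring-core",   # pulled in transitively
    "tls-lib", "dns-resolver",
}

def unmonitored_components(declared: set[str], resolved: set[str]) -> set[str]:
    """Components nobody chose explicitly; the ones most likely to go unpatched."""
    return resolved - declared

if __name__ == "__main__":
    for name in sorted(unmonitored_components(DECLARED, RESOLVED)):
        print("came along for the ride:", name)
```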

Building trust

There are multiple other necessary activities within SCRM that, collectively, are intended to make it much more likely that a software product can be trusted. Many of them are contained in the guidance on software supply chain security issued in early May by the National Institute of Standards and Technology in response to the Biden EO.

Mackey said this means that organizations will need their “procurement teams to be working with the government’s team to define what the security requirements are. Those requirements are then going to inform what the IT team is going to do — what a secure deployment means. So when somebody buys something you have that information going into procurement for validation.”

“A provider needs to be able to explain what their SBOM is and where they got their code because that’s where the patches need to come from,” he said.

Finally, Mackey said the biggest threat is the tendency to assume that if something is secure at one point in time, it will always be secure.

“We love to put check boxes beside things — move them to the done column and leave them there,” he said. “The biggest threat we have is that someone’s going to exploit the fact that we have a check mark on something that is in fact a dynamic something — not a static something that deserves a check mark. That’s the real world. It’s messy — really messy.”

How prepared are software vendors to implement the security measures that will eventually be required of them? Mackey said he has seen reports showing that for some of those measures, the percentage is as high as 44%. “But around 18% is more typical,” he said. “People are getting a little bit of the message, but we’re not quite there yet.”

So for those who want to sell to the government, it’s time to up their SCRM game. “The clock is ticking,” Mackey said.



The data center has traditionally been the central spine of your IT strategy: the core hub and home for applications, routing, firewalls, processing, and more. However, trends such as the cloud, mobility, and pandemic-induced home working are upending everything.

Now, the enterprise relies on distributed workplaces and cloud-based resources, such as home workers and cloud platforms, that generate traffic beyond the corporate network. Conventional networking models that backhaul traffic to the data center are seen as slow, resource-intensive, and inefficient. Ultimately, the internet is the new enterprise network.

If the core data center is the spine, then the wide-area network (WAN) has to be the arms, right? During the pandemic, a survey revealed that 52% of U.S. businesses have adopted some form of SD-WAN technology. Larger enterprises, like national (79%) and global (77%) businesses, have adopted SD-WAN at much higher rates than smaller firms.

But operational visibility is an essential component of an SD-WAN implementation because, unlike MPLS links, the internet is a diverse and unpredictable transport. SD-WAN orchestrators’ application policies and automated routing decisions make day-to-day operations easier, but they can also degrade overall end-to-end performance: an automated corrective action can leave an application running slower than it did before, and such issues are very difficult to troubleshoot without additional insight or validation.

Visibility beyond the edge

Just think about the number of possible paths data can take to be delivered end to end. Take the example of an organization with 100 branch offices, two data centers, two cloud providers, 15 SaaS applications, and four ISPs: there are more than 7,000 possible network paths in use at any time. If the network team sticks to traditional network monitoring, limited to branch offices and data centers, overall visibility is reduced to less than 2% of the estate (102 of more than 7,000 paths). The lack of visibility beyond the edge of the enterprise network can leave network operations entirely out of control.
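The exact count depends on how the topology is modeled, but one plausible reading that reproduces those figures is every branch reaching every data center, cloud provider, and SaaS application over any of the four ISPs, as the quick arithmetic below shows.

```python
# One plausible way to reproduce the article's figures (the model is an assumption).
branches = 100
data_centers = 2
cloud_providers = 2
saas_apps = 15
isps = 4

destinations = data_centers + cloud_providers + saas_apps   # 19
possible_paths = branches * destinations * isps              # 7,600 paths
traditionally_monitored = branches + data_centers            # 102 monitored points

coverage = traditionally_monitored / possible_paths
print(f"possible paths: {possible_paths}")                   # 7600
print(f"visibility with edge-only monitoring: {coverage:.1%}")  # ~1.3%
```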

Additionally, most SD-WAN vendors only measure and provide visibility from customer-edge to customer-edge – basically, the edge network devices and the secure tunnels that connect data centers to branch offices, banks, retail stores, etc. In order to deliver a reliable and secure user experience over this new and complex network architecture, network professionals need end-to-end visibility; not just edge-to-edge.

Experience-Driven NetOps is an approach that extends visibility beyond the edge of the data center and into branch sites, remote locations, ISP and cloud networks, and remote users, providing visibility from an end-user perspective (wherever users connect to the enterprise) rather than from the controller-only edge perspective. Furthermore, there are thousands more network devices behind the edge of an SD-WAN deployment. Do you really want another tool to manage those devices too?

Make no mistake, if you’re deploying new software-defined technologies but still lack visibility into the end-user experience delivered by these architectures, you are only solving half of the problem to deliver the network support your business expects. Today, reliable networks need to be experience-proven. And network operations teams have to become experience-driven.

You can learn more about how to tackle the new challenges of user experience in this eBook, Guide to Visibility Anywhere. Read now and discover how organizations can create network visibility across the network edge and beyond.


In the coming years, NASA’s James Webb telescope will peer toward the edge of the observable universe, allowing astronomers to search for the very earliest stars and galaxies, formed more than 13 billion years ago.

That’s quite a contrast to today’s network operations visibility, which can sometimes feel like the lens cap has been left on the telescope. Explosive growth in new technology adoption, mounting complexity, and the rising use of internet and cloud networks have created unprecedented blind spots in how we monitor network delivery.

These visibility gaps obscure knowledge about critical application and service performance. They can also hide security threats, making them more difficult to detect. Ultimately, these gaps can impact customer experience, revenue growth, and brand perception.

A global survey by Dimensional Research finds that 81% of organizations have network blind spots. More than 60% of larger companies state they have 50,000 or more network devices, and 73% indicate it is growing increasingly difficult to manage their network. According to the study, removing network blind spots and increasing monitoring coverage would improve security, reliability, and performance.

Dimensional Research also reports that current monitoring and operations solutions are ill-equipped for the tasks at hand and unable to support a massive influx of new technology over the next two years, leading to slower adoption and deployment with increased business risk.

Without solutions that deliver expanded visibility into remote locations, unmanaged networks, and traffic patterns, IT can become overly dependent on end users to report service issues after these problems have already impacted performance. And no organization wants that to happen.

Performance insights across the edge infrastructure and beyond 

The massive adoption of SaaS and Cloud apps has made the job of IT even harder when it comes to understanding the performance of business functions. With no visibility into the internet that delivers these apps to users, IT is forced to resort to status pages and support tickets to determine if an outage does or does not affect users.

Now is the time to rethink network operations and evolve traditional NetOps into Experience-Driven NetOps. You need to extend visibility beyond the edge of the enterprise network to internet monitoring and bring modern capabilities like end-user experience monitoring, active testing of network delivery, and network path tracing into the network operations center. Only by being equipped with such capabilities can organizations ensure networks are experience-proven and network operations teams are experience-driven.  As a result, they gain credibility and build confidence in business users while delivering hybrid working and cloud transformations.
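As a flavor of what “active testing of network delivery” can mean at its simplest, the sketch below times TCP connection setup from wherever it runs to a few placeholder endpoints. Real Experience-Driven NetOps tooling layers much more on top, such as path tracing, per-hop metrics, and application response times.

```python
import socket
import time

# Placeholder endpoints; in practice these would be the SaaS and cloud services
# your users actually depend on.
TARGETS = [("example.com", 443), ("example.org", 443)]

def tcp_connect_time_ms(host: str, port: int, timeout: float = 3.0) -> float | None:
    """Measure TCP connection setup time, a rough proxy for network path latency."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # unreachable within the timeout

if __name__ == "__main__":
    for host, port in TARGETS:
        elapsed = tcp_connect_time_ms(host, port)
        status = f"{elapsed:.1f} ms" if elapsed is not None else "unreachable"
        print(f"{host}:{port} -> {status}")
```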

Take the real-world example of a major oil and gas services company. Most employees were set to work from home at the outset of the pandemic, and the organization needed to scale up its WAN infrastructure from 10,000 to 60,000 users in just a few weeks. The challenge was to see into VPN gateways, ISP links, and internet router performance to manage this increase in use. By standardizing on a modern network monitoring platform, the company gained unified performance and capacity analytics that enabled it to make the right upgrade decisions and increase the number of remote workers sixfold.

You can learn more about how to tackle the challenges of network visibility in this new eBook, Guide To Visibility Anywhere. Read now and discover how organizations can create network visibility anywhere.
