It feels like just yesterday that we were promised that cloud servers cost just pennies. You could rent a rack with the spare change behind the sofa cushions and have money left for an ice cream sandwich.

Those days are long gone. When the monthly cloud bill arrives, CFOs are hitting the roof. Developer teams are learning that the pennies add up, sometimes faster than expected, and it’s time for some discipline.

Cloud cost managers are the solution. They track all the bills, allocating them to the teams responsible for running up the charges. That way the group that added too many fancy features demanding too much storage and server time will have to account for its profligacy, and the disciplined programmers who keep RAM and disk usage in check can be rewarded.

Smaller teams with simple configurations can probably get by with the stock services of the cloud companies. Cost containment is a big issue for many CIOs now and the cloud companies know it. They’ve started adding better accounting tools and alarms that are triggered before the bills reach the stratosphere. See Azure Cost Management, Google Cloud Cost Management, and AWS Cloud Financial Management tools for the big three clouds.

Once your cloud commitment gets bigger, independent cost management tools start to become attractive. They’re designed to work with multiple clouds and build reports that unify the data for easy consumption. Some even track the machines that run on premises so you can compare the cost of renting versus building out your own server room.

In many cases, cloud cost managers are part of a larger suite designed to not just watch the bottom line but also enforce other rules such as security. Some are not marketed directly as cloud control tools but have grown to help solve this problem. Some tools for surveying enterprise architectures or managing software governance now track costs at the same time. They can offer the same opportunities for savings that purpose-built cloud cost tools do — and they help with their other management chores as well.

What follows is an alphabetical list of the best cloud cost tracking tools. The area is rapidly expanding as enterprise managers recognize they need to get a grip on their cloud bills. All of them can help govern the burgeoning empire of server instances that may stretch around the world.

Anodot

The first job for Anodot’s collection of cloud monitoring tools is to track the flow of data through the various services and applications. If there’s an anomaly or hiccup that will affect users, it will raise a flag. Tracking the cost of instances and pods across your multiple clouds is part of this larger job. The dashboard produces a collection of infographics that make it possible to study each microservice or API and determine just how much it costs to keep it running in times of both high and low demand. This granular detail gives you the ability to spot the expensive workloads and find a way to prune them.

Standout features:

- Integrated with a broader monitoring system to deliver better customer experience at a reasonable price
- Available as a white-label platform for integration and reselling

AppDynamics

Tracking and reining in containers in a Kubernetes environment is the goal for Cisco’s AppDynamics, which absorbed the cost-optimization technology formerly known as Replex. The tool is now part of a larger system that watches clusters in public clouds or running locally to ensure they are performing correctly. Tracking costs is just one small part of a system that is constantly gathering statistics and watching for anomalies. One important reporting process charges back costs to the teams responsible for them so everyone can understand what’s creating the monthly bill. AppDynamics also offers a proprietary machine learning engine to turn historical data into a plan for efficient deployment. A policy control layer offers granular restrictions to ensure teams have access to what they need but are locked out of what they don’t.

Standout features:

- Integrates cost management with general application monitoring
- Connects user experiences and business results for every layer of the software stack

Apptio Cloudability

Apptio makes a large collection of tools for managing IT shops, and Cloudability is its tool for handling cloud costs. The tool breaks down the various cloud instances in use, allocating them to your teams for accounting purposes. Ideally, teams will be able to control their own costs and predict future usage with the reports and dashboards on offer. Cloudability’s True Cost Explorer, for instance, offers pivotable charts to switch between aggregated variables to establish accurate plans and predict future usage. Cloudability integrates with ticketing tools such as Jira for planning and with tracking tools such as PagerDuty or Datadog for monitoring.

Standout features:

- Planning future purchasing of reserved instances to lock in savings for constant demand
- Allocating upcoming workloads to available instances of the right capabilities

CloudAdmin

Dashboards created by CloudAdmin are simple and direct. The tool tracks cloud usage and offers suggestions for rightsizing your servers or converting them to reserved instances. Server instances can be allocated to teams and then tracked with a budget. If spending crosses a defined line, alerts are integrated with email or other common communication tools such as PagerDuty to notify personnel of the need for attention.
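The alerting loop CloudAdmin automates can be reduced to a short sketch. Everything here (team names, limits, the 80% warning threshold) is invented for illustration rather than drawn from CloudAdmin's API; a real deployment would wire the alert list into email or PagerDuty.

```python
# Hypothetical budget-alert loop: track spend per team and flag any
# team that crosses a warning fraction of its monthly budget.
from dataclasses import dataclass

@dataclass
class TeamBudget:
    name: str
    monthly_limit: float  # dollars
    spend_to_date: float = 0.0

    def record_spend(self, amount: float) -> None:
        self.spend_to_date += amount

    def over_threshold(self, fraction: float = 0.8) -> bool:
        # Alert once spend crosses a fraction of the budget, leaving
        # time to react before the limit itself is breached.
        return self.spend_to_date >= fraction * self.monthly_limit

def check_budgets(teams):
    # Return the teams that need an alert.
    return [t.name for t in teams if t.over_threshold()]

teams = [TeamBudget("search", 10_000.0), TeamBudget("billing", 5_000.0)]
teams[0].record_spend(8_500.0)  # 85% of budget
teams[1].record_spend(1_200.0)  # 24% of budget
alerts = check_budgets(teams)
```

The warning threshold sits below 100% on purpose: the point of these tools is to interrupt spending before the bill arrives, not after.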

Standout features:

- Carefully filtered data feeds extract the key details about spending to save time wading through too much information
- Automated alerts can stop runaway spending when it crosses thresholds

CloudCheckr

CloudCheckr focuses on controlling cloud costs and security. The tool is part of NetApp’s Spot constellation for cloud management and is responsible for cost management by tracking standard spending events, such as consumption, forecasting, and the rightsizing of instances. The tool supports reselling for companies that add their own layers to commodity cloud instances. A white label option makes it possible to pass through all the reporting and charts to help your customers understand their billing. There’s also a focus on supporting public clouds used by governments.

Standout features:

- Monitor compliance with privacy regulations by tracking security configuration
- Rightsize reserved instances by tracking baseline consumption

Datadog

Watching over cloud machines, networks, serverless platforms, and other applications is the first job for Datadog’s collection of tools. Tracking cloud costs is just one part of the workload. Its telemetry gathers data about performance and cost, and Datadog builds this into a dashboard to help organizations understand both application cost and performance. The goal is to facilitate decisions about application performance with an eye on the price of delivering it. Understanding the tradeoff can lead to cost savings.

Standout features:

- Broad suite for infrastructure monitoring across multiple clouds
- Monitoring of real and simulated users makes it easier to deliver a better user experience

Densify

Densify builds a collection of tools for managing cloud infrastructure by juggling containers and VMware instances. The best way to run your clusters, according to Densify, is to keep precise, meticulous records of load and then use this data to scale up and down quickly. Densify’s optimizers focus on cloud resources such as instances, Kubernetes clusters, and VMware machines. Densify suggests this approach improves scaling by 30%. Densify’s FinOps tool generates extensive reports to help keep application developers and bean counters happy.
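In that spirit, a minimal rightsizing pass might look like the sketch below: record CPU samples, take a high percentile, and pick the smallest instance that covers it with headroom. The instance catalog, prices, and thresholds are all invented for the sketch.

```python
# Illustrative rightsizing from load records: size each workload to its
# sustained load rather than its provisioned capacity.
import math

# Hypothetical catalog: (name, vCPUs, $/hour), smallest first.
CATALOG = [("small", 2, 0.05), ("medium", 4, 0.10), ("large", 8, 0.20)]

def percentile(samples, p):
    # Nearest-rank percentile over raw CPU samples (in vCPUs used).
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

def recommend(cpu_samples, headroom=1.2):
    # Size to the 95th-percentile load plus headroom, so rare spikes
    # don't force a permanently oversized instance.
    need = percentile(cpu_samples, 95) * headroom
    for name, vcpus, _price in CATALOG:
        if vcpus >= need:
            return name
    return CATALOG[-1][0]  # nothing bigger available

# A workload provisioned as "large" that mostly idles around 1.5 vCPUs:
samples = [1.5] * 19 + [7.9]  # one brief spike
choice = recommend(samples)
```

The single spike lands above the 95th percentile, so the recommendation shrinks the workload to the smallest instance that covers its sustained load.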

Standout features:

- Track loads on machines to ensure rightsized instance allocation
- Build reports summarizing consumption to help developers rightsize hardware

Flexera One

The Flexera One cloud management suite tackles many cloud management tasks, such as tracking assets or organizing governance to orchestrate control. An important section of the suite is devoted to controlling the budget. The tool offers multicloud accounting for tracking spending with elaborate reporting broken down by team and project. Flexera One also offers suggestions for optimizing consumption by targeting wasteful allocations, and it provides automated systems to put these observations into practice. The tool also integrates machine learning and artificial intelligence to help analyze consumption patterns across multiple clouds.

Standout features:

- Integrates reporting across multiple clouds to help business groups understand costs
- Identifies options for rightsizing instances and eliminating wasteful spending

Harness

DevOps teams can use the CI/CD pipeline that’s the central part of Harness to automate deployment and then, once the code is running, track usage to keep budgets in line. Harness’s cost management features watch for anomalies compared to historic spending, generating alerts for teams. A feature for automatically stopping unused instances can work with spot machines, effectively unlocking their potential for cost savings while working around their ephemeral nature.
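That anomaly check can be pictured as a simple statistical test against recent history. The window and 3-sigma threshold below are arbitrary choices for the sketch, not Harness's actual method.

```python
# Toy anomaly detector: flag a day's spend that sits far above the
# recent mean, measured in standard deviations.
import statistics

def is_anomalous(history, today, z_threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return today != mean  # flat history: any change is an anomaly
    return (today - mean) / stdev > z_threshold

daily_spend = [100, 104, 98, 102, 101, 99, 103]  # last week, in dollars
```

A production system would add seasonality (weekday versus weekend spend) and per-team baselines, but the core comparison is the same.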

Standout features:

- Deep integration with the development pipeline to make cost savings part of the software creation process
- Automated compliance integrates cost management with regulatory and governance work

Kubecost

Teams that rely on Kubernetes to deploy pods of containers can install Kubecost to track spending. It will work across all major (and minor) clouds as well as pods hosted on premises. Costs are tracked as Kubernetes adjusts to handle loads and are presented in a unified set of reports. Large jumps or unexpected deployments can trigger alerts for human intervention.
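A stripped-down model of that allocation, assuming cost is apportioned by CPU requests only (real tools also weigh memory, GPU, storage, and idle capacity, and the price here is invented):

```python
# Split a node's hourly price across the pods scheduled on it in
# proportion to their CPU requests, then roll the result up by namespace.
from collections import defaultdict

NODE_PRICE_PER_HOUR = 0.40  # hypothetical on-demand price

pods = [
    # (namespace, cpu_request in vCPUs)
    ("checkout", 2.0),
    ("checkout", 1.0),
    ("search", 1.0),
]

def allocate(pods, node_price):
    # Each pod's share of the node bill is its fraction of total requests.
    total_request = sum(cpu for _, cpu in pods)
    costs = defaultdict(float)
    for namespace, cpu in pods:
        costs[namespace] += node_price * cpu / total_request
    return dict(costs)

hourly = allocate(pods, NODE_PRICE_PER_HOUR)
```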

Standout features:

- Optimized for tracking how Kubernetes deployments affect costs
- Dynamic recommendations track opportunities for lowering spending

ManageEngine

DevOps teams rely on ManageEngine to track a range of potential issues, from security to API endpoint overload. Its CloudSpend tool extracts data from cloud provider bills and aggregates it to provide a useful, actionable level of understanding. Costs can be charged back to specific teams, and ManageEngine’s predictive analytics will plan reserved instances based on historical data. It is currently available for AWS and Azure.
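The chargeback step can be pictured as a simple roll-up of billing line items by team tag. The record layout below is invented for the sketch; real exports such as the AWS Cost and Usage Report carry many more columns.

```python
# Minimal chargeback roll-up: aggregate billing line items by team tag.
from collections import defaultdict

billing_lines = [
    # (team_tag, service, cost in dollars), all made up
    ("payments", "ec2", 120.0),
    ("payments", "s3", 30.0),
    ("ml", "ec2", 400.0),
]

def chargeback(lines):
    totals = defaultdict(float)
    for team, _service, cost in lines:
        totals[team] += cost
    return dict(totals)

per_team = chargeback(billing_lines)
```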

Standout features:

- Spend Analysis drills down into the data at a granular level of detail
- Multi-currency support for worldwide deployment

Nutanix Cost Management

Organizations with large multicloud deployments can use Nutanix Cost Management (formerly Beam) to track costs across a range of installations, including private cloud machines hosted on premises. The tool can be customized to generate accurate cost estimates of private installations by taking into account heating and cooling costs, hardware, and data center rent. This makes it easier to make accurate decisions about allocating workloads to the lowest-cost deployment. The process can be automated to simplify management and forward-planning for budgeting for reserved instances.
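A back-of-the-envelope version of that private-cloud costing, with every figure below a placeholder rather than real pricing:

```python
# Amortize hardware and add power, cooling, and data-center rent, then
# compare the monthly total with a cloud quote.
def monthly_on_prem_cost(hardware_price, amortization_months,
                         power_and_cooling, rent):
    return hardware_price / amortization_months + power_and_cooling + rent

on_prem = monthly_on_prem_cost(
    hardware_price=36_000,   # servers, switches, storage (placeholder)
    amortization_months=36,  # three-year depreciation
    power_and_cooling=250,
    rent=400,
)
cloud_quote = 1_900          # hypothetical equivalent reserved-instance bill

cheaper = "on-prem" if on_prem < cloud_quote else "cloud"
```

The interesting part is rarely the arithmetic; it is getting honest inputs for power, cooling, and rent, which is exactly the metering these tools automate.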

Standout features:

- Metering of private clouds builds direct insight into the costs of on-prem hardware
- Budget alerting and dynamic optimization help rightsize consumption to minimize costs

ServiceNow

Teams running extensive collections of microservices rely on ServiceNow to manage parts of the stack. Many of its tools are customer-facing solutions such as IT service automation, but there are also backend tools for optimizing IT operations by intelligently managing performance. Newer AIOps features bring artificial intelligence to these operational chores as well.

Standout features:

- Broad selection of tools for tracking and optimizing IT assets
- Risk management well integrated with governance tools

Turbonomic

IBM relies on Turbonomic to deliver an AI-powered solution for managing deployment to match application demand with infrastructure. The tool will automatically start, stop, and move applications in response to demand. The data driving these decisions is stored in a warehouse to train the AI that will be making future decisions. The latest version includes a new dashboard and reporting framework based on Grafana.

Standout features:

- Full-stack integrated graphics to understand demand and cost across an application
- Designed to automate resource allocation to save engineering teams from the chore

VMware Aria CloudHealth

VMware built Aria Cost and Aria Automation under the CloudHealth brand to manage deployments across all major cloud platforms as well as hybrid clouds. The cost accounting module tracks spending, allocating it to business teams while optimizing deployments to minimize costs. The modeling layer can build out amortization and consumption schedules to forecast future demand. Financial managers and development teams can drill down into these forecasts to focus on specific applications or constellations of services. The larger product line integrates the cost management with automated deployment and security enforcement.

Standout features:

- Spending governance ensures that teams are following individual budgets for resource consumption
- Integrates cloud costs with business metrics and key performance indicators to understand the connection between computational costs and the bottom line

Yotascale

Much of the responsibility for cloud costs lies with the engineers who write and deploy the code. They make the granular decisions to start up more instances and store more data. Yotascale wants to put more information in their hands to enable them to optimize their hardware consumption, with tools designed to track machines and allocate their costs directly to the teams responsible. The forecasting tools can also spot anomalies, raising alerts to prevent any surprise bills at the end of the month.

Standout features:

- Engineer-targeted tools deliver budget information directly to the teams building the software and starting up the machines
- Automated tracking delivers forecasts and flags problems and overconsumption

Zesty

While many cloud managers offer insights through sophisticated reports, Zesty is designed to automate the work of spinning up and shutting down extra instances. A key feature enables it to watch the spot market for deeply discounted instances with excess capacity on the cloud. It offers a tool informed by artificial intelligence algorithms that can work with AWS’s API to make decisions that keep just enough machines running to satisfy users without breaking the budget. The tool can even control the amount of disk space allocated to individual machines while buying and selling capacity on the spot and reserved-instance marketplaces.

Standout features:

- Deep management of details such as storage space allocation to minimize costs
- Integration with spot market to take advantage of the lowest possible costs

As CIO of United Airlines, Jason Birnbaum is laser-focused on using technology and data to enable the company’s 86,000 employees to create as seamless a customer travel experience as possible. “Our goal is to improve the entire travel process from when you plan a trip to when you plan the next trip,” says Birnbaum, who joined the airline in 2015 and became CIO last July.

One opportunity for improvement was with customers who are frustrated about arriving at the gate after boarding time and unable to board because the doors are shut, while the plane is sitting on the ground. “The situation is not only frustrating to our customers, but also to our employees,” Birnbaum says. “We are in the business of getting people to where they want to go. If we can’t help them do that, it drives us crazy.”

So, Birnbaum and his team built ConnectionSaver, an analytics-driven engine that assesses arriving connections, calculates a customer’s distance from the gate, looks at all other passenger itineraries, where the plane is going, and whether the winds will allow the flight to make up time, and then makes a real-time determination about waiting for the connecting passenger. ConnectionSaver communicates directly with the customer that the agents are holding the plane.
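Reduced to a toy rule, the hold-or-depart decision might look like the sketch below. The real ConnectionSaver weighs far more signals in real time; every field, threshold, and the decision logic itself are hypothetical.

```python
# Hypothetical hold-or-depart rule for a tight connection.
from dataclasses import dataclass

@dataclass
class Connection:
    minutes_to_gate: int         # passenger's estimated walk time
    minutes_until_pushback: int  # time left before scheduled departure
    recoverable_minutes: int     # delay favorable winds can make up en route
    downstream_misses: int       # passengers who would misconnect if we wait

def should_hold(c: Connection) -> bool:
    wait_needed = c.minutes_to_gate - c.minutes_until_pushback
    if wait_needed <= 0:
        return True  # passenger makes it without holding
    # Hold only if the delay can be recovered in the air and waiting
    # doesn't break more itineraries than it saves.
    return wait_needed <= c.recoverable_minutes and c.downstream_misses == 0

hold = should_hold(Connection(minutes_to_gate=12, minutes_until_pushback=5,
                              recoverable_minutes=10, downstream_misses=0))
```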

ConnectionSaver is a great example of how a “simple” solution resulted from a tremendous amount of cultural, organizational, and process transformation, so I asked Birnbaum to describe the transformation “chapters” behind this kind of innovation.

Chapter 1: IT trust and credibility

“For years, it was common for technology organizations to have too little credibility to drive transformation,” says Birnbaum. “That was our story, and we worked very diligently to change the narrative.”

Key to changing the narrative was giving senior IT leaders end-to-end business process ownership responsibilities. “We started moving toward a process ownership model several years ago, and since then, we’ve made significant improvements in technology reliability, user satisfaction, and our employees’ trust in the tools,” Birnbaum says. “This is important because every transformation chapter depends on use of the technology. If our employees don’t trust the tools, we will never get to transformation.”

A process could be gate management, buying a ticket, managing baggage, or boarding a plane, each of which runs on multiple systems. “Before we moved from systems to process ownership, people would see that their system is up, so they would assume the problem belonged to someone else,” says Birnbaum. “In that model, no one was looking out for the end user. Now, we have collaborative conversations about accountability for business outcomes, not system performance.”

Chapter 2: Improving the employee experience

Like every company, United Airlines has been working to improve the customer experience for years, but more recently has expanded its “design thinking” energies to tools for employees. To facilitate this expansion, Birnbaum grew the Digital Technology employee user experience team from three people to 60, all acutely focused on integrating the employee experience into the customer experience.

The employee user experience team spends time with gate agents, contact centers, and airplane technicians to identify technology to help employees help customers. “The goal of the employee user experience team is to provide tools that are intuitive enough for the employee to create a great customer experience, which in turn, creates a great employee experience,” says Birnbaum. “It is important for companies to invest in change management, but you need less change management if you give employees tools that they really want to use.”

For example, the user experience team learned that flight attendants felt ill equipped to improve the experience of a customer once the customer is on the plane. If a customer agreed to change seats or check a bag, for example, there was little a flight attendant could do to improve the experience in real-time. “All they had was a book of discount coupons, but the customer had to call a contact center with a code to get the discount,” says Birnbaum. “The reward required five more steps for the customer; it did not feel immediate.”

So, the team developed a tool called “In the Moment Care,” which uses an AI engine to make reward recommendations to the flight attendant who can offer compensation, miles, or discounts in any situation. The customer can see the reward on his or her phone right away, which immediately improves both the customer and employee experience. “We knew the customers would be happier with having their problem solved in real-time, but we were surprised by how much the flight attendants loved the tool,” says Birnbaum.  “They said, ‘I get to be the hero. I get to save the day.’”

The employee user experience team then turned its attention to the process of “turning the plane,” which includes every task that happens from the minute a plane lands to when it takes off again. It involves at least 35 employees in a 30-minute window.  

Take baggage, for example. Traditionally, during the boarding process, if the overhead bins were starting to fill up at the back of the plane, that flight attendant had no way to communicate to the flight attendant in the front of the plane that it is time to start checking bags. Their only option was to call the captain to call the network center to call the gate to get them to start checking bags.

To create a better communication channel, the employee user experience team worked with the developers to create a new tool, Easy Chat, that puts every employee responsible for a turn activity into one chat room for the length of the turn. “Whether the bins are filling up, or they need more orange juice, or they are waiting for two more customers to come down the ramp, the team can communicate directly to digitally coordinate the turn,” says Birnbaum. “Once the flight is gone, each employee will be connected to another group in another time and place.”

Again, Birnbaum sees that the value of Easy Chat is well beyond the customer experience. “I was just talking to a few flight attendants the other day, who told me that Easy Chat makes them feel like they are a part of a team, rather than a bunch of people with individual roles,” says Birnbaum. “United has a lot of employees, and they don’t work with the same people every day. The new tool allows them to work as a team and to feel connected to each other.”

Chapter 3: Data at scale

To improve the analytics capabilities of the company, Birnbaum and his team built a hub and spoke model with a central advanced analytics team in IT that collaborates with each operational area to develop the right data models. 

“The operating teams live and breathe the analytics — they are the people scheduling the planes — so they are key to unlocking the value of the analytics,” says Birnbaum. “Digital Technology’s job is to collect, structure, and secure the data, and help our operational groups exploit it. We want the data scientists in the operating areas to take the lead on how to make the data valuable at scale.”

For example, United has always worked to understand the cause of a flight delay. Was it a mechanical problem? Did the crew show up late? “The teams would spend hours figuring out whose fault it was, which was a huge distraction from running the operation,” says Birnbaum. To solve this problem, the analytics team, in partnership with the operations team, created a “Root Cause Analyzer” that collects operational data about the flight.

“Now, instead of spending time debating why the flight was delayed, we can quickly see exactly what happened and spend all of our time on process improvement,” says Birnbaum.

With the foundational, employee experience, and data chapters now under way, Birnbaum is thinking about the next chapter: Using technology and analytics to integrate and personalize a customer’s entire travel experience.

“If you have a rough time getting to the airport, but the flight attendant greets you by your name and knows what you ordered, you will still have a good trip,” says Birnbaum.  “It is our job to use technology to help our employees deliver that great customer experience.”


In a bid to help retailers transform their in-store, inventory-checking processes and enhance their e-commerce sites, Google on Friday said that it is enhancing Google Cloud for Retailers with a new shelf-checking, AI-based capability, and updating its Discovery AI and Recommendation AI services.

Shelf-checking technology for inventory at physical retail stores has been a sought-after capability, since low or no inventory is a troubling issue for retailers. Empty shelves cost US retailers $82 billion in missed sales in 2021 alone, according to an analysis from NielsenIQ.

The new AI-based tool for shelf-checking, according to the company, can be used to improve on-shelf product availability, provide better visibility into current conditions at the shelves, and identify where restocks are needed.

The tool, which is built on Google’s Vertex AI Vision and powered by two machine learning models — product recognizer and tag organizer — can be used to identify different product types based on visual imaging and text features, the company said, adding that retailers don’t have to spend time and effort training their own AI models.

Further, the shelf-checking tool can identify products from images taken from a variety of angles and across devices such as a ceiling-mounted camera, a mobile phone or a store robot, Google said in a statement. Images from these devices are fed into Google Cloud for Retailers.

The capability, which is currently in preview and is expected to be generally available to retailers globally in the coming months, will not share any retailer’s imagery and data with Google and can only be used to identify products and tags, the company added.

Improving retail website experience

To help retailers make their online browsing and product discovery experience better, Google Cloud is also introducing a new AI-powered browse feature in its Discovery AI service for retailers.

The capability uses machine learning to select the optimal ordering of products to display on a retailer’s e-commerce site once shoppers choose a category, the company said, adding that the algorithm learns the ideal product ordering for each page over time based on historical data.

As it learns, the algorithm can optimize how and what products are shown for accuracy, relevance, and the likelihood of making a sale, Google said, adding that the capability can be used on different pages within a website.

“This browse technology takes a whole new approach, self-curating, learning from experience, and requiring no manual intervention. In addition to driving significant improvements in revenue per visit, it can also save retailers the time and expense of manually curating multiple ecommerce pages,” the company said in a statement.
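One plausible core of such a self-curating ordering, sketched under an assumption of mine rather than anything Google has disclosed: rank a category's products by a smoothed purchase-per-view rate, so the ordering improves automatically as data accumulates.

```python
# Order a category page by smoothed historical conversion rate. The
# catalog numbers and the prior are invented for the sketch.
def smoothed_rate(purchases, views, prior_rate=0.02, prior_weight=100):
    # Additive smoothing keeps barely-viewed products from jumping to
    # the top (or bottom) purely by chance.
    return (purchases + prior_rate * prior_weight) / (views + prior_weight)

catalog = {
    # product: (purchases, views), made-up history
    "kettle":  (50, 1_000),
    "toaster": (5, 60),
    "blender": (0, 10),
}

def browse_order(catalog):
    return sorted(catalog, key=lambda p: smoothed_rate(*catalog[p]),
                  reverse=True)

ordering = browse_order(catalog)
```

The smoothing prior is what makes the page "self-curating": new products start near the site-wide average and drift up or down as real shopper behavior accumulates.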

The new capability, which has been made generally available, currently supports 72 languages.

Personalized recommendations for customers

In order to help retailers create hyperpersonalization for their online customers, Google Cloud has released a new AI-based capability for its Recommendation AI service for retailers.

The new capability, which is expected to advance Google Cloud’s existing Retail Search service, is underpinned by a product-pattern recognizer machine learning model that can study a customer’s behavior on a retail website, such as clicks and purchases, to understand the person’s preferences.

The AI then moves products that match those preferences up in search and browse rankings for a personalized result, the company said.
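A toy re-ranker in the spirit of what is described above, not Google's actual model: count attribute tags from the shopper's clicks and purchases on the site, then boost products whose tags match. All names and data are invented.

```python
# Hypothetical preference-based re-ranking of search/browse results.
def preference_profile(events):
    # events: attribute tags seen in the shopper's clicks and purchases
    profile = {}
    for tag in events:
        profile[tag] = profile.get(tag, 0) + 1
    return profile

def rerank(results, product_tags, profile):
    # Stable sort: products with equal boosts keep their original
    # (relevance-based) order.
    def boost(product):
        return sum(profile.get(tag, 0) for tag in product_tags[product])
    return sorted(results, key=boost, reverse=True)

# Shopper has clicked "organic" items twice and "gluten-free" once:
profile = preference_profile(["organic", "organic", "gluten-free"])
results = ["cola", "granola", "oat-bar"]  # original relevance order
product_tags = {
    "cola": [],
    "granola": ["organic"],
    "oat-bar": ["organic", "gluten-free"],
}
personalized = rerank(results, product_tags, profile)
```

Note the profile is built only from activity on this one site, mirroring the privacy constraint Google describes.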

“A shopper’s personalized search and browse results are based solely on their interactions on that specific retailer’s ecommerce site, and are not linked to their Google account activity,” Google said, adding that the shopper is identified either through an account they have created with the retailer’s site, or by a first-party cookie on the website.

The capability has been made generally available.


Low-code/no-code visual programming tools promise to radically simplify and speed up application development by allowing business users to create new applications using drag and drop interfaces, reducing the workload on hard-to-find professional developers. A September 2021 Gartner report predicted that by 2025, 70% of new applications developed by enterprises will use low-code or no-code technologies, up from less than 25% in 2020.

Many customers find the sweet spot in combining them with similar low code/no code tools for data integration and management to quickly automate standard tasks, and experiment with new services. Customers also report they help business users quickly test new services, tweak user interfaces and deliver new functionality. However, low code/no code is not a silver bullet for all application types and can require costly rewriting if a customer underestimates the degree to which applications must scale or be customized once they reach production. So there’s a lot in the plus column, but there are reasons to be cautious, too.

Here are some examples of how IT pros are using low code/no code tools to deliver benefits beyond just reducing the workload on professional developers.

Experimenting with user interfaces, delivering new services

Sendinblue, a provider of cloud-based marketing communication software, uses low code workflow automation, data integration and management tools to quickly experiment with features such as new pricing plans, says CTO Yvan Saule. Without low code, which allows him to test new features at 10 to 15% of the cost of traditional development, “we couldn’t afford all the experiments we’re doing,” he says. “If we had to write 15 different pricing systems, it could’ve taken years,” requiring backend fulfillment systems and credit checks for each specific price.

Financial technology and services company Fidelity National Information Services (FIS) uses the low code WaveMaker to develop the user interfaces for the customer-facing applications it builds for its bank customers, using APIs to connect those applications to the customer’s or FIS’ back-end systems. “It’s for speed to market,” says CTO Vikram Ramani. This approach is especially valuable given the shortage of skilled developers. While FIS is still evaluating the results, Ramani says they expect at least a 20 to 30% speed improvement.

Vikram Ramani, Fidelity National Information Services CTO


Among low-code tools, FIS chose WaveMaker because its components seemed more scalable than those of its competitors, and its per-developer licensing model was less expensive than the per-runtime model of other tools.

At Joist, a startup developing financial and sales management software for contractors, CEO Rohan Jawali is using the no code AppMachine platform to quickly build application prototypes, get customer feedback, and then build the actual application in order to skip a few iterations in the design process. At a previous employer, he could spin out a simple information and contact sharing mobile app for construction workers in a couple days compared to several weeks using conventional languages. Tapping the content management system within AppMachine made it easy for users to upload the required data into it, he says.

Process automation and data gathering

At bottled water producer Blue Triton Brands, Derek Lichtenwalner used Microsoft’s low code Power Apps to build an information sharing and communications application for production workers. Before becoming an IS/IT business analyst in early 2022, Lichtenwalner had no formal computer training, but was able to build the app in about a month. It’s now in use at six facilities with about 1,200 users, with plans to expand it to 3,000 at the company’s 27 facilities.

Derek Lichtenwalner, IS/IT business analyst, Blue Triton Brands


Using non-IT users such as Lichtenwalner to develop apps that share information and automate processes is a good option for industries with small staffs of skilled developers, such as construction, where “there are many processes that need to be digitized and low code and no code can make that easier,” says Jawali.

Some vendors and customers are using low code/no code concepts to ease not only app development, but data sharing among apps. Sendinblue, for example, abstracts its application programming interfaces (APIs) into reusable widgets. By mapping its APIs with those used by its customers’ systems, says Saule, his developers “can drag-and-drop the integration functions, and build new capabilities atop that integration.”

Understand your needs

Low code/no code “can be an IT professional’s best friend for traditional, day-to-day challenges such as workflow approvals and data gathering,” says Carol Dann, VP information technology at the National Hockey League (NHL). But she warns against trying to use such a tool for a new application just because it’s enjoyable to use. And choosing the wrong solution can backfire quickly, she adds, with any quick wins erased by the need to code or work around the shortcomings.

“No code is a good fit when you have a simple application architecture and you want to quickly deploy and test an application,” adds Jawali. It’s best, he says, in “innovative experiments where you want a lot of control over the user experience, user interface—something you can’t get with a low code platform. Low code is more useful when you need to introduce more security and links to other applications, but at the cost of greater complexity and the need for more technical developers.”

The drag and drop simplicity of no code makes it difficult to achieve that final percent of differentiation that makes an application useful for a specific organization, and makes low code a better fit, says Mark Van de Wiel, field CTO at data integration software provider Fivetran. That extra customization might include, for example, the use of fuzzy logic to correct misspelled names in a customer database or the business logic needed to calculate a one to 10 score of the effectiveness of various marketing assets based on metrics such as views versus click-throughs.
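The extra customization Van de Wiel describes is exactly the kind of logic that is easy to express in a general-purpose language but hard to build from drag-and-drop widgets. A minimal Python sketch of both examples (the matching cutoff and the scoring formula are illustrative assumptions, not Fivetran’s actual logic):

```python
from difflib import get_close_matches

# Fuzzy-correct a misspelled customer name against a list of known names.
# The 0.8 cutoff is an illustrative threshold, not a recommendation.
def correct_name(name: str, known_names: list[str]) -> str:
    matches = get_close_matches(name, known_names, n=1, cutoff=0.8)
    return matches[0] if matches else name

# Score a marketing asset from 1 to 10 based on views vs. click-throughs.
# The formula is a made-up example of embedded business logic.
def effectiveness_score(views: int, clicks: int) -> int:
    if views == 0:
        return 1
    ctr = clicks / views                       # click-through rate
    return max(1, min(10, round(ctr * 100)))   # 1% CTR -> 1, 10%+ CTR -> 10

print(correct_name("Jonh Smith", ["John Smith", "Jane Doe"]))  # -> John Smith
print(effectiveness_score(views=2000, clicks=120))             # -> 6
```

Once logic like this is required, a platform that allows custom code alongside its visual builder has the advantage.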

Low code/no code apps can also be difficult to scale, says Saule. He recommends limiting their use to non-core strategic parts of the business, and when you want to experiment. “When an experiment succeeds and needs to scale is when it’s time to think about rewriting it,” he says, but in a more traditional yet scalable language.

Just because no code/low code lets more users create their own applications doesn’t mean they should, says Van de Wiel. At Fivetran, an analytics group prepares the marketing effectiveness dashboards used by the rest of the organization. “This allows our employees to focus on what they’re good at,” he says, rather than wasting time creating duplicate dashboards, or wasting money tying up IT infrastructure downloading the same data.

Organizations allowing extensive DIY app development must also ensure this larger pool of non-professional developers follows corporate and regulatory requirements to protect customer data, he says.

Know when to use low code vs no code tools

AppMachine was a good fit for the information-sharing apps Jawali developed because they need to serve only hundreds of users rather than tens of thousands or more. It’s less suited to applications that require high levels of security, he says, because it can’t create the user profiles needed for role-based access. It also lacked support for the APIs required to connect to other applications such as product or issue management systems, or delivery tracking applications.

“Trouble can seep in when folks try to make tools do things they weren’t really meant to do rather than concluding it’s not the right fit,” says Dann. Developing a close relationship with the vendor’s customer success managers and understanding everything the tools have to offer—what the tool does well and what it was never intended to do—are critical to making well-informed decisions. “Saying, ‘No, that’s really not what our platform does’ is a perfectly acceptable answer from a vendor,” she says.

Among other assessment questions, Dann recommends asking if a no code/low code vendor is willing to take part in an information security review, whether their solution has a robust API to integrate with other applications and whether it has an authentication and authorization strategy that fits with the customer’s security processes.

Don’t overlook “off the shelf” low code/no code solutions

If you think of low code/no code as a strategy to simplify development rather than a product category, you can find opportunities to speed app development with software you already own, or with off-the-shelf capabilities in familiar products such as cloud storage. This is especially true for routine, well-defined functions such as managing documents and workflows.

The NHL configured the integration capabilities of its Monday.com workflow management system as a low code/no code solution to replace a legacy system for channeling requests to its creative staff. It deployed this replacement system in about two weeks, compared to at least six months using traditional development methods, says Dann.

The league met another need—alerting its scouts to new information about promising players—by configuring the Box Relay no code workflow automation tool in the Box cloud-based data storage service to automatically assign tasks and move documents to the proper folders once they were processed. “This took the whole process out of email and streamlined workflow,” she says, “by using a ‘what you see is what you get’ interface rather than writing code.”       

Used properly, low code/no code tools can speed new apps to customers and employees while slashing costs and delays. They can also prevent security or policy breaches, or the need to rewrite an application that can’t scale if it’s successful. But like any tool, understanding what they can and can’t do—and what you need them to do—is essential.


In the beginning, no one needed enterprise architecture tools. The back of an envelope would do in the early years. Thomas Watson Jr., one of the leaders of International Business Machines, supposedly said in the 1940s, “I think there is a world market for about five computers.”

The modern enterprise, however, is much different. Some employees have more than five computers on their desk alone. Even a small organization may have thousands of machines; some can easily have more than a million. That’s before the sensors and smart gadgets that make up the internet of things are taken into account. Enterprise architecture (EA) systems track all these machines and the software that runs on them — not to mention how these software layers interact. Because of this, EA tools are the single source of truth for managing these burgeoning virtual worlds.

The state of the EA tool market

The EA marketplace is robust, with several dozen serious competitors. Some specialize in specific platforms or clouds. Others offer deeper integration with business intelligence (BI) and business process management (BPM) software. Some began life as generic modeling software; others were purpose-built for enterprise architecture. All compile long lists of machines and offer various tabular and graphical dashboards for tracking them.

EA systems gather device and software information in a variety of ways. The most manual process involves asking stakeholders and developers to fill out forms detailing who owns what machines. The most automated tools log into a company’s clouds directly, counting the machines themselves. Most use a hybrid approach. Some offer drag-and-drop widgets so that developers, architects, and managers can create a model of all the machines, the software those machines run, and how the data flows from one machine to another.

Everyone from the CIO to the rapid-response team can use the charts and graphs from an EA dashboard to look up processes and track the flow of data. Some watch for failing machines or overloaded pipelines and repair problems by tracing a cascade of failures. Others plan for the future by finding bottlenecks or shortcuts. All rely on the data in the system as a springboard for making quick decisions.

Many of the tools use ArchiMate, an open modeling standard designed to capture much of the complexity of enterprise architecture. It’s built to work closely with the TOGAF open framework. The views and visualizations are created in a manner similar to that of UML (Unified Modeling Language), another generalized approach for visualizing design.

An important consideration is the level of integration with the type of software in your local stack. All of the EA tools support big collections of modules that gather data from particular clouds or operating systems, but some support certain clouds and operating systems better than others.

Another consideration is their ability to connect with computing and service clouds. Some EA tools specialize in cloud instances and compute pods. All cloud companies maintain their own tools for tracking your systems and some EA tools can absorb this data directly. Single-cloud teams often rely more on the cloud’s own management software to track deployments.

Choosing the best solution for your organization requires evaluating the tools’ ability to integrate with your technology stack and then weighing the usefulness of the charts and tables that the software produces. Following is an overview, in alphabetical order, of the top enterprise architecture platforms available today.

Top 20 enterprise architecture tools

Ardoq
Atoll Group SAMU
Avolution Abacus
BOC Group ADOIT
BiZZdesign HoriZZon
Capsifi
Capstera
Clausmark Bee360
EAS
LeanIX Enterprise Architecture Suite
Mega Hopex
Orbus Software iServer
Planview Enterprise One
QualiWare Enterprise Architecture
Quest Erwin Evolve
ServiceNow
Software AG Alfabet
Sparx
Unicom System Architect
ValueBlue Blue Dolphin

Ardoq

Ardoq creates a “digital twin” of your organization by collecting information from a variety of users, developers, and stakeholders throughout your enterprise with a collection of simplified forms. The goal is to engage people who understand the roles of various systems. This data creates a more “democratic” opportunity for everyone to use the visualizations of the network and data flows to support and modernize the systems supporting their roles. The tool offers integrations with the major clouds and an API that’s open to customization through all major languages (Python, C#, Java, etc.).

Major use cases:

Simulating architectural stress when demand spikes in order to plan for major events
Understanding how user behavioral shifts lead to demand changes
Application portfolio management for better strategic planning

Atoll Group SAMU

The Atoll Group created SAMU EA Tool to track enterprise architecture by examining deep connections throughout on-prem hardware, the cloud layer, and BPM tools. It offers integration with monitoring tools (Tivoli, ServiceNow, etc.), cloud virtualization managers (VMware, AWS, etc.), configuration management databases (CA, BMC), and service organization tools (BMC, HP). These integrations feed a centralized data model that is augmented with input from stakeholders.

Major use cases:

Creating visualizations of architecture
Informing the architectural review and strategic planning process
Improving communications by creating a visual foundation for understanding

Avolution Abacus

Avolution created Abacus to deliver a diagram-driven dashboard that captures the range and extent of your enterprise architecture. The core integrations with office tools such as SharePoint, Excel, Visio, Google Sheets, Technopedia, and ServiceNow simplify usage for workflows that use them. The tool has begun adding a machine-learning layer, and users can now experiment with training a model to help answer questions such as which staff member is responsible for a particular system.

Major use cases:

Opening up IT to the larger workplace to empower the entire organization to understand data flows
Using extensive enterprise modeling to build a roadmap for future development
Tracking business metrics that integrate with enterprise performance

BOC Group ADOIT

Helping teams manage resources, predict demand, and track all assets is the goal for BOC Group’s ADOIT, a wide-open tool that maps each system or software package to an object. Data flows between the systems are turned into relationships captured by the objects using a metamodel that can be customized. Business processes are also modeled similarly by a companion product, ADONIS, that is well-integrated. The web-based tool also integrates with tools such as Atlassian’s Confluence for faster data capture and evolution.

Major use cases:

Creating an enterprise-wide model so all team members can understand and improve the stack
Providing full access to EA data while away from a desktop with the ADOIT mobile app
Orchestrating tech mergers and acquisitions through thorough mapping of assets

BiZZdesign HoriZZon

The philosophy from BiZZdesign is to use its tool HoriZZon to model business workflows and the tech stack that supports them. HoriZZon offers a graph-based model for collecting data from all stakeholders so its analytics engine can generate charts illustrating the current state of the system. Managing change and planning for the future is a big emphasis for BiZZdesign and HoriZZon is designed to help manage the risk of redesign. The tool set supports major standards such as ArchiMate, TOGAF, and BPMN.

Main use cases:

Anticipating future demands through predictive modeling
Working with both business and tech architecture to orchestrate workflows
Anticipating issues with risk, security, and governance by modeling data security needs

Capsifi

Jalapeno from Capsifi creates business models in its cloud-based platform. The goal is not just capturing the workflow in a model but enabling leadership to understand enough to drive a transformation through innovation. The software allows users to knit together modeling concepts such as “customer journeys” or “value stream” and to integrate this with data gathered from tools such as Jira. This data can yield metrics reported through a collection of charts and gauges designed to measure progress or “burndown.”

Main use cases:

Planning strategically for the future of the enterprise stacks
Creating a nexus of communication to coordinate all enterprise stakeholders
Managing continuous transformation through Kanban-style tools for agile teams

Capstera

The Business Architecture tool from Capstera focuses on creating a map of the business architecture itself. The value and process maps help define and track the roles of the various sections of the business. The connections to the underlying software and tools can be added along the way.

Main use cases:

Producing reports that explore the business architecture first
Thinking about the connections between people, divisions, and work requirements
Developing strategic summaries for long-term planning

Clausmark Bee360

The team members who turn to Clausmark’s flagship product Bee360 (formerly known as Bee4IT) are coming for a system designed to offer a simple source of truth about a business’s workflow so that many roles can make smarter decisions. The system also offers the ability to track and allocate costs with Bee360 FM (financial management).

Main use cases:

Empowering C-suite level management of projects and asset allocation
Evolving an accurate digital twin for both understanding current data flows and planning future enhancements
Building an integrated knowledge base to track all digital workflows

EAS

The Essential package from EAS or Enterprise Architecture Solutions began as an open-source project and evolved into a commercially-available cloud solution. It creates a metamodel describing the interactions between systems and business processes. The commercial version includes packages for tracking some common business workflows such as data management or GDPR compliance.

Main use cases:

Evaluating the technical maturity of your architecture
Driving security and governance through better tracking of all assets
Controlling and managing complexity as it evolves in your system

LeanIX Enterprise Architecture Suite

The LeanIX collection of tools includes Enterprise Architecture Management and several other tools that perform tasks such as SaaS Management and Value Stream Management to track cloud deployments and the services that run on them. Together, they collect data on your IT infrastructure and present it in a graphical dashboard. The tool is tightly integrated with several major cloud workflow tools, including Confluence, Jira, Signavio, and Lucidchart, an advantage for teams that are already using these to plan and execute development strategies.

Main use cases:

Managing application modernization and cloud migration
Evaluating obsolescence for software services
Controlling and managing cost

Mega Hopex

Mega built the Hopex platform to support modeling enterprise applications while understanding the business workflows they support. Data governance and risk management are a big part of the equation. The tool is built on Azure and relies on a collection of open standards, including GraphQL and REST queries, to gather information from component systems. Reporting is integrated with Microsoft’s Office tools as well as graphical solutions such as Tableau and Qlik.

Main use cases:

Deriving data-driven insights to guide cloud and application deployment
Creating accurate models of usage to understand architectural demands
Capturing an estimate of demand with surveys and other tools to plan for future needs

Orbus Software iServer

Orbus originally built its iServer tools on the Microsoft stack and its product will be familiar and usable to any team that’s tightly aligned with Microsoft’s tools. Reports emerge in Microsoft Word. The data is formatted for Excel. Everything runs well on Azure. The tools aren’t limited to Microsoft environments because its collection of modules support the dominant, open standards for integration to gather data. They’re expanding connections to other reporting platforms such as ServiceNow and Lucidchart.

Main use cases:

Controlling security and compliance risks through better visibility and deeper vision of the underlying architecture
Breaking down information silos in organizations by opening up access and spreading understanding
Managing technical debt and cloud migration

Planview Enterprise One

Planview offers a constellation of products for tracking teamwork, processes, and enterprise architecture. Its enterprise tools are broken into three tiers for Strategic Portfolio Management, Product Portfolio Management, and Project Portfolio Management. Together they create databases of machines and software layers that deliver role-based views for managers and team members. The tool is integrated with common ticket-tracking systems such as Jira for creating workflow analytics and reporting. Planview has integrated tools formerly known as Daptiv, Barometer, and Projectplace that were acquired during a merger.

Main use cases:

Building a long-term, strategic vision for architectural evolution
Tracking development work at a project level and integrating this into any strategy
Focusing on customer experience and product structure to drive change

QualiWare Enterprise Architecture

The Enterprise Architecture tool from QualiWare is part of a broad collection of modeling tools aimed at capturing all business processes. It offers a clean slate for building a digital twin that can document just how a customer’s journey progresses. The company is integrating various artificial intelligence algorithms to enhance both documentation and process discovery.

Main use cases:

Establishing a collaborative ecosystem for business managers to understand the enterprise architecture
Capturing architectural design elements to build a knowledge ecosystem around the stack
Encouraging broad participation in documentation creation and review

Quest Erwin Evolve

Quest’s Erwin Evolve tool began life as a data modeling system and grew to offer enterprise architecture and business process modeling. Teams can use customized data structures to capture the complexity of modern, interlocking software systems and the business workflows that they manage. The web-based tool creates models that generate role-based graphs and other visualizations that form dashboards for all team members. They also have an AI-based modeling tool that can integrate white board sketches. 

Main use cases:

Building a digital twin for strategic modeling of the enterprise data architecture
Understanding customer journeys through outward-facing systems
Tracking services and systems using application portfolio management

ServiceNow

The collection of tools from ServiceNow is broken down to focus on particular areas of the architecture, including Assets, DevOps, Security, and Service. They catalog the various machines and software platforms to map and understand the various workflows and dataflows in the enterprise. Careful analysis of the reports and dashboards can minimize risk and build resilience into the system.

Main use cases:

Tracking the assets, services, and systems defining the enterprise
Uniting governance issues, risk containment, and IT management and security operations in a single platform
Managing customer-facing services by integrating CRM tools

Software AG Alfabet

Alfabet is one of a large collection of products for managing APIs, cloud computing, and applications supporting devices from the internet of things. The system gathers information from a variety of interfaces and produces hundreds if not thousands of potential reports filled with lifecycle maps, charts, rankings, and geographic coordinates. While traditionally Software AG offers tools such as ADABAS that are closely aligned with IBM’s offerings, Alfabet offers tight integration with all major platforms, including collaboration spaces such as Microsoft Teams. Its latest version will include an audible option, Alfabot, that delivers a “conversational user interface.”

Major use cases:

Driving change through tracking projects and running code
Enforcing compliance and software standards
Using reports, maps, and dashboards to implement business-driven change

Sparx

Sparx created four levels of its tool so that teams of any size can tackle projects of varying scale and complexity. All offer UML-based modeling that tracks the parts of increasingly complex systems. A simulation engine enables war gaming and understanding how failure can propagate and cascade, an essential part of disaster planning. Sparx recognizes that models can be built for a variety of reasons, from pure analysis to software development to strategic planning, and it provides hundreds of pre-built design patterns to guide modeling.

Major use cases:

Simulating changes in demand and load to understand and project future needs
Tracing problems and potential issues through a matrix of connections
Generating documentation
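The failure-propagation analysis that Sparx’s simulation engine performs can be understood as a graph traversal: given which systems depend on which, a failure in one component reaches everything downstream. A toy Python sketch of the idea (the component names and dependency links are invented for illustration):

```python
from collections import deque

# Edges point from a system to the systems that depend on it.
# These components and links are invented for illustration only.
dependents = {
    "database":   ["inventory", "billing"],
    "inventory":  ["storefront"],
    "billing":    ["storefront", "reporting"],
    "storefront": [],
    "reporting":  [],
}

def failure_cascade(start: str) -> set[str]:
    """Return every system affected if `start` fails (breadth-first search)."""
    affected, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for downstream in dependents.get(node, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

print(sorted(failure_cascade("database")))
# -> ['billing', 'database', 'inventory', 'reporting', 'storefront']
```

Commercial simulation engines layer probabilities, timing, and capacity onto this, but the underlying cascade is the same traversal.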

Unicom System Architect

One of the offerings from Unicom’s TeamBlue is System Architect, a tool that uses a metamodel to gather as much data as possible about the running systems automatically, sometimes by reverse engineering the data flows. This system-wide data model can be presented in user-customizable dashboards for team members of all roles. Forward-looking managers can also run simulations to help optimize resource allocation.

Major use cases:

Asking “what if” questions about the architectural model
Building a meta-model of data and systems
Creating migration and transformation plans

ValueBlue Blue Dolphin

ValueBlue’s BlueDolphin gathers data in three ways. First, it depends on standards-driven automation (ITSM, SAM) to import basic data. Second, it works with architects and systems designers in file formats such as ArchiMate or BPMN. Finally, it surveys other stakeholders with questionnaires driven by customizable templates. All of this is delivered in a visual environment that tracks the historical evolution of systems.

Major use cases:

Gathering system-wide data from internal and external stakeholders through automated and form-based collection
Generating forward-looking reports to monitor and drive change
Nurturing cooperation and collaboration through open data reporting


What is project management?

Project management is a business discipline that involves applying specific processes, knowledge, skills, techniques, and tools to successfully deliver outcomes that meet project goals. Project management professionals drive, guide, and execute company-identified value-added goals by applying processes and methodologies to plan, initiate, execute, monitor, and close all activities related to a given business project in alignment with the organization’s overall strategic objectives.

Project management steps

Project management is broken down into five phases, or life-cycle stages. Each phase intersects with any of 10 knowledge areas: integration, scope, time, cost, quality, human resources, communication, risk, procurement, and stakeholder management. The phases, processes, and associated knowledge areas provide an organized approach for project managers and their teams to work through projects, according to the following outline:

Initiating phase:

Integration management: Develop project charter.
Stakeholder management: Identify stakeholders.

Planning phase:

Integration management: Develop project management plan.
Scope management: Define scope, create work breakdown structure (WBS), gather requirements.
Time management: Plan and develop schedules and activities, estimate resources and timelines.
Cost management: Estimate costs, determine budgets.
Quality management: Identify quality requirements.
Human resource management: Plan and identify human resource needs.
Communications management: Plan stakeholder communications.
Risk management: Perform qualitative and quantitative risk analysis, plan risk mitigation strategies.
Procurement management: Identify and plan required procurements.
Stakeholder management: Plan for stakeholder expectations.

Execution phase:

Integration management: Direct and manage all project work.
Quality management: Perform all aspects of managing quality.
Human resource management: Select, develop, and manage the project team.
Communications management: Manage all aspects of communications.
Procurement management: Secure necessary procurements.
Stakeholder management: Manage all stakeholder expectations.

Monitoring and controlling phase:

Integration management: Monitor and control project work and manage any necessary changes.
Scope management: Validate and control the scope of the project.
Time management: Control the project schedule.
Cost management: Control project costs.
Quality management: Monitor quality of deliverables.
Communications management: Monitor all team and stakeholder communications.
Procurement management: Keep on top of any necessary procurements.
Stakeholder management: Take ownership of stakeholder engagements.

Closing phase:

Integration management: Close all phases of the project.
Procurement management: Close out all project procurements.
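Internally, many project management tools represent an outline like the one above as a simple mapping from phase to knowledge areas to activities. A minimal Python sketch, abbreviated to just the initiating and closing phases (the other phases follow the same shape):

```python
# The phase/knowledge-area outline above, modeled as a nested data structure.
# Only two of the five phases are shown, purely for illustration.
PHASES = {
    "initiating": {
        "integration": ["Develop project charter"],
        "stakeholder": ["Identify stakeholders"],
    },
    "closing": {
        "integration": ["Close all phases of the project"],
        "procurement": ["Close out all project procurements"],
    },
}

def activities(phase: str) -> list[str]:
    """Flatten one phase's activities across all of its knowledge areas."""
    return [task for tasks in PHASES[phase].values() for task in tasks]

print(activities("initiating"))
# -> ['Develop project charter', 'Identify stakeholders']
```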

Stakeholder expectations

Stakeholders can be any person or group with a vested stake in the success of a project, program, or portfolio, including team members, functional groups, sponsors, vendors, and customers. Expectations of all stakeholders must be carefully identified, communicated, and managed. Missing this can lead to misunderstandings, conflict, and even project failure.

Here are some tips for managing stakeholder expectations.

Assemble a team specific to project goals, ensuring team members have the right mix of skills and knowledge to deliver.
Leave sufficient time in advance of a project for key individuals to delve into and discuss issues and goals before the project begins.
Ensure the project timeline and scheduled tasks are realistic.

Project scope

During the planning phase, all project details must be solidified, including goals, deliverables, assumptions, roles, tasks, timeline, budget, resources, quality aspects, terms, and so on. The customer and key stakeholders work together to solidify and agree on the scope before the project can begin. The scope guides the project work and any changes to the scope of the project must be presented and approved as a scope change request.

Project budgets

Budgets play a large role in whether a project progresses and whether it can be completed. Few companies have an unlimited budget, so the first thing project stakeholders look at in determining whether a project succeeded or failed is the bottom line. This fact fuels the pressure project leaders and their teams face with each passing day. As such, effective budget management is a primary area of focus for project managers who value their careers. The following are five strategies for maintaining control of your project budget before it succumbs to whopping cost overruns:

Understand stakeholder’s true needs and wants
Budget for surprises
Develop relevant KPIs
Revisit, review, re-forecast
Keep everyone informed and accountable

Project management methodologies

Most projects follow a specific methodology chosen to ensure successful outcomes based on a range of factors. As such, choosing the right project management methodology (PMM) is a vital step for success. There are many, often overlapping approaches to managing projects, the most popular of which include waterfall, agile, hybrid, critical path method, and critical chain project management. Agile, which includes subvariants such as Lean and Scrum, is increasing in popularity and is being utilized in virtually every industry. Originally adopted by software developers, agile uses short development cycles called sprints to focus on continuous improvement in developing a product or service.

PMO vs. EPMO

Successful organizations codify project management efforts under an umbrella organization, either a project management office (PMO) or an enterprise project management office (EPMO).

A PMO is an internal or external group that sets direction and maintains and ensures standards, best practices, and the status of project management across an organization. PMOs traditionally do not assume a lead role in strategic goal alignment.

An EPMO has the same responsibilities as a traditional PMO, but with an additional key high-level goal: to align all project, program, and portfolio activities with an organization’s strategic objectives. Organizations are increasingly adopting the EPMO structure, whereby project, program, and portfolio managers are involved in strategic planning sessions right from the start to increase project success rates.

PMOs and EPMOs both help organizations apply a standard approach to shepherding projects from initiation to closure. In setting standard approaches, PMOs and EPMOs offer the following benefits:

ground rules and expectations for the project teams
a common language for project managers, functional leaders, and other stakeholders that smooths communication and ensures expectations are fully understood
higher levels of visibility and increased accountability across an entire organization
increased agility when adapting to other initiatives or changes within an organization
the ready ability to identify the status of tasks, milestones, and deliverables
relevant key performance indicators for measuring project performance

Project management roles

Depending on numerous factors such as industry, the nature and scope of the project, the project team, company, or methodology, projects may need the help of schedulers, business analysts, business intelligence analysts, functional leads, and sponsors. Here is a comparison of the three key roles within the PMO or EPMO; all are in high demand due to their leadership skill sets.

Project manager: Plays the lead role in planning, executing, monitoring, controlling, and closing of individual projects. Organizations can have one or more project managers.

Program manager: Oversees and leads a group of similar or connected projects within an organization. One or more project managers will typically report to the program manager.

Portfolio manager: This role is at the highest level of a PMO or EPMO and is responsible for overseeing the strategic alignment and direction of all projects and programs. Program managers will typically report directly to the portfolio manager.

Project management certification

Successful projects require highly skilled project managers, many with formal training or project management certifications. Some may hold the Project Management Professional certification or other credentials from PMI or another organization. Project management certifications include:

PMP: Project Management Professional
CAPM: Certified Associate in Project Management
PgMP: Program Management Professional
PfMP: Portfolio Management Professional
CSM: Certified Scrum Master
CompTIA Project+ Certification
PRINCE2 Foundation/PRINCE2 Practitioner
CPMP: Certified Project Management Practitioner
Associate in Project Management
MPM: Master Project Manager
PPM: Professional in Project Management

Project management tools

Project management software and templates increase team productivity and effectiveness and prepare the organization for changes brought about by high-impact projects. CIO.com has compiled the ultimate project management toolkit as well as some open-source project management tools to help you plan, execute, monitor, and successfully polish off your next high-impact project.

Project management software falls into multiple categories. Some tools are categorized as project management software; others are more encompassing, such as project portfolio management (PPM) software. Some are better suited for small businesses and others for larger organizations. Project managers will also often use task management, schedule management, collaboration, workflow management, and other types of tools. These are just a few examples of the project management software and tools available to help simplify project management.

Popular project management tools include:

Asana
Changepoint
Clarizen
Planview
Mavenlink
Trello
Wrike

Project management skills

Effective project managers need more than technical know-how. The role also requires several non-technical skills, and it is these softer skills that often determine whether a project manager — and the project — will be successful. Project managers must have these seven non-technical skills: leadership, motivation, communication, organization, prioritization, problem-solving, and adaptability. It’s also beneficial to have a strategic mindset, have change management and organizational development expertise, agility, and conflict resolution capabilities, among other skills.

Project management jobs and salaries

By 2027, employers will need 87.7 million individuals working in project management-oriented roles, according to PMI, but these hires won’t all carry project manager titles. While the more generic titles are project manager, program manager, or portfolio manager, each role may differ depending on industry and specialization. There are also coordinators, schedulers, and assistant project managers, among other roles.

Project managers have historically garnered high salaries, often upwards of six figures, depending on the role, seniority, and location. Indeed provides a searchable list of job salaries, including these annual salaries companies are offering for project management roles:

Project manager: Base salary $85,311, bonus $13,500
Program manager: $85,796
Portfolio manager: $100,742
Software/IT project manager: $106,568
Project administrator: $62,093
Project planner: $69,528
Project controller: $90,342
Document controller: $74,899
Project leader: $130,689
Program/project director: $101,126
Head of program/project: $128,827


Companies today face disruptions and business risks the likes of which haven’t been seen in decades. The enterprises that ultimately succeed are the ones that have built up resilience.

To be truly resilient, an organization must be able to continuously gather data from diverse sources, correlate it, draw accurate conclusions, and in near-real time trigger appropriate actions. This requires continuous monitoring of events both within and outside an enterprise to detect, diagnose, and resolve issues before they can cause any damage.  

This is especially true when it comes to enterprise procurement. Upwards of 70% of an organization’s revenue can flow through procurement. This highlights the critical need to detect potential business disruptions, spend leakages (purchases made at sub-optimal prices by deviating from established contracts, catalogs, or procurement policies), non-compliance, and fraud. Large organizations can have a dizzying array of data related to thousands of suppliers and accompanying contracts.

Yet amassing and extracting value from these large amounts of data is difficult for humans to keep up with, as the number of data sources and volume of data only continues to grow exponentially. Current data monitoring and analysis methods are no longer sufficient.

“While periodic spend analysis was okay up until a few years ago, today it’s essential that you do this kind of data analysis continuously, on a daily basis, to spot issues and address them quicker,” says Shouvik Banerjee, product owner for ignio Cognitive Procurement at Digitate.

Enterprises need a tool that continuously monitors data so they can use their funds more effectively. Companies across industries have found success with ignio Cognitive Procurement, an AI-based analytics solution for procure-to-pay. The solution screens purchase transactions to detect and predict anomalies that increase risk, spend leakage, cycle time, and non-compliance.

For example, the product flags purchase requests with suppliers who have a poor track record of compliance with local labor laws. Likewise, it flags urgent purchases whose fulfillment is likely to be delayed based on patterns observed in similar transactions in the past.  It also flags invoices that need to be prioritized to take advantage of early payment discounts.

“It’s a system of intelligence versus other products in the market, which are systems of record,” says Banerjee. Not only does ignio Cognitive Procurement analyze an organization’s array of transactions, it also takes into account relevant market data on suppliers and categories on a daily basis.

ignio Cognitive Procurement is unique for its ability to correlate what’s currently happening in the market with what’s going on inside an organization, and it makes specific recommendations to stakeholders. For example, the solution can simplify category managers’ work, helping them source the best deals for their company, or make decisions such as whether to place an order now or hold off for a month.

Charged with finding the best suppliers and monitoring their success within the context of the market, category managers work better and smarter when they can tap into ignio Cognitive Procurement.

ignio Cognitive Procurement also identifies other opportunities to save money and improve the effectiveness of procurement. For instance, the solution proactively makes business recommendations that seamlessly take into account not only price, but also a variety of key factors like timeliness, popularity, external market indicators, suppliers’ market reputation, and their legal, compliance, and sustainability records.

“Companies also use the software to analyze that part of spend that’s not happening through contracts,” says Banerjee, “and they’ve been able to identify items which have significant price variance.”
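Detecting significant price variance in off-contract spend amounts to grouping purchases by item and measuring how widely the paid prices spread. A minimal sketch of that analysis follows; the threshold and data shapes are illustrative assumptions, not the product’s actual method.

```python
from statistics import mean

def price_variance_items(purchases, threshold=0.25):
    """Group purchases (item, price) by item and flag items whose price
    spread (max minus min, as a fraction of the mean price) exceeds
    the threshold -- candidates for moving onto a negotiated contract."""
    by_item = {}
    for item, price in purchases:
        by_item.setdefault(item, []).append(price)
    flagged = {}
    for item, prices in by_item.items():
        spread = (max(prices) - min(prices)) / mean(prices)
        if spread > threshold:
            flagged[item] = round(spread, 2)
    return flagged
```

An item bought at $900 in one department and $1,400 in another would be flagged; an item whose prices cluster tightly would not.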

To avoid irreversible damage or missed opportunities and to keep a competitive advantage, organizations across industries urgently need an AI-based analytics solution for procure-to-pay that can augment their human capabilities.

To learn more about Digitate’s ignio Cognitive Procurement, click here.


By Milan Shetti, CEO Rocket Software

According to PwC, almost two-thirds (60%) of Chief Information Officers (CIOs) see digital transformation as one of the most important drivers of growth this year. The cloud has been a major part of most organizations’ IT investments and digital transformation journeys. In fact, Gartner forecasts worldwide public cloud end-user spending will reach nearly $500 billion in 2022. But with all the hype around the cloud, many organizations overlook one of their business’s most critical tools: terminal emulation.

Terminal emulators for IBM Z, IBM i, and other mainframe systems aren’t exactly as popular as the cloud nowadays, but they play a critical role in ensuring secure access to stored data for organizations that rely on the mainframe. From ensuring regulatory compliance to serving customers more efficiently, terminal emulation is key to enabling a range of business processes.

Read on to learn why IT leaders can’t afford to overlook terminal emulation any longer — and what steps to take next.

Terminal emulators are not what they once were

Until recently, terminal emulators were limited in their capabilities, hampering users with a lack of configurability options and cumbersome interfaces. Because of this, organizations experienced a loss of productivity and an increase in overall frustration for end users and administrators alike.

However, today’s new generation of emulators allows IT teams to access their business-critical applications through home computers, or even mobile devices, without any compromises to functionality. The pandemic was a significant catalyst for innovation in terminal emulator capabilities. When the at-home workforce spiked from 17% pre-pandemic to 44% during the pandemic, it drove the need for professionals to access critical systems securely, no matter where the user of that technology was located.

The latest terminal emulators deliver exceptional configurability and seamless access for users regardless of their physical location, allowing remote team members to be as productive and efficient as they would be in a traditional office. Additionally, unlike terminal emulators from 20 years ago, today’s modern terminal emulators provide the latest security features as well as customized, feature-rich customer and user experiences. Terminal emulators that are kept up to date with today’s business needs enable access to applications from any browser, allowing employees to manage even the most complex functions from wherever they are.

Not every emulator is created equal

As companies upgrade their legacy systems, updated terminal emulators will help them access data from centralized systems and facilitate automation. But not all terminal emulators are created equal. Organizations that have a weak terminal emulator that lacks flexibility can experience disruptions in their user workflows, creating discomfort and slowing processes.

One important aspect an organization should look for when evaluating terminal emulator solutions is the ability to unify and customize their existing IT environment. One of the primary motivators for enterprises evaluating their existing terminal emulators is the operational benefits from the perspective of IT teams. Unifying terminal emulation reduces the number of supported solutions needed, which saves IT professionals time and resources. The ability to customize the terminal environment is also a key feature, as many users with decades of experience have strong preferences and have even developed shortcuts for their most often-used tasks. Terminal emulator customization allows users to feel familiar and comfortable with the new environment.

Another critical aspect of today’s terminal emulators for business longevity is their security capabilities. While others may let their terminal emulator solution remain stagnant, vendors like Rocket Software are continually developing and improving their emulation solution to keep up with current security protocols, ensuring it is secure and compliant. This is important, especially for companies in highly regulated industries such as financial services, to avoid costly fines and penalties.

Terminal emulation is one of your business’s most critical tools because it provides access to the data enterprises need to meet customer needs while also providing improved experiences for employees. Unlike terminal emulators of the past, modern solutions provide customizable terminal environments, ensure security compliance, and allow for excellent customer and user experiences.

To learn more about the power of Rocket Terminal Emulator, visit our website.


Whenever CIOs talk about using low-code tools to enable citizen development, a recurring theme is how to ensure appropriate governance of the applications produced.

Microsoft has heard them loud and clear, and at its Ignite 2022 show in Seattle this week, it introduced a range of new governance capabilities and other enhancements for its Power Platform automation tools.

It also previewed new management capabilities for automated workloads in its Entra Identity governance tool, new compliance reporting tools for monitoring the roll-out of Windows updates on enterprise desktops, and a host of updates to its Azure cloud platform.

Power to the people

Even low-code may seem like a foreign language to some workers, so Microsoft has been experimenting with ways to enable them to generate workflows with Power Automate, describing in natural language what they want to achieve and leaving an AI to build the corresponding flow. The feature, now in preview, will still require workers to set up connectors for the inputs to and outputs from the automated workflow, and to tweak it to ensure it behaves as intended.

Given the scope for ambiguity in natural language, CIOs may want to reinforce governance of applications created in this way — and with the new Managed Environments for Power Platform, Microsoft will help them do just that. First previewed in July, it’s now generally available.

Checks and balances

A new Weekly Digest feature enables admins to see how much use each Power app is getting, directing attention to the most used and reclaiming resources from unused ones.
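The core of such a digest is straightforward: aggregate launch events per app for the week, surface the most used, and list apps with no usage as candidates for reclaiming resources. The sketch below is a generic illustration of that idea, not Microsoft’s implementation; all names and data shapes are hypothetical.

```python
from collections import Counter
from datetime import date, timedelta

def weekly_digest(launch_events, known_apps, week_start):
    """Summarize per-app launch counts for one week.

    launch_events: iterable of (app_name, launch_date) pairs.
    known_apps: all apps in the environment, so zero-use apps appear too.
    Returns the top apps by usage and the apps with no usage at all.
    """
    week_end = week_start + timedelta(days=7)
    counts = Counter(
        app for app, when in launch_events if week_start <= when < week_end
    )
    unused = sorted(set(known_apps) - set(counts))
    return {"most_used": counts.most_common(3), "unused": unused}
```

An admin reading the resulting digest would direct attention to the top of the `most_used` list and consider reclaiming resources from everything in `unused`.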

There are also new tools to limit sharing of apps by security group or number of users, so apps don’t go viral across the enterprise until they’ve been thoroughly tested and channels are set up to communicate changes to them.

Those features will be important to CIOs, according to Kyle Davis, a VP and analyst at Gartner covering low-code adoption.

“When it comes to citizen development and low code, governance is front and center,” he said.

Managed Environments is more of an evolution than a revolution, he added, saying, “There really isn’t anything there that someone couldn’t build for themselves if they wanted to.”

Indeed, Managed Environments has its origins in Microsoft’s Automation Center of Excellence starter kit, which enables enterprises to define their own best practices for Power app governance. But as the company itself acknowledges, customers found that this required a lot of manual work and expertise.

Davis said that CIOs looking for the simplicity of low-code development are often also looking for similar simplicity in its management. Managed Environments’ ability to deploy controls in a few clicks will be appealing. “It makes it easier to do things at scale,” he said.

The option to limit usage of an app to a few cubicle neighbors makes sense too, he said, because, “You can just yell across the hallway, ‘Hey, I’m going to make a change,’ and everyone’s aware,” while a change to a departmental app would need to go through a proper process. “What Microsoft offers with Managed Environments is something that you don’t really get from other low-code vendors in a similar space,” he said.

Environmental awareness

Not all the news at Ignite concerned Power Platform, however. Microsoft also had plenty to say about updates to its Azure cloud infrastructure offering, and an update of Syntex, its AI content management tool. Computerworld has the low-down on Syntex, but CIOs will want to be aware of other innovations that may help them trim management budgets or redeploy staff away from routine tasks.

There are new features for Microsoft Sustainability Manager, an environmental reporting tool for enterprises, including an extended data model to assist them in estimating so-called Scope 3 emissions of greenhouse gases by their entire supply chain, and an Emissions Impact Dashboard for Microsoft 365 showing greenhouse gas emissions resulting from their use of Microsoft’s SaaS productivity suite.

Azure Deployment Environments, previewed at the show, offer enterprises a way to apply project-based templates to each development environment they spin up. Much like the managed environments Microsoft is introducing for low-code applications, these new templates will help development teams consistently maintain best practices across projects with minimum effort, the company said.

Cost cutting

Another management feature, Azure Automanage, is now generally available for Azure VMs and has new capabilities including the ability to patch VMs without rebooting, reducing downtime costs.

For variable computing workloads in the Azure cloud, Microsoft is introducing the ability to mix Standard and Spot Virtual Machines in the same scale set, enabling CIOs to profit from the deep discounts available for Spot VMs as their computing needs vary.

But Microsoft also wants customers to see Azure as an economical solution for base workloads. Azure savings plan for compute, available later this month, offers a discount to customers who commit to spending a minimum hourly amount on computing resources for one to three years; consumption above the minimum commitment will be charged at regular rates.
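The billing arithmetic behind a commitment-plus-overage plan like this can be made concrete. The sketch below is illustrative only; the rates and units are made-up numbers, not Azure’s actual pricing.

```python
def hourly_bill(units_used, committed_spend, discounted_rate, regular_rate):
    """One hour's bill under a spend-commitment plan.

    The hourly commitment is owed in full whether or not it is consumed;
    it buys compute at the discounted rate, and any consumption beyond
    what the commitment covers is billed at the regular rate.
    """
    covered_units = committed_spend / discounted_rate
    overage_units = max(units_used - covered_units, 0.0)
    return committed_spend + overage_units * regular_rate
```

With a $10/hour commitment at a discounted rate of $0.50 per unit (versus $1.00 regular), the commitment covers 20 units: using 30 units costs $20 for the hour, while using only 5 units still costs the committed $10.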

Staying Intune

Microsoft is reshuffling its branding around endpoint management: Intune, previously a component of its enterprise mobility management offering, is now the umbrella brand for its whole range of endpoint management products such as Configuration Manager — with the promise of more to come. At Ignite, the company is previewing new endpoint privilege management capabilities such as the ability to temporarily grant users limited admin permissions, and automated app patching by combining Intune with Microsoft Defender. In January 2023, it will add Microsoft Tunnel so employees can securely access company resources from their own devices without having to enroll them first. And then in March 2023, a new bundle of premium endpoint management services called Advanced Management Suite will be introduced.


Most CIOs are limited by their endpoint tools. They know the questions they need to answer about their endpoint environment. They know what actions they must take to manage and secure their endpoints at all times. But they are attempting to answer these questions and take these actions using legacy point tools that no longer work in today’s environments. 

For CIOs to solve these problems, they first need to replace those legacy tools with a new class of converged endpoint management platforms. 

What’s changed? The entire endpoint environment 

In the past, CIOs had to manage and secure a relatively limited number of endpoints, most of which lived on-premises within technology environments that rarely changed.  

CIOs now need to manage and secure millions of dynamic, diverse, and globally distributed endpoints located across cloud and hybrid networks. Each of these endpoints introduces operational risks and security vulnerabilities and must be monitored, managed, and secured in real-time to ensure their performance.  

These endpoints also face a growing wave of cybersecurity attacks. Today, a ransomware attack occurs every 11 seconds, and the potential impact of a breach continues to grow as business processes become increasingly digital and interconnected. 

Unfortunately, many CIOs are struggling to manage and secure their new endpoint environment. They are still using legacy point tools designed for the small, static environments of yesterday, tools that are failing to meet today’s endpoint realities. Here’s why.

Silos: Why legacy point tools are failing in today’s environments

Most legacy endpoint tools were built to perform one task—often for just one endpoint category—and operate independently from each other. When CIOs attempt to develop a complete endpoint management and security capability using these tools, they are forced to build a stack of dozens of point solutions. And as the endpoint environment has transformed with new endpoints and new operational and security risks to mitigate, CIOs have been forced to keep adopting more and more tools.

The result? Using legacy endpoint management and security tools, organizations…

Face a growing visibility gap. According to recent research, 95% of organizations have 20% of their endpoints undiscovered and unprotected.
Wrestle with increased complexity. 75% of IT, security, and business leaders now report too much complexity from their technology, data, and operations.
Can’t answer basic questions. These include “How many endpoints do I have? What applications run on them? How many have basic controls applied?”

These tools are also creating silos between IT and security teams. Many are licensed and used by individual functions, teams, and employees, giving everyone a different view of the endpoint environment and making it impossible to build cohesive end-to-end endpoint management and security processes. Worst of all, they stop IT and security from collaborating on key efforts such as applying patches and configurations to close commonly exploited endpoint vulnerabilities. 

Clearly, legacy point tools are failing to manage and secure today’s endpoint environments. CIOs need new tools built around a new approach. 

The solution: Converged endpoint management platforms

CIOs need a new technology solution that corrects the problems with legacy point tools and overcomes the challenges of endpoint explosion, tool proliferation, and IT modernization. This solution must offer a holistic approach to endpoint management and security that unifies three core aspects of these activities. It must cover:

Every endpoint: They must create visibility across laptops, desktops, mobile devices, containers, sensors, and every other type of endpoint from one agent.
Every workflow: They must perform a full range of actions—from asset discovery to threat hunting, to client management—all from a single console.
Every team: They must use this single source of truth and common set of tools to align cross-functional teams and individual roles.

A new class of converged endpoint management (XEM) platforms meets these criteria. These platforms consolidate the functionality of dozens of point tools into a single dashboard where teams can see, control, and trust everything happening on their endpoints. By doing so, these converged platforms give CIOs and their teams:

Real-time visibility and a single source of truth for their endpoint data
Reduced tool sprawl and significantly less complexity to manage
Instant and accurate answers to their most important questions

Most importantly, converged platforms eliminate silos in endpoint management and security. They act as the backbone for all crucial interactions between endpoint data, controls, and teams in one place, offering IT, security, risk management, and other technology functions a single space to seamlessly collaborate from. With the right platform, you can drive most of your endpoint use cases for most roles:

CIOs can patch, update and properly configure their endpoints.
CISOs can investigate and respond to threats in real time.
Infrastructure teams can scope cloud migrations in weeks (not years).
Procurement teams can see if they’re licensing software they don’t need.
Data custodians can find and remove sensitive data at scale.
Auditors can verify whether a company is meeting its regulatory and compliance obligations.

In sum: With the right converged endpoint management platform, CIOs can solve most of their core operations and security challenges. 

Picking the right converged endpoint management platform

The endpoint tool market is going through a transformation, with these new converged platforms rapidly replacing old point tools. When evaluating the right platform to adopt converged endpoint management, CIOs must ensure they select a solution that provides three key qualities. 

Visibility into every managed or unmanaged endpoint in real time.
Control across cloud, on-prem, and hybrid estates in seconds. 
Truth composed of accurate, high-fidelity data for every endpoint team. 

Consider these the table stakes for any converged endpoint management solution you evaluate, and the key to solving most of the modern endpoint management and security challenges created by legacy tools.

Learn how the Tanium converged endpoint management platform can solve your core operations and security challenges here.
