From quality control to revenue growth and workplace safety, digital transformation strengthens almost every aspect of the business. Those who fail to keep up with the pace of digital technology run serious risks of falling behind. 

To fully leverage digital transformation, businesses today are turning to edge computing. Edge computing allows you to process data at the edge of the network, closer to the source of the data, instead of sending it to a centralized location like a datacenter or the cloud. By keeping sensitive data on-site, edge computing enables faster data processing, reduces bandwidth usage, and enhances data security. Yet, edge deployment remains a complex endeavor, with numerous choices and decisions to be made, the benefits of which are often unclear. 

Reliable, secure, and customizable network solutions are at the heart of edge computing success. With constant improvements taking shape across wired, 4G, and 5G standards, and with public and private options also available, what are the real benefits of each and how do they compare against one another? We are also hearing more about private 5G networks at the edge. What advantages do they provide for edge applications, and are they the best short- and long-term choice for every enterprise? These are just a few of the questions that arise when determining how best to simplify and maximize edge investments.

Bill Pfeifer, who leads Messaging and Thought Leadership for the Dell Technologies Edge team, and Stephen Foster, a product manager within the Edge business unit at Dell Technologies, recently discussed these key issues. 

Foster has significant knowledge of connectivity technologies and how they can enhance business outcomes, from both an IT and telecom network perspective. 

Meanwhile Pfeifer’s focus is on distilling the complexity of the edge into simple messages that are immediately useful so that customers can succeed as they build, grow and simplify their edges.

Bill: We have all been seeing non-stop hype throughout the industry about the magic of 5G and how it’s going to change the world, but often, the conversation seems to follow the track that “it will be faster than 4G.” While faster is a good step in connectivity evolution, it is hardly a revolution. Can you explain what 5G is all about, beyond just being faster? 

Stephen: 5G offers many advantages over its predecessor, 4G. Beyond raw speed, it brings lower latency, far greater device capacity, and capabilities such as network slicing, and those advances are driving innovation and new applications across a wide range of industries. 

Bill: So those benefits apply to 5G across the board, but I am also hearing lots about how private 5G is on the rise, and private 4G as well. How is private wireless notably different or better for enterprise edge deployments?

Stephen: Private wireless has been growing over the years, beginning with 4G, and now moving to 5G. In the early days, it may have just been limited to a shared or dedicated radio serving a single enterprise but, with private 5G, we have a couple of options for keeping the data entirely within the enterprise to take full advantage of the very low latency and data security. 

One option enabled by 5G networking is the deployment of MEC. MEC stands for Multi-access Edge Computing, a standardized technology closely tied to 5G that enables computing resources to be located at the edge of a network, closer to the end users and devices at the enterprise location. MEC is often deployed as part of the communication service provider’s public 5G network, but it allows for private processing of data either on-premises at an enterprise or nearby on dedicated hardware.

The second option is Standalone Private wireless. Here, the complete cellular network is deployed within the enterprise location, with no connection to the public network. Management of the network, including SIM management, is completely under the control of the enterprise. Standalone Private solutions started with 4G, but most new ones use 5G technology.

In either case, Private 5G Wireless networks enable support for challenging use cases and business processes that are restricted in public networks.

Bill: We are also seeing tech refreshes across other connectivity types – wired connections are faster than ever, Bluetooth is great for short-range connectivity, and NFC (near field communication) means we rarely have to swipe our credit cards anymore. But related more closely to this topic is Wi-Fi 6. Enterprises the world over have Wi-Fi installed, and it sounds like Wi-Fi 6 is a notable enhancement, too. Can you tell us what to expect there?

Stephen: If you have things or people that are moving around the enterprise, or if they are difficult or expensive to reach, then the choices come down to Wi-Fi or private 5G wireless. Wi-Fi 6 shares many technology attributes with 5G. Typically, private 5G wireless complements Wi-Fi – they both have a role within the enterprise.

The main differences between private 5G and Wi-Fi include:

Range: Private 5G has a much wider range than Wi-Fi, which means that it can provide connectivity over a much larger area. Private 5G can cover an area of several kilometers, while Wi-Fi is typically limited to a range of a few hundred feet. Serving a large area with Wi-Fi will require many access points to operate and maintain. 

Capacity: Private 5G has much greater capacity than Wi-Fi, which means that it can support a much larger number of devices and data-intensive applications. Private 5G can support up to one million devices per square kilometer, while Wi-Fi is typically limited to a few hundred devices per access point.

Security: Private 5G provides stronger security than Wi-Fi, with better encryption and authentication mechanisms. This is particularly important for enterprises that are dealing with sensitive data or operating in high-security environments.

Reliability: Private 5G is more reliable than Wi-Fi, with better coverage and fewer dropped connections. This is achieved through technologies such as beamforming and network slicing, which enable the network to allocate resources more efficiently.
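To make these comparison points concrete, here is a minimal selection sketch in Python. The requirement fields, thresholds, and scoring are illustrative assumptions for this article, not a Dell formula or a formal radio-planning method; a real decision would also weigh spectrum availability, site surveys, and total cost.

```python
# Illustrative only: rough screening of Wi-Fi vs. private 5G for a site.
# All fields and thresholds below are assumptions for demonstration.

from dataclasses import dataclass

@dataclass
class SiteRequirements:
    coverage_km2: float       # area that must be served
    devices: int              # concurrently connected devices
    max_latency_ms: float     # worst-case tolerable latency
    mission_critical: bool    # e.g., AGVs, safety systems
    mostly_outdoor: bool      # ports, mines, large yards

def suggest_network(req: SiteRequirements) -> str:
    """Return a coarse recommendation based on the criteria discussed above."""
    score_5g = 0
    if req.coverage_km2 > 0.1:    # beyond a few buildings, Wi-Fi needs many APs
        score_5g += 1
    if req.devices > 1_000:       # very high device density
        score_5g += 1
    if req.max_latency_ms < 20:   # tight, predictable latency
        score_5g += 1
    if req.mission_critical:      # licensed spectrum, fewer interference issues
        score_5g += 1
    if req.mostly_outdoor:
        score_5g += 1
    return "private 5G" if score_5g >= 3 else "Wi-Fi (possibly Wi-Fi 6)"

# Example: a container port with wide outdoor coverage and automated guided vehicles.
port = SiteRequirements(coverage_km2=3.0, devices=5_000, max_latency_ms=15,
                        mission_critical=True, mostly_outdoor=True)
print(suggest_network(port))  # -> "private 5G"
```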

Bill: So, bringing those points together, do you have thoughts on when a typical enterprise might want private wireless vs. Wi-Fi? Can you describe a few scenarios where someone might prefer one over the other and explain why, so that we can start to understand how they really compare?

Stephen: In general, the drivers for going with private 4G/5G wireless vs. Wi-Fi are the need for a more secure solution; more predictable performance, including lower latency, higher throughput, and better coverage; and the need to cover large, bounded geographical areas such as factories, shipping ports, airports, and mining operations. Many of these areas, especially outdoor ones, are incompatible with Wi-Fi. Even indoor areas like large factories or warehouses cannot always be predictably reached by Wi-Fi. Think of reliable connections for automated guided vehicles within a factory, or for locating shipping containers at a port. 

Another difference to consider is the interference in the spectrum. Wi-Fi operates in an unlicensed spectrum and is often prone to interference. Private 5G wireless operates in licensed spectrum and is very well suited for mission-critical applications. Coverage, reliability, and predictability are a few of the major factors influencing the choice of private 5G wireless. 

Bill: To wrap up this conversation, could you give a quick summary of key considerations that folks should be making? Let’s say we’re talking to a typical enterprise organization that has a legacy wired network and is looking to move to Wi-Fi, or public 5G, or private 4G/5G – I’m sure none of them are a one-size-fits-all solution, so what are the key points to consider when trying to decide which technology to use?

Stephen: When enterprises are considering an upgrade from a legacy wired network, they should consider several factors, including coverage, bandwidth and speed, latency, security, cost, and customization. To choose the right wireless technology, enterprises must weigh the advantages and disadvantages of Wi-Fi and private 5G. Wi-Fi is the most cost-effective option, but it may not be the most secure and may not offer adequate range. 

On the other hand, private 5G provides wider coverage, higher speeds, flexibility in coverage, and strong security features. Private 5G networks offer the lowest latency, which is essential for applications that require real-time response. Ultimately, the choice between Wi-Fi and private 5G depends on the specific needs and requirements of the enterprise.

It is important to note that the networking options of Private 5G and Wi-Fi are just one piece of the puzzle in achieving a total solution for the enterprise. Edge computing serves as a platform to support multiple enterprise applications, like computer vision, digital twins, AR/VR, and more. These applications play a crucial role in supporting various business outcomes, such as workforce productivity, operational efficiency, quality improvements, cost savings, workplace safety, and sustainability. 

Edge computing can help enterprises process data closer to the source, reducing latency and improving response times. By combining the capabilities of edge computing with the benefits of private 5G or Wi-Fi, we can build a comprehensive solution that meets specific needs for today while putting in place a robust foundation that supports the digital transformation journey.

And of course, Dell Technologies and Intel are always collaborating to help our customers succeed across a broad range of workloads at the edge, working with industry leaders and the open-source community to produce powerful, comprehensive solutions that are optimized to meet our customers’ needs today with the flexibility to address what comes next. Public and private wireless networks powered by Dell and Intel technologies help enterprises capitalize on 5G, MEC and edge computing, to further improve how businesses operate today, and tomorrow.

Bill: Great information, Stephen! Thanks for your time today, and for sharing your perspective and expertise. 

Learn more about how Dell helps enterprises build a simpler edge with Private 5G and more at www.dell.com/edge


Edge Computing

As recently spotlighted at VMware Explore US, Sovereign Cloud continues to gain momentum. The total addressable market (TAM) for sovereign cloud is estimated to reach $60bn by 2025, in no small part due to the rapid increase in data privacy laws (currently 145 countries have them) and the complexity of compliance in highly regulated industries.

As the need to monetise data grows and nations seek to realise the true value of data, VMware is delivering on our sovereign cloud position: sovereign security, sovereign compliance, sovereign control, sovereign autonomy, and sovereign innovation.

Previously, we looked at what data sovereignty is and how it impacts business operations when it comes to personal, sensitive or classified data. Now let’s look at how an organisation can better comply with data sovereignty laws by choosing the right cloud architecture.

Most businesses have moved to cloud computing for at least some of their data. Cloud provides greater flexibility, scale, and computational power than traditional on-premises data centres. While public clouds are popular for their high capacity and low costs, some organisations have started moving data out of them to comply with regulations. Some 81% of decision-makers in regulated industries have repatriated some or all data and workloads from public clouds.

Some have moved data back on-premises, whereas others are using a mix of public and private clouds. Ultimately, protecting and realising the value of national data has never been a more important factor in building a cloud. Pressure is mounting from a growing patchwork of national regulations, including the US CLOUD Act, the EU’s GDPR, and China’s Personal Information Protection Law. With data privacy laws in 132 countries and the number growing at roughly 10% per year, choosing the right data sovereignty solution has become a hot topic.

To better understand why a business may choose one cloud model over another, let’s look at the common types of cloud architectures:

Public – on-demand computing services and infrastructure managed by a third-party provider and shared with multiple organisations using the public internet. Public clouds are usually multi-tenant, meaning multiple customers share the same server, although it’s partitioned to prevent unauthorised access. Public clouds offer large scale at low cost.

Private – infrastructure is dedicated to a single user organisation. A private cloud can be hosted either in an organisation’s own data centre, at a third-party facility, or via a private cloud provider. Private clouds are generally more secure than public due to limited access and can meet regulatory requirements such as data privacy and sovereignty. However, they require more resources to set up and maintain.

Community – shared cloud that is integrated to connect multiple organisations or employees for collaboration. This can be multiple private clouds connected together to facilitate the exchange of data. These are frequently used by regulated industries where public clouds are not compliant, but they are complicated to set up due to having multiple groups involved.

Government – a type of private or community cloud designed specifically for government bodies to maintain sovereignty and control.

Multi-cloud – using multiple public clouds to take advantage of different features. An organisation may host some services in one cloud and others with a different provider. This model has the highest level of security risk due to the volume of data and access.

Hybrid – a mix of public and private clouds. The term is sometimes also used to refer to a mix of public cloud and on-premises private data centres.

While public clouds are suitable for public information that isn’t subject to data sovereignty laws, a hybrid or other more private solution is needed for overall compliance. Private clouds can meet data sovereignty requirements, but they need dedicated data centres, operated either by the organisation itself or via a provider using dedicated hardware. This can be expensive and time-consuming. The quickest or off-the-shelf solution may not include the level of security or compliance necessary to be sovereign. Key factors to consider include jurisdictional control, local oversight, data portability, and customisability, to name a few.

Sovereign cloud is an option designed specifically to meet data sovereignty requirements. Think of this as a semi-private cloud, combining some of the best features of public and private. They are operated by experienced cloud providers that are smaller, local, multi-tenant operations. A sovereign cloud provides the data sovereignty benefits of a private cloud without the IT headaches.

Sovereign cloud can be used in conjunction with public cloud as part of a hybrid cloud architecture. Data and services subject to data sovereignty laws would live in the sovereign cloud while non-sensitive data and services might live in the public cloud. The exchange of data between these clouds must be carefully controlled to ensure compliance.
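As a rough illustration of that controlled split, the sketch below routes workloads by data classification, sending anything sensitive or residency-bound to the sovereign cloud. The classification labels, field names, and targets are assumptions for the example and are not tied to any VMware product API.

```python
# Illustrative sketch: place workloads in a sovereign or public cloud
# based on data classification. Labels and targets are assumed for the
# example only.

SENSITIVE_CLASSES = {"personal", "health", "financial", "classified"}

def placement_for(workload: dict) -> str:
    """Decide where a workload's data should live under a hybrid model."""
    classification = workload.get("data_classification", "unknown")
    residency_required = workload.get("residency_required", False)
    if classification in SENSITIVE_CLASSES or residency_required:
        return "sovereign-cloud"   # in-jurisdiction, audited provider
    if classification == "public":
        return "public-cloud"      # low-cost, hyperscale capacity
    return "sovereign-cloud"       # default to the stricter option

workloads = [
    {"name": "citizen-portal-db", "data_classification": "personal"},
    {"name": "marketing-site", "data_classification": "public"},
    {"name": "research-archive", "residency_required": True},
]
for w in workloads:
    print(w["name"], "->", placement_for(w))
```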

When it comes to finding a sovereign cloud provider, customisability, flexibility, and frictionless implementation are critical. You need to be able to audit operations and access to make sure compliance is maintained. Local, self-attested sovereign cloud providers can implement and build to residency requirements correctly so that data residency and sovereignty obligations are met. Cross-border restrictions and jurisdictional control must also be understood, addressing privacy concerns by ensuring no remote processing of data. At the end of the day, true sovereignty ensures that other jurisdictions cannot assert authority over data stored beyond their national borders, fostering national data interests and growth.

True sovereign clouds require a higher level of protection and risk management for data and metadata than a typical public cloud. Metadata, or information about the data such as IP addresses or host names, must be protected along with the data itself. VMware Sovereign Cloud providers offer transparency around security measures, both cybersecurity protections and physical security in the data centre.

VMware Sovereign Cloud providers are…

trusted, approved partners providing best-in-class IaaS security and compliance

experts in local platform builds as well as local data protection laws

able to provide solutions for data choice and control, and cost-efficient (TCO) solutions that are flexible and customisable

able to grow with customer needs, providing a complete solution that is future-proof

Customers requiring sovereign solutions demand the expertise and transparency offered by VMware Sovereign Cloud providers, which help ensure security and compliance with local data privacy and sovereignty laws. That expertise and transparency become invaluable in enabling data security and compliance.

To find out more on how to improve data control and compliance with sovereign clouds click here.

Cloud Management, Cloud Security, Data Management, Data Privacy, VMware

Is the cloud a good investment? Does it deliver strong returns? How can we invest responsibly in the cloud? These are questions IT and finance leaders are wrestling with today because the cloud has left many companies in a balancing act—caught somewhere between the need for cloud innovation and the fiscal responsibility to ensure they are investing wisely, getting full value out of the cloud.  

One IDC study shows 81% of IT decision-makers expect their spending to stay the same or increase in 2023, despite anticipating economic “storms of disruption.” Another 83% of CIOs say that, despite increasing IT budgets, they are under pressure to make their budgets stretch further than ever before—with a key focus on technical debt and cloud costs. Moreover, Gartner estimates that 70% overspending is common in the cloud. 

The need for cloud innovation amid economic headwinds has companies shifting their strategies, putting protective parameters in place, and scrutinizing cloud value with concerted efforts to accelerate return on investment (ROI), specifically on technology.  

New Parameters Designed to Protect Cloud Investments 

While many companies are delaying new IT projects with ROI of more than 12 months, others are reducing innovation budgets while they try to squeeze more value out of existing investments. Regardless of how pointed their endeavors are, most IT and finance leaders are looking for ways to better govern cloud transformation. That’s because, in today’s economic climate, leaders aren’t just responsible for driving ingenuity, they are held accountable for ensuring the company is a good steward of its technology investments with concentrated emphasis on: 

ROI: Capitalizing quickly on new cloud technology, recognizing benefits, and taking ownership of IT assets, success measurement, and feedback loops 

Operationalization: The ability to effectively use and secure cloud assets as well as manage new service providers and expenses 

Sustainability: Ensuring that cloud transformation can continue to afford positive outcomes with minimal impact on the business for both near- and long-term success 

If the past three years were dedicated to accelerated cloud transformation, 2023 is being devoted to governing it. But it’s not just today’s tumultuous times calling for executives to heed the call of fiduciary responsibility. The cloud also necessitates it—particularly when companies want to achieve ROI faster. 

Cloud ROI Dynamics: Understanding the Economics of Innovation 

The cloud can make for an uneven balance sheet without proper oversight. It needs to be closely watched from a financial perspective. Why? The short answer: variable costs. When the cloud is infinitely scalable, costs are infinitely variable. Pricing structures are based on service usage fees and overage charges, where even marginal lifts in usage can incur steep increases in cost. While this structure favors cloud providers, it contrasts starkly with the needs of IT financial managers—most have per-unit budgets and prefer predictable monthly costs for easier budgeting and forecasting.  
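A simple, entirely hypothetical bill shows why marginal usage lifts sting: once the included capacity is exhausted, every extra unit is charged at an overage rate. The prices and tiers below are invented for illustration only.

```python
# Hypothetical pricing used only to illustrate variable cloud costs.
INCLUDED_GB = 10_000     # capacity included in the committed tier
BASE_FEE = 2_000.00      # flat monthly commitment, USD
OVERAGE_PER_GB = 0.45    # each GB beyond the included amount

def monthly_bill(usage_gb: float) -> float:
    """Base commitment plus overage charges for usage above the included tier."""
    overage = max(0.0, usage_gb - INCLUDED_GB)
    return BASE_FEE + overage * OVERAGE_PER_GB

for usage in (9_500, 10_500, 12_000):
    print(f"{usage:>6} GB -> ${monthly_bill(usage):,.2f}")
#  9,500 GB -> $2,000.00
# 10,500 GB -> $2,225.00
# 12,000 GB -> $2,900.00
```

In this invented example, a roughly 26% increase in usage (9,500 GB to 12,000 GB) produces a roughly 45% increase in cost, which is exactly the kind of step change that makes budgeting and forecasting difficult.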

Additionally, companies aren’t always good at estimating what they need and using everything they pay for. As a result, cloud waste is now a thing. In fact, companies waste as much as 29% of their cloud resources.  

As companies lift and shift their workloads to the cloud, they trade in-house management for outsourced services. But as IT organizations loosen the reins, financial management teams should be tightening their grip. Those who aren’t actively right-sizing their cloud assets are typically paying more than necessary, which is why overspending can easily reach 70%. 

Achieving Cloud ROI in One Year 

Achieving ROI in one year requires tracing where your cloud money goes to see how and where it is repaid. Budget dollars go down the drain when companies fail to pay attention to how they are using the cloud, don’t take the time to correct misuse, or overlook service pausing features and discounting opportunities.  

But cloud cost management is not always a simple task. The majority of IT and financial decision-makers report that it’s challenging to account for cloud spending and usage, with the C-suite citing the tracing of spend and chargebacks as a particular concern. The key to cost control is to pinpoint and track every cloud service cost across the IT portfolio—yes, even when companies have, on average, 11 cloud infrastructure providers, nine unified communications solutions, and a cacophony of unsanctioned applications consuming up to 30% of IT budgets in the form of shadow IT.  

When you factor in these dynamics and consider that cloud providers have little incentive to improve service usage reporting or to help clients rebalance the one-sided financials of the relationship, you can see why ROI can be slow-moving.  

FinOps comes in to bridge this gap. 

Managing Cloud Cost Centers: The Rise of FinOps 

Cloud services are now dominating IT expense sheets, and when increasing bills delay ROI, IT financial managers go looking for answers. This has given rise to the concept of FinOps (a word combining Finance and DevOps) which is a financial management discipline for controlling cloud costs. Driving fiscal accountability for the cloud, FinOps helps companies realize more business value and accelerate ROI from their cloud computing investments. 

Sometimes described as a cultural shift at the corporate level, FinOps principles were developed to foster collaboration between business teams and IT engineers or software development teams. This allows for more alignment around data-driven spending decisions across the organization. But beyond simply a strategic model, FinOps is also considered a technology solution—a service enabling companies to identify, measure, monitor, and optimize their cloud spend, thus shortening the time to achieve ROI. Leading cloud expense management providers, for example, save cloud investors 20% on average and can deliver positive ROI in the first year. 

FinOps Best Practices  

As the cloud makes companies agile, managing dynamic cloud costs becomes more important. FinOps helps offset rising prices and inserts accountability into organizations focused on cloud economics. Best practices for maximizing ROI include reconciling invoices against cloud usage, making sure application licenses are properly disconnected when no longer necessary or reassigned to other employees, and reviewing network servers to ensure they aren’t spinning cycles without a legitimate business purpose. 

Key approaches include: 

Auditing: The ability to granularly collect and maintain service information across the broader cloud ecosystem, analyzing real-time usage data in a central system using AI-powered analytics 

Cost Optimization: The insights to recognize cloud waste and quickly reduce inefficiencies, adjusting services and reallocating unused app licenses or infrastructure resources (see the sketch after this list) 

Vendor and Expense Management: The ability to validate spending and use automation to reduce the management burdens of bill pay, chargebacks, and allocation 

Professional Services: Strategic and tactical help at key moments including cloud migrations, cloud service discovery, contractual negotiations, and IT budget forecasting and spending 
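The sketch below illustrates the auditing and cost-optimization steps in miniature: reconcile invoiced items against observed usage and flag idle spend for reclamation. The record formats, field names, and idle threshold are assumptions for the example, not any vendor's billing schema.

```python
# Illustrative FinOps reconciliation: compare invoiced items against
# observed usage and flag candidates for reclamation. Record formats
# are assumptions for the example only.

from datetime import date, timedelta

invoice = [  # what we are being billed for
    {"service": "crm_license", "user": "alice", "monthly_cost": 65.0},
    {"service": "crm_license", "user": "bob",   "monthly_cost": 65.0},
    {"service": "vm_standard", "user": "app01", "monthly_cost": 210.0},
]

last_activity = {  # when each billed item was last actually used
    ("crm_license", "alice"): date.today() - timedelta(days=3),
    ("crm_license", "bob"):   date.today() - timedelta(days=120),
    ("vm_standard", "app01"): date.today() - timedelta(days=1),
}

IDLE_THRESHOLD = timedelta(days=60)

def idle_spend(invoice, last_activity):
    """Return invoice lines that have been idle longer than the threshold."""
    flagged = []
    for line in invoice:
        key = (line["service"], line["user"])
        seen = last_activity.get(key)
        if seen is None or date.today() - seen > IDLE_THRESHOLD:
            flagged.append(line)
    return flagged

for line in idle_spend(invoice, last_activity):
    print(f"Reclaim candidate: {line['service']} for {line['user']} "
          f"(${line['monthly_cost']:.2f}/month)")
```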

Is the cloud a good investment? Yes, as long as the company can effectively see and use its assets, monitor its expenses, and manage its services. The cloud started as a means to lower costs, minimize capital expenses, and gain infinite scalability, and that reputation should still pay off even after being pressure-tested by the masses. With a collaborative and disciplined approach to management, companies of every size can recognize quick ROI without generating significant waste or adding unnecessary complexity.  

To learn more about cloud expense management services, visit us here.     

Cloud Computing

The transition to a modern business intelligence model requires IT to adopt a collaborative approach that includes the business in all aspects of the overall program. This guide focuses on platform evaluation and selection. It is intended for IT to use collaboratively with business users and analysts as they assess each platform’s ability to execute on the modern analytics workflow and address the diverse needs of users across the organization.

“It all went live in less than two months,” said Paul Egan, IT Manager of Business Intelligence at Tableau. “The CEO had his new production-strength dashboards in Tableau in less than two months of the server being deployed—and that was a pretty phenomenal turnaround.”

Download this free whitepaper to learn more.

Digital Transformation

Wendy M. Pfeiffer is a technology leader who’s as dedicated to excellence in operations and delivery as she is to maintaining a focus on innovation. She joined Nutanix as SVP and CIO following a successful career leading technology teams at companies like GoPro, Yahoo, Cisco Systems, and Robert Half. Highly regarded by her industry peers for her courageous transparency and candor, Pfeiffer also serves on the boards of Qualys, SADA Systems, and the American Gaming Association (AGA). 

On a recent episode of the Tech Whisperers podcast, Pfeiffer shared her insights about the numerous demands being placed on CIOs today, what she’s gained from her board experiences, and how the ways in which we work are evolving. Afterwards, we spent some time talking through Pfeiffer’s five-part series for “The Forecast by Nutanix” on IT’s role in enabling hybrid work, as well as what she’s learned running IT for a hybrid-first company. What follows is that conversation, edited for length and clarity.

Dan Roberts: What motivated you to produce this series on hybrid work?

Wendy Pfeiffer: Work is now eternally hybrid. What I mean by that is, we’re not going to be able to count on having everyone in the same place at the same time ever again. So, how do we respond to this changed nature of work? Instead of just being observers and saying, “Well, it’s not like it used to be,” how do we focus on changing the methods we use to respond to that?

If I think about my primary mission as an IT professional, it’s to enable technology in service of business and people, and today, business is different. Technology is different. People are different. So, I’ve been thinking about this a lot, studying it, reading, and speaking to folks. And the bottom line is, I didn’t find that anyone had any great ideas.

So, I started thinking about the IT experience as a product, and the employee experience as a product. If I were delivering a product into a new marketplace, I would need to learn everything I could about that marketplace, and then I would need to adjust some parameters of the product to be appropriate. And in doing so, I discovered these simple principles that I think make a difference towards enabling hybrid work.

Hybrid work is asynchronous, so you have to enable asynchronous things. It is characterized by context switching, and context switching has a negative impact on productivity. How do we counterbalance that? There are just all sorts of principles that are at the heart of hybrid work that, if you take them one at a time, we already have some tooling to address. We already have some techniques in use. We already have some design thinking around how to address those things. If we focus our efforts on those, we can improve the nature of work.

Part one focuses on managing constant change. Why should that become a bigger focus in 2023, and what are some everyday ways we can do that in a hybrid workplace?

I think that hybrid work in general is characterized by the change that comes from continuous context switching on purpose. Like most people, I find change to be stressful, and yet I have developed some ways to deal with change. One is to have a home base, a foundation that’s not changing and that’s solid even as things around it change. So, I was thinking about how to create a solid foundation in technology, and one of the easiest ways to do that is through ‘anchor technologies.’

In my organization, we have chosen a handful of anchor technologies, and we’ve doubled down on enabling our employees to be very comfortable with them, to always expect that those technologies will be available, functioning, and even to feel expert in them, the same way that you feel expert after you’ve been using your smartphone for a while. We want people to feel like, I’m not just a medium user of Zoom or Slack — I’m a Zoom ninja and know all the secrets. As soon as that competence and bedrock are there, then that gives us the foundation from which to launch new things.

For example, with online whiteboarding, I’m not launching a new technology; I’m launching a new online whiteboarding feature in the context of Zoom. So, I’m minimizing the amount of change the employee has to go through. They already feel comfortable when they see it show up as a feature in one of these anchor technologies.

It’s psychological, but it makes a huge difference in terms of our adoption of new technologies. We find that when we are launching new technologies in the context of our anchor applications, we see a massive uptick in adoption. In the past, we would launch a new technology, and about 30 days in, we would see about a 25% adoption rate. Now we see about an 80% adoption rate. And, you know, that’s a beautiful thing — much less training time, immediate productivity for our employees, etc.

In your second video, you get into asynchronous productivity, and you talk about those ‘watercooler’ conversations that many CIOs are concerned about — chance meetings that foster collaboration and innovation. How has that changed in the hybrid workplace?

Before the pandemic, most of us who worked in global companies already had people who worked in different time zones in different locations all over the world and were part of our teams. But back then, we didn’t care what kind of an experience they had. We didn’t pay much attention to them. You sort of had to be physically in the room, where the conversation was happening, to have a voice in that conversation. So we were leaving some of the productivity of those ‘remote’ participants on the table.

For example, before the pandemic, about 30% of our employees, globally, were full-time remote. They were not associated with a hub office. But 99% of the time, when we would have ideation sessions or strategy or planning sessions, those of us in a US time zone would physically get together in a conference room. If somebody couldn’t be in that conference room, they could be on the call, but they wouldn’t really collaborate and participate. We would use whiteboards, and the very act of stepping up and writing scribbles down on the whiteboard is an exclusive in-room experience. If you’re not in that room while it’s happening, you can look at those whiteboards afterwards and they’re unintelligible. If you’re listening in or you’re even viewing that conversation from a camera in the room, you can’t understand those whiteboard scribblings.

In 2023, we’re looking at fully 60% to 80% of most knowledge workers, at least at some point every week, working remotely. We’re never all going to be back in that room. Therefore, the biggest request I get as a CIO — and it’s usually from senior executives — is related to that: ‘Ideation has stopped; innovation is going to grind to a halt because we can’t all sit in this room and whiteboard together.’

I have a different point of view. I think that perhaps if we can find a way to ideate in a hybrid mode or asynchronously, then we can suddenly take advantage of that 30% of our employee population whom we used to not engage. Now we can have 100% participation.

What are some of the ways you’re doing that?

Asynchronous work requires a steady-state set of content that people can interact with. It requires writing, for lack of a better term. It requires expressing ideas in a context that transcends space and time. And then, of course, not everybody likes to read, not everybody speaks the same language, so we also need tooling that makes recordings and that creates transcripts of those recordings, so that over the course of 24 hours, a global team that might be living in 15 different time zones and 30 different countries can all take part in contributing to a conversation.

We are using tooling that creates persistent ways of communicating so that, even if you’re not in the room where it happens, you can still understand what happened, have a voice in what happens in the future, and make your mark. There are other tools and ideas as well. Nutanix’s Head of Design, Satish Ramachandran, talks about the need to make organizational changes to create ecosystems of collaboration around a time zone radius, so that we treat our global workers more respectfully.

Back in the day when we would have critical meetings in the US time zone, we had another set of executives who were missing dinnertime or getting up at four in the morning to participate. Most of us learned when we were all working from home that that’s incredibly disrespectful to our families and ourselves. People don’t want to go back to that. And yet, those people are key contributors, so we need to find ways of ensuring that we’re respectful of all participants.

Parts three and four are about reducing context switching and focusing on automation and self-service. Why is that important? 

One thing that happens when you are working in multiple modes is that the work itself can become complex and the technologies that we use can become complex. The more that we personalize, the more we have people engaging in using technology from all different contexts — this creates the need to do a little bit of everything.

The question becomes: How can we deal with the complexity of a work environment that’s inclusive of consumer tech and public internet and yet also must be very performant in physical offices and needs to happen across time zones and SaaS applications and on-premises data centers and all of those things? It’s overwhelming even to talk about it!

When we have great complexity and high volumes, those are wonderful times to automate, to take those high-volume tasks, those complex tasks, break them down into components and hand them off to the machine. It’s the same principle behind assembly lines. It’s setting up employees to succeed using the right mix of technologies, processes, and methodologies.

The fifth part explores consumer technology experiences for hybrid work. How do companies benefit by integrating consumer technology into the hybrid work system?

I think one of the things we miss as employers is that over the last 15 years or so, technology has become fun. I’m a huge gamer. I love the art and the science and the capability and the interaction design that’s grown up around that space. I’m a huge proponent of mobile devices. All of these things blend serious technology with serious fun. There are all kinds of interactions that are available that are just super cool. So why do we have to be so 1980s and sad and serious in the workplace?

If we brought our sense of engagement and our sense of fun and our sense of pleasure in using those technologies to work, what could we achieve, particularly if we’re working in a company that’s making products for other human beings? There’s no rule against me sitting here in my gaming chair, using one of my gaming computers, to do work. I’m even curating my own visual experience. I have this really cool streamer camera that lets me show up beautifully. Even using that consumer tech to curate my appearance is one of the things that’s available to me, so why not have a little fun as I’m working? Why wouldn’t we enable our employees to be comfortable and feel good about themselves and how they’re showing up professionally?

There are multiple studies that show a direct correlation between employee happiness and employee productivity. In fact, many studies show that employees are about 15% more productive when they report their mood as being happy versus their mood as being sad. So why not? Why not have happier employees by using technology to give them the experiences that they enjoy, even while they’re working?

For more from Pfeiffer on the changing nature of work and her passion for developing the human side of technology, tune in to the Tech Whisperers podcast.

Collaboration Software, IT Leadership, Staff Management

Data is critical to success for universities. Data provides insights that support the overall strategy of the university. It can also help with specific use cases: from understanding where to invest resources and discovering new ways to engage pupils, to measuring academic outcomes and boosting student performance. Data also lies at the heart of creating a secure, Trusted Research Environment to accelerate and improve research.

Yet most universities struggle to collect, analyse, and activate their data resources. There are many reasons why.

For a start, data is often siloed according to the departments or functions it relates to. That means the dots between these datasets are never joined, and potentially valuable insights are missed.

This has not been helped by the fact that universities have traditionally lagged the private sector in terms of cloud adoption, a key technology enabler for effective data storage and analysis. One thing holding universities back has been a reluctance to move away from traditional buying models. Long-term CapEx agreements have helped universities manage costs, but such models are inflexible. In the age of the cloud, what’s needed is a more agile OpEx-based approach that enables universities to upgrade their data infrastructure as and when required.

Finally, the skills gap remains a challenge to the better use of data. Eighty-five percent of education leaders identify data skills as important to their organisation, but they currently fall 19% short of the skilled professionals required to meet their needs.

How can universities overcome these barriers? The first step is to put in place a robust data strategy. Each strategy will be different according to the unique needs of the university, but at a minimum it should include the following:

Evaluation of the current data estate to understand pinch points and siloes so these can start to be tackled.

Alignment of organisation strategy with technical requirements.

Evaluation of the cloud market and a cloud adoption roadmap to enable data transformation and agile, integrated data use.

A comprehensive upskilling programme to overcome data skills gaps.

As universities embark on this journey, finding the right partner will be critical. One option is to team up with a company like SoftwareONE, which has extensive experience in enabling data strategies for large organisations.

Significantly, SoftwareONE is an Amazon Web Services (AWS) Premier Consulting Partner, which means it can bring to bear the capabilities of one of the world’s leading cloud platforms. SoftwareONE adds value by optimising and automating AWS infrastructure as code, which makes it faster and less expensive for universities to get their cloud data programmes up and running. The company also offers a rapid, cost-effective, and secure path to building trusted cloud-based research environments. 

What’s more, partners like SoftwareONE can help address the skills challenge, and not only through automation. SoftwareONE helps to upskill IT teams at universities and provides a full infrastructure as a managed service. Whatever your organisation’s level of comfort with the cloud, SoftwareONE can help you leverage cloud-based data tools with ease.

For more information about how SoftwareONE can help build your data strategy click here.

Education and Training Software, Education Industry

Cybersecurity breaches can result in millions of dollars in losses for global enterprises and they can even represent an existential threat for smaller companies. For boards of directors not to get seriously involved in protecting the information assets of their organizations is not just risky — it’s negligent.

Boards need to be on top of the latest threats and vulnerabilities their companies might be facing, and they need to ensure that cybersecurity programs are getting the funding, resources and support they need.

Lack of cybersecurity oversight

In recent years boards have become much more engaged in security-related issues, thanks in large part to high-profile data breaches and other incidents that brought home the real dangers of having insufficient security. But much work remains to be done. The fact is, at many organizations board oversight of cybersecurity is unacceptable.

Research has shown that many boards are not prepared to deal with a cyberattack, with no plans or strategies in place for cybersecurity response. Few have a board-level cybersecurity committee in place.

More CIOs are joining boards

On a positive note, more technology leaders including CIOs are being named to boards, and that might soon extend to security executives as well. Earlier this year the Securities and Exchange Commission (SEC) proposed amendments to its rules to enhance and standardize disclosures regarding cybersecurity risk management, strategy, governance, and incident reporting by public companies.

This includes requirements for public companies to report any board member’s cybersecurity expertise, reflecting a growing understanding that the disclosure of cybersecurity expertise on boards is important when potential investors consider investment opportunities and shareholders elect directors. This could lead to more CISOs and other security leaders being named to boards.

Greater involvement of IT and security executives on boards is a favorable development in terms of better protecting information resources. But in general, boards need to become savvier when it comes to cybersecurity and be prepared to take the proper actions.

Asking the right questions

The best way to gain knowledge about security is to ask the right questions. One of the most important is: which IT assets is the organization securing? Knowing the answer requires the ability to monitor the organization’s endpoints at any time, identify which systems are connecting to the corporate network, determine which software is running on devices, and so on.

Deploying reliable asset discovery and inventory systems is a key part of gaining a high level of visibility to ensure the assets are secure.
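As a simple illustration of that visibility check, the sketch below compares devices observed on the network against an approved inventory and flags the gaps. The data and fields are invented for the example; in practice they would come from discovery, EDR, and CMDB tooling.

```python
# Illustrative asset-visibility check: flag endpoints seen on the network
# that are missing from the approved inventory, and vice versa.
# Data is invented for the example.

known_inventory = {
    "00:1a:2b:3c:4d:01": {"owner": "finance", "os": "Windows 11"},
    "00:1a:2b:3c:4d:02": {"owner": "eng",     "os": "Ubuntu 22.04"},
}

observed_on_network = {
    "00:1a:2b:3c:4d:01",
    "00:1a:2b:3c:4d:02",
    "00:1a:2b:3c:4d:99",   # unknown device, not in the inventory
}

unknown = observed_on_network - set(known_inventory)
missing = set(known_inventory) - observed_on_network

print("Unknown devices to investigate:", sorted(unknown))
print("Inventoried devices not seen (possibly offline):", sorted(missing))
```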

Another important question to ask is: how is the organization protecting its most vital resources? These might include financial data, customer records, source code for key products, encryption keys and other security tools, and similar assets.

Not all data is equal from a security, privacy and regulatory perspective, and board members need to fully understand the controls in place to secure access to this and other highly sensitive data. Part of the process for safeguarding the most vital resources within the organization is managing access to these assets, so boards should be up to speed on what kinds of access controls are in place.

Board members also need to ask which entities pose the greatest security risks to the business at any point in time. The challenge here is that the threat vectors are constantly changing. But that doesn’t mean boards should settle for a generic response.

Assessing threats from the inside out

A good assessment of the threat landscape includes looking not just at external sources of attacks but within the organization itself. Many security incidents originate via employee negligence and other insider threats. So, a proper follow-up question would be to ask what kind of training programs and policies the company has in place to ensure that employees are practicing good security hygiene and know how to identify possible attacks such as phishing.

Part of analyzing the threat vector also includes inquiring about what the company looks like to attackers and how they might carry out attacks. This can help in determining whether the organization is adequately protected against a variety of known tactics and techniques employed by bad actors.

In addition, board members should ask IT and security executives about the level of confidence they have in the organization’s risk-mitigation strategy and its ability to quickly respond to an attack. This is a good way to determine whether the security program thinks it has adequate resources and support to meet cybersecurity needs, and what needs to be done to enhance security via specific investments.

It’s most effective when the executives come prepared with specific data about security shortfalls, such as the number of critical vulnerabilities the company has faced, how long it takes on average to remediate them, the number and extent of outages due to security issues, security skills gaps, etc.
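For instance, the kind of figures that make such a briefing concrete can be computed directly from remediation records, as in this simplified sketch (the record format is an assumption for illustration):

```python
# Simplified sketch: compute board-ready security metrics from
# remediation records. The record format is an assumption.

from datetime import date

vulns = [
    {"severity": "critical", "found": date(2023, 1, 3),  "fixed": date(2023, 1, 20)},
    {"severity": "critical", "found": date(2023, 2, 1),  "fixed": date(2023, 2, 10)},
    {"severity": "high",     "found": date(2023, 2, 15), "fixed": date(2023, 3, 30)},
]

critical = [v for v in vulns if v["severity"] == "critical"]
days_to_fix = [(v["fixed"] - v["found"]).days for v in critical]

print("Critical vulnerabilities this period:", len(critical))
print("Average days to remediate:", sum(days_to_fix) / len(days_to_fix))
```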

In the event of an emergency

Finally, board members should ask what the board’s role should be in the event of a security incident. This includes the board’s role in determining whether to pay a ransom following a ransomware attack, how board members will communicate with each other if corporate networks are down, or how they will handle public relations after a breach, for example.

It has never been more important for boards to take a proactive, vigilant approach to cybersecurity at their organizations. Cyberattacks such as ransomware and distributed denial of service are not to be taken lightly in today’s digital business environment where an outage of even a few hours can be extremely costly.

Boards that are well informed about the latest security threats, vulnerabilities, solutions and strategies will be best equipped to help their organizations protect their valuable data resources as well as the devices, systems and networks that keep business processes running every day.

Want to learn more? Check out this Cybersecurity Readiness Checklist for Board Members.

Risk Management


Education is changing. In part, this shift is driven by students, who increasingly demand virtual and hybrid learning experiences that better match the ways they like to consume content at home. Meanwhile, virtual education has become an essential element of resilience for educational institutions by ensuring that students don’t fall behind during closures.

In the schools and universities of tomorrow, hybrid and virtual learning will play a central role in enabling inclusive education that’s focused on the unique needs of individual students and better able to drive engagement at all levels. As a result, student outcomes will likely improve. Evidence from corporate training programmes suggests that this could be the case, demonstrating that virtual learning boosts retention rates by 25% to 60%, compared to 8% to 10% for traditional methods.

However, as schools and universities make the move to virtual and hybrid learning, many are encountering barriers that are slowing progress considerably.

The key challenge is one of complexity. The average number of edtech tools in schools is over 1,400, and IT teams will likely struggle to ensure the efficacy of such a large number of systems. There are also questions around the impact on students. With no easy way to monitor student engagement, there is no clear path to optimising virtual and hybrid experiences. Similarly, a lack of necessary features and capabilities in many of the tools, such as the ability to combine live, real-time, and on-demand video functionality, means that institutions can struggle to offer the range of learning experiences necessary to tailor virtual learning to the needs of different students. 

Overcoming these barriers is crucial for educators, for the simple reason that doing so unlocks a range of benefits. For one, the curriculum is extended to any location, and schools can draw on a talent pool of educators from anywhere with a good broadband connection. Virtual and hybrid learning enables both global and remote learning and delivers accessibility and localisation for learners.

Of course, there are still some people for whom broadband access is a problem. But if this gap is closed, then the approach unlocks a 24/7 model for learning for all, where content is always available to students, and they can learn in a self-paced, asynchronous manner. Additionally, virtual and hybrid learning can support a range of content formats for self-serve learners, such as video on demand (VoD). This is a much more tailored approach based on providing personalised learning journeys for students. And of course, virtual experiences are available regardless of whether schools and universities are open or not, helping to build resilience.

Thanks to the cloud, the barriers currently holding institutions back can be overcome. Kaltura’s Video Experience Cloud for Education is a case in point. Kaltura is a cloud company focused on providing compelling video capabilities to organisations.

Kaltura’s Video Experience Cloud for Education powers real-time, live, and on-demand video for online development and virtual learning. Its products include virtual classroom, LMS video, video portal, lecture capture, video messaging, virtual event platform, and other video solutions — all designed to create engaging, personalised, and accessible experiences during class and beyond.

Kaltura content, technology, and data are fully interoperable and integrate seamlessly with all major learning management systems, enabling schools to deploy quickly and get started transforming learning for their students and staff. The Kaltura Video Experience Cloud for Education helps drive interaction, build community, boost creativity, and improve learning outcomes.

Built on the Amazon Web Services (AWS) Cloud, Kaltura provides an elastic, reliable, performant, and secure platform that can enable schools and universities to accelerate their move to virtual and hybrid learning. 

For more information on how to use video to drive student engagement online, click here to discover Kaltura’s Video Experience Cloud for Education.

Education and Training Software, Hybrid Cloud, Virtualization

The digital transformation bandwagon is a crowded one, with enterprises of all kinds heeding the call to modernize. The pace has only quickened in a post-pandemic age of enhanced digital collaboration and remote work. Nonetheless, 70% of digital transformation projects fall short of their goals, as organizations struggle to implement complex new technologies across the enterprise.

Fortunately, businesses can leverage AI and automation to better manage the speed, scale, and complexity of the changes that come with digital transformation. In particular, artificial intelligence for IT operations (or AIOps) platforms can be a game changer. AIOps solutions use machine learning to connect and contextualize operational data for decision support or even auto-resolution of issues. This simplifies and streamlines the transformation journey, especially as the enterprise scales up to larger and larger operations.
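
To make that idea concrete, here is a minimal, illustrative sketch of one AIOps building block: correlating a stream of raw monitoring alerts into a smaller set of contextualized incidents, so that operators (or an auto-resolution workflow) see signal rather than noise. The alert fields, grouping rule, and sample data are assumptions chosen for illustration, not any particular vendor’s data model.

```python
# Sketch: collapse raw alerts that share a service/symptom fingerprint within
# a time window into single incidents, then rank incidents by severity.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Alert:
    timestamp: float   # seconds since epoch
    service: str       # e.g. "checkout-api" (illustrative)
    symptom: str       # e.g. "high_latency" (illustrative)
    severity: int      # 1 (low) .. 5 (critical)

def correlate(alerts, window_seconds=300):
    """Group alerts with the same fingerprint arriving in the same window."""
    incidents = defaultdict(list)
    for a in sorted(alerts, key=lambda a: a.timestamp):
        bucket = int(a.timestamp // window_seconds)
        incidents[(a.service, a.symptom, bucket)].append(a)
    # Surface the most severe, noisiest incidents first.
    return sorted(
        incidents.values(),
        key=lambda group: (max(a.severity for a in group), len(group)),
        reverse=True,
    )

if __name__ == "__main__":
    raw = [
        Alert(1000, "checkout-api", "high_latency", 4),
        Alert(1030, "checkout-api", "high_latency", 4),
        Alert(1060, "checkout-api", "high_latency", 5),
        Alert(1100, "inventory-db", "disk_usage", 2),
    ]
    for group in correlate(raw):
        first = group[0]
        print(f"{first.service}/{first.symptom}: {len(group)} alerts, "
              f"max severity {max(a.severity for a in group)}")
```

Production AIOps platforms apply far richer machine learning, such as topology-aware correlation and learned fingerprints, but the principle is the same: reduce many events to a few actionable, contextualized incidents.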

The benefits of automation and AIOps can only be realized, however, if companies choose solutions that put the power within reach – ones that package up the complexities and make AIOps accessible to users. And even then, teams must decide which business challenges to target with these solutions. Let’s take a closer look at how to navigate these decisions and choose the solutions and use cases that can best leverage AI for maximum impact in the digital transformation journey.

Finding the right automation approach

Thousands of organizations in every part of the world see the advantages of AI-driven applications for streamlining their IT and business operations. A “machine-first” approach frees staff from large portions of tedious, manual tasks while reducing risk and boosting output.

AIOps for decision support and automated issue resolution in the IT department can further add to the value derived from AI in an organization’s digital transformation.

Yet conversations with customers and prospects invariably touch on a shared complaint: Enterprise leaders know AI is a powerful ally in the digital transformation journey, but the technology can seem overwhelming, and scoping and shopping for all the components takes too long. They’re looking for vendors to offer easier “on-ramps” to digital transformation: SaaS options and quick-install packages that feature just the functions needed for a specific need or use case, so they can leap into their intelligent automation journey.

Ultimately, a highly effective approach for leveraging AI in digital transformation involves so-called out-of-the-box (OOTB) solutions that package up the complexity as pre-built knowledge tailored to the use cases that matter most to the organization.

Choosing the right use cases

Digital transformations are paradoxical in that you’re modernizing the whole organization over the course of time, but it’s impossible to “boil the ocean” and do it all at once. That’s why it’s so important to choose highly strategic and impactful use cases to get the ball rolling, demonstrate early wins, and then expand more broadly across the enterprise over time. 

OOTB solutions can help pare down the complexity. But it is just as important to choose the right use cases to apply such solutions. Even companies that know automation and AIOps are necessary to optimize and scale their systems can struggle with exactly where to apply them in the enterprise to reap the most value.

By way of a cheat sheet, here are four key areas that are ripe for transformation with AI, and where the value of AIOps solutions will shine through most clearly in the form of operational and revenue gains:

IT incident and event management – A robust AIOps solution can prevent outages and enhance event governance via predictive intelligence and autonomous event management. Once implemented, such a solution can render a 360° view of all alerts across all enterprise technology stacks, leveraging machine learning to remove unwanted event noise and autonomously resolve business-critical issues.

Business health monitoring – A proactive AI-driven monitoring solution can manage the health of critical processes and business transactions, such as in the retail industry, for enhanced business continuity and revenue assurance. AI-powered diagnosis techniques can continually check the health of retail stores and e-commerce sites and automatically diagnose and resolve unhealthy components.

Business SLA predictions – AI can be used to predict delays in business processes, give ahead-of-time notifications, and provide recommendations to prevent outages and Service Level Agreement (SLA) violations. Such a platform can be configured for automated monitoring, with timely anomaly detection and alerts across the entire workload ecosystem (a simple sketch of this kind of prediction follows this list).

IDoc management for SAP – Intermediate Document (IDoc) management breakdowns can slow the transfer of data between SAP and other systems. An AI platform with intelligent automation techniques can identify, prioritize, and then autonomously resolve issues across the entire IDoc landscape, thereby minimizing risk, optimizing supply chain performance, and enhancing business continuity.
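
As a concrete illustration of the SLA-prediction idea above, the following minimal sketch fits a simple linear trend to historical batch-job runtimes and warns ahead of time when the next run is projected to breach its SLA. The job, runtimes, and threshold are invented for illustration; a real AIOps platform would use far richer models and live operational data.

```python
# Sketch: forecast the next runtime of a recurring job from its history and
# raise a proactive warning if the forecast exceeds the SLA.
def linear_trend(values):
    """Least-squares slope and intercept for y = slope*x + intercept, x = 0..n-1."""
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict_next(values):
    slope, intercept = linear_trend(values)
    return slope * len(values) + intercept

if __name__ == "__main__":
    # Hypothetical nightly settlement job runtimes in minutes; SLA is 60 minutes.
    runtimes = [38, 41, 45, 47, 52, 55, 58]
    sla_minutes = 60
    forecast = predict_next(runtimes)
    if forecast > sla_minutes:
        print(f"Warning: next run forecast {forecast:.1f} min exceeds the "
              f"{sla_minutes} min SLA; raise a proactive ticket.")
    else:
        print(f"Next run forecast {forecast:.1f} min is within the SLA.")
```

The same ahead-of-time pattern – forecast, compare against the agreed threshold, notify before the breach occurs – underpins the SLA-prediction capabilities described above.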

Conclusion

Organizations pursuing digital transformation are increasingly benefiting from enhanced AI-driven capabilities like AIOps that bring new levels of IT and business operations agility to advanced, multi-cloud environments.  As these options become more widespread, enterprises at all stages of the digital journey are learning the basic formula for maximizing the return on these technology investments: They’re solving the complexity problem with SaaS-based, pre-packaged solutions; and they’re becoming more strategic in selecting use cases ideally suited for AIOps and the power of machine learning.

To get up and running fast at any stage of your digital journey, visit Digitate to learn more.

Digital Transformation, IT Leadership