Customer relationship management (CRM) software provider Salesforce has signed a definitive agreement to acquire cloud-based point-of-sale (PoS) software vendor PredictSpring, augmenting its existing Customer 360 capabilities and strengthening its position in the retail industry.

The California-headquartered startup’s PoS systems enable store associates to engage shoppers and complete transactions from anywhere in the store via a mobile device, Salesforce said in a statement.

Additionally, the PoS software includes features that support store operations, such as in-store fulfillment, maintaining customer profiles, and letting shoppers order items online that are not available for immediate purchase in store.

“The combined talent, resources, and innovation of Salesforce and PredictSpring will empower brands and retailers to drive frictionless and personalized engagement across all touchpoints,” Jeff Amann, executive vice president of Salesforce Industries, said in a statement.

The startup is an existing Salesforce ecosystem partner with several strategic retail customers and is already integrated with Commerce Cloud and Service Cloud, Amann added.

The partnership dates back to 2019, when Salesforce added new tools to its Commerce Cloud.

PredictSpring, founded in 2013, counts Salesforce Ventures among its investors and is led by its founder, Nitin Mangtani.

Before starting PredictSpring, Mangtani worked as a product manager at Google leading the shopping merchant and search infrastructure team. Mangtani also led the Google Retail Promotions project, which was aimed at optimizing mobile conversions.

PredictSpring also has investments from Felicis Ventures, Novel TMT Ventures, and Beanstalk Ventures. The company raised $16 million in a Series B funding round from its existing investors and before that, it had raised $11.4 million in a Series A financing round led by Felicis Ventures.

While Salesforce didn’t disclose the transaction value of the acquisition, it said that it will hire the entire PredictSpring team, approximately 31 employees.

The acquisition is expected to close in the third quarter of Salesforce’s fiscal year 2025; the company’s fiscal year runs from February through January.

Earlier in the year, Salesforce was reportedly attempting to acquire enterprise data management software provider Informatica, but the talks fell through as the two companies reportedly couldn’t agree on the terms of the deal. Other notable Salesforce acquisitions include Slack, MuleSoft, and Tableau, as well as Slack-bot maker Troops.ai in 2022.

Enterprises seeking to thrive in an innovation-centric economy are capitalizing on multi-cloud strategies to leverage unique cloud services. These services help accelerate initiatives supporting AI, data processing, and other pursuits, such as driving compute to the edge.

That’s all well and good – until the CIO gets the bill.

In a survey of more than 1,000 global IT decision makers conducted by Forrester Research with HashiCorp, 94% of respondents said their organization was currently paying for avoidable cloud expenses.1

Meanwhile, IDC’s Archana Venkatraman, research director for cloud data management, Europe, adds: “While cloud adoption has accelerated, cloud governance and control mechanisms haven’t kept pace. As a result, up to 30% of cloud spend is categorized as ‘waste’ spend.”2

Examples of cloud cost surprises

Even inside the controlled environment of an enterprise’s datacenter, it’s not always easy for IT staffers to keep track of resource utilization. Now imagine the challenge of tracking usage from dozens of engineers in a multi-cloud environment where each service provider has its own tooling, processes, and procedures. Being able to bring all that data into a single view is extremely difficult.

Here are the top three cloud cost surprises that CIOs are likely to encounter.

Unused resources: Over time, cloud environments inevitably sprawl, leading to unused storage volumes, idle databases and zombie test instances.

Modernization: Due to a lack of visibility and an unclear upgrade path, businesses are slow to adopt newer instance types that offer efficiency gains of 20% or more. Given the hundreds of instance types available, it’s easy to understand why.

Anomalies: The biggest cloud cost surprise is the one that comes out of nowhere – an unexpected spike that could be caused by a variety of factors – misconfigurations, orphaned instances, runaway crypto-mining malware, or unauthorized deployments.
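The core of such spike detection can be sketched in a few lines. The following is a hypothetical illustration, not how any particular FinOps product works: it flags any day whose spend deviates sharply from a trailing baseline.

```python
from statistics import mean, stdev

def detect_spikes(daily_spend, window=7, threshold=3.0):
    """Flag days whose spend deviates sharply from the trailing window.

    Illustrative only: real FinOps tools use far richer models
    (seasonality, per-service baselines), but the core idea is the same.
    """
    anomalies = []
    for i in range(window, len(daily_spend)):
        baseline = daily_spend[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        # Guard against a perfectly flat baseline (zero variance).
        if sigma == 0:
            sigma = 0.01 * max(mu, 1.0)
        if (daily_spend[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A steady ~$1,000/day bill with one runaway day.
spend = [1000, 1020, 990, 1010, 1005, 995, 1015, 1000, 4800, 1010]
print(detect_spikes(spend))  # -> [8]
```

Production systems layer in seasonality and per-service baselines, but even this crude z-score approach would catch a runaway crypto-mining instance the day its bill spikes rather than weeks later on the invoice.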

Enter FinOps

FinOps has become the de facto way in which enterprises manage cloud cost uncertainties, typically with machine learning used to help deliver insights to DevOps, CloudOps and the C-suite. What’s more, FinOps provides a common language between developers, infrastructure teams, and business leaders to help show return on investment per project or set of services.

A disciplined FinOps practice, coupled with tools like OpsNow, helps in three ways:

FinOps tools can discover unused or orphaned resources and enable organizations to “rightsize” their cloud deployments.

Through the use of anomaly detection, FinOps provides an early warning system that alerts IT teams to usage spikes and budget overruns before they get out of control.

Cloud environments are constantly changing, so FinOps is never done; it’s an ongoing process that can help organizations optimize their cloud spend over time, do a better job of budgeting and forecasting, as well as avoid those billing surprises.
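As a rough illustration of the rightsizing idea, here is a minimal sketch; the instance names, size ladder, and 40% peak-CPU threshold are hypothetical, and real tools weigh many more signals (memory, network, burst patterns).

```python
def rightsize(instances, peak_threshold=40.0):
    """Suggest downsizing instances whose peak CPU stays under a threshold.

    Hypothetical illustration: real rightsizing also considers memory,
    I/O, and sustained vs. burst load, not just one CPU metric.
    """
    smaller = {"xlarge": "large", "large": "medium", "medium": "small"}
    recs = []
    for name, size, peak_cpu in instances:
        if peak_cpu < peak_threshold and size in smaller:
            recs.append((name, size, smaller[size]))
    return recs

fleet = [
    ("web-1", "xlarge", 22.5),    # oversized: peak CPU only 22.5%
    ("db-1", "large", 78.0),      # well utilized, leave alone
    ("batch-1", "medium", 12.0),  # oversized
]
for name, old, new in rightsize(fleet):
    print(f"{name}: {old} -> {new}")
```

The same pattern generalizes to the "unused resources" surprise: a resource with zero utilization over the window simply falls below any threshold and becomes a candidate for termination rather than downsizing.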

The OpsNow approach

There are many options for FinOps, ranging from developing tools in-house to purchasing FinOps platforms. Managing your own platform and tooling presents CIOs with upfront investment and ongoing maintenance challenges. In the fast-moving world of the cloud, that is a risk many prefer not to take.

OpsNow offers a different take, allowing you to deploy without the investment or maintenance burden. Coupled with its methodologies and metrics, you have a way to monitor and track your success.

OpsNow, a spinoff from Bespin Global, provides a SaaS platform that uses a “shared savings” model: customers pay a small percentage of their actual savings and can otherwise use the platform’s full breadth of capabilities at no cost.
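Back-of-the-envelope, a shared-savings model works like this (the 20% fee rate below is a hypothetical example, not OpsNow’s actual pricing):

```python
def shared_savings_fee(baseline_spend, optimized_spend, fee_rate=0.20):
    """Fee is a percentage of realized savings; no savings, no fee.

    The 20% rate is a hypothetical example for illustration only.
    """
    savings = max(baseline_spend - optimized_spend, 0.0)
    fee = savings * fee_rate
    net_savings = savings - fee
    return savings, fee, net_savings

# A $100k/month baseline optimized down to $75k/month.
savings, fee, net = shared_savings_fee(100_000.0, 75_000.0)
print(savings, fee, net)  # -> 25000.0 5000.0 20000.0
```

The appeal of the structure is that the vendor’s incentive is aligned with the customer’s: if the platform finds no waste, the customer pays nothing.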

The OpsNow platform provides a single pane of glass across multi-cloud environments to identify unused resources, recommend capacity adjustments, deliver AI-driven optimization insights, and generate no-risk cost savings through its AutoSavings tool, which beats even the multi-year commitment discounts offered by the cloud providers.

Advanced cost analytics and machine learning also support a more efficient approach to cloud spend, letting IT leaders innovate without worrying about unexpected costs. Automation is one aspect, but the savings typically add up in the ability to closely model true usage and keep coverage and utilization aligned with the forecast.

Begin your journey to better cloud cost efficiency now.

1 HashiCorp, Forrester Research Report: Unlocking Multicloud’s Operational Potential, 2022
2 IDC, IDC Blog, The Era of FinOps: Focus is Shifting from Cloud Features to Cloud Value, February 2023


Public cloud services provider Oracle on Monday said it will launch a new cloud region in Serbia, making it the first among hyperscalers, ahead of rivals Microsoft, Amazon Web Services (AWS), Google, and IBM, to offer a hyperscale data center in the Eastern European country.

The new cloud region, which will serve Southeast Europe, will be located in the village of Jovanovac, near Serbia’s fourth-largest city, Kragujevac, Oracle said in a statement.

The Serbian government has plans to develop Kragujevac into an innovation hub, earmarking nearly 56,000 square meters and €120 million (US$130 million) for the entire effort, which is to be carried out in three phases.

The new region will also support the increasing cloud computing demands of private and public sector organizations throughout Serbia, Oracle said.

Oracle will offer over 100 Oracle Cloud Infrastructure (OCI) services and applications, including Oracle Autonomous Database, MySQL HeatWave Database Service, and Oracle Container Engine for Kubernetes via the upcoming region.

Other Oracle cloud regions in Europe are located in cities including Paris, Marseille, Frankfurt, Milan, Amsterdam, Madrid, Stockholm, Zurich, London, and Newport. Also, the company runs two government cloud regions in the UK.

Most rival hyperscalers have presence in cities such as London, Frankfurt, Paris, Milan, Zurich, Stockholm and Madrid.

Microsoft is planning to open new regions in Vienna, Copenhagen, Helsinki, Athens, Milan, Warsaw, and Madrid, the company’s website shows.

Oracle continues to invest in cloud regions

Oracle has continued to invest in expanding its cloud region footprint in an effort to compete with rival hyperscalers including AWS, Microsoft and Google Cloud.

In addition to its existing regions in Europe, Oracle has announced plans to launch two sovereign cloud regions on the continent, located in Germany and Spain.

Last month, the company announced its intent to open a second region in Singapore to meet demand.

Oracle also has plans to invest about $2.4 billion every quarter for the next few quarters on cloud infrastructure, CEO Safra Catz said during an earnings call for the quarter that ended in November.

In December last year, the company launched a public cloud region in Chicago, its fourth in the US after Virginia, California, and Arizona.


There’s no denying the fact that cloud technology is headed in many different directions, all aimed at providing rapid, scalable access to computing resources and IT services.

Yet as cloud technology evolves, many organizations are becoming more thoughtful and intentional in their transformation journey as they look to close the gap between simply running on the cloud and creating enterprise-wide value, observes Cenk Ozdemir, cloud and digital leader at business consulting firm PwC. “Organizations are really focused on achieving the elusive ROI of cloud that only a minority have been able to secure,” he says.

Here’s a quick rundown of the top enterprise cloud trends that promise to lead to greater ROI through innovation and enhanced performance.

1. AI/ML takes center stage

All the major cloud providers are rolling out AI/ML features and products, many designed for use with their core cloud offerings, says Scott W. Stevenson, technology partner at national law firm Culhane Meadows. He notes that most providers are also using AI/ML to improve provisioning of their own services.

While no one wants to be left behind if the promises of AI/ML hold true, there are varying levels of concern about reliability, security, and bias, particularly on the customer side, Stevenson says.

“There’s little doubt that adoption will continue at a fast pace overall, but larger enterprise customers — particularly in highly regulated industries — will be more measured,” he observes. Yet Stevenson doesn’t expect to see many enterprises sitting on the sideline. “It may be that the lessons they learned when migrating to cloud solutions in recent years will serve as a partial road map for adoption of AI/ML technologies — although on an accelerated timeline.”

Technology-driven organizations that prioritize innovation and digital transformation will be the most likely early AI/ML adopters in the cloud, says Michael Ruttledge, CIO and head of technology services at Citizens Financial Group. “Additionally, organizations that are data-driven and rely heavily on data analysis and insights will be able to leverage the best AI/ML services from different providers to enhance decision-making, automate processes, and personalize customer experiences,” he predicts.

Ruttledge notes that his enterprise’s cloud and AI/ML transition is driving stability, resiliency, sustainability, and speed to market. “Our AI/ML capabilities are increasing our ability to stay lean and drive insights into our internal and external customer services,” he says.

2. Industry clouds fuel innovation

Industry clouds are composable building blocks — incorporating cloud services, applications, and other key tools — built for strategic use cases in specific industries. Industry clouds enable greater flexibility when allocating resources, helping adopters make strategic choices on where to differentiate, explains Brian Campbell, a principal with Deloitte Consulting. “This ecosystem is evolving rapidly, driving the need to consistently monitor what exists and what works.”

By leveraging the ever-expanding number of cloud players serving industry-specific business needs in a composable way, industry clouds provide an opportunity to accelerate growth, efficiency, and customer experience. “Allowing for further differentiation on top of these solutions forges a close collaboration between business and technology executives on where to focus differentiation and resources,” Campbell says.

Enterprises looking to lead or stay ahead of their industry peers drove the first wave of industry cloud adopters. The success experienced by those organizations generated a rapid follower wave sweeping across a broader market. “Industry clouds are also leveling the playing field, so midmarket clients now have access to advanced capabilities they no longer need to build internally from the ground up to compete against their larger global competitors,” Campbell says.

3. Modernizing core apps for the cloud

Most large enterprises have sought quick wins on their digital transformation and cloud adoption journeys. They’ve brought smaller, less critical workloads to the cloud, containerizing legacy applications to make them more cloud friendly, and have adopted a cloud-first strategy for any new application development, observes Eric Drobisewski, senior enterprise architect at Liberty Mutual Insurance.

Yet an early emphasis on quick wins has left many vital business applications and related data stuck in enterprise data centers or private cloud ecosystems still in need of eventual migration. “Often, these workloads are tightly coupled to costly hardware and software [platforms] that were built at a time when all that was available was a vertically bound architecture,” Drobisewski explains.

Drobisewski warns that continuing to maintain parallel ecosystems with applications and data splintered across data centers, private clouds, public clouds, physical infrastructures, mainframes, and virtualized infrastructure is both complex and costly. “Simplification through modernization will reduce costs, address operational complexity, and introduce horizontal scale and elasticity to dynamically scale to meet emerging business needs,” he advises.

4. Making the most of the multicloud hybrid-edge continuum

The multicloud hybrid-edge continuum marks a crucial step forward for enterprises looking to drive ongoing reinvention by leveraging the convergence of disparate technologies. “Enterprises must focus on defining their business reinvention agenda and using the cloud continuum as an operating system to bring together data, AI, applications, infrastructure, and security to optimize operations and accelerate business value,” says Nilanjan Sengupta, cloud and engineering lead with Accenture Federal Services.

This trend will enable organizations to steer clear of an overreliance on a single public-cloud provider, Sengupta says. “It satisfies a multitude of business demands while unlocking innovation advancements in data, AI, cyber, and other fields, aligning capabilities to mission and business outcomes.” Hybrid architectures are rapidly becoming the only viable option for most organizations, he notes, since they provide the flexibility, security, and agility necessary to adapt to rapidly changing business needs.

The multicloud hybrid-edge continuum will impact CIOs and their enterprises by forcing them to address several key issues holistically, such as determining the right operating model, integrating and managing different technology platforms, finding the right talent, and managing costs, Sengupta says. “CIOs will need to develop strategies and roadmaps to transition to hybrid cloud environments, while also fostering a culture of agility and continuous innovation within their organizations,” he adds.

5. Reaping the rewards of cloud maturity

After years of aggressive adoption, the cloud is now firmly embedded in the IT and enterprise mainstream. “Cloud maturity is not something an organization gains overnight, but when taken seriously, it becomes a distinct competitive advantage,” says Drew Firment, vice president of enterprise strategies and chief cloud strategist at online course and certification firm Pluralsight.

Firment believes that cloud maturity typically starts with creating a Cloud Center of Excellence (CCoE) to establish a clear business intent, and gain experience with a single cloud before adding others. “Once an organization masters one cloud environment and is firmly established in the cloud-native maturity level, they can begin using other cloud providers for specific workloads,” he explains.

For example, Firment says, a customer service application might be built on Amazon Web Services while leveraging artificial intelligence services from Google Cloud Platform. “The goal is to align the strengths of each cloud provider to better support your specific business or customer needs.”

A purposeful and deliberate approach to a multicloud strategy gives CIOs and their organizations great power, Firment says. “While many technologists in 2023 will be focused on investments in multicloud tools like Kubernetes and Terraform, leaders will be focused on investing in the multicloud fluency of their workforce.”

6. The rise of FinOps and cloud cost optimization

Cloud FinOps offers a governance and strategic framework for organizations to manage and optimize their cloud expenditures transparently and effectively.

“By implementing a holistic FinOps strategy, an organization can drive financial accountability by increasing the visibility of cloud spending across the organization, reducing redundant services, and forecasting future cloud expenditures, allowing for more accurate planning,” says Douglas Vargo, vice president, emerging technologies practice lead at IT and business services firm CGI. “Driving more visibility and fiscal accountability around cloud costs will enable organizations to refocus that spending on innovation initiatives and realize more business value for their cloud investments.”

Organizations that effectively deploy FinOps governance and strategies will reduce cloud costs by as much as 30%, Vargo predicts, enabling them to re-invest those savings into innovation initiatives. “An effectively executed FinOps framework will improve the ROI of cloud spend and open up funding for other expenditures such as increased innovation funding,” he adds.

7. Hyperscalers adjust to slower growth

The three major hyperscalers — Amazon Web Services, Microsoft Azure, and Google Cloud Platform — have grown rapidly over the past few years, observes Bernie Hoecker, partner and enterprise cloud transformation leader with technology research and advisory firm ISG. Meanwhile, many enterprises have accelerated their digital transformation to meet the emerging demands created by remote work teams, as well as to provide customers with improved digital experiences.

“In many cases, however, enterprises overinvested in IT and cloud capabilities,” he notes, “and they’re now focused on optimizing the investments they’ve made rather than moving new workloads to the cloud.”

Yet enterprises weren’t the only overinvestors. “The Big Three hyperscalers also are going through some rightsizing after each of them overhired during the pandemic, and are now forced to deal with some bloat in their workforce,” Hoecker says. He reports that Amazon recently cut 9,000 more jobs in addition to the 18,000 it announced in January. Microsoft laid off 10,000 employees in January, and Google, among other cost-cutting measures, has dismissed 12,000 staffers.


Like most CIOs, you’ve no doubt leaned on ROI, TCO, and KPIs to measure the business value of your IT investments. Maybe you’ve even surpassed expectations in each of these yardsticks.

Those Three Big Acronyms are still important for fine-tuning your IT operations, but success today is increasingly measured in business outcomes. Put another way: Did you achieve the desired results for your IT investments?

For more than a decade, IT departments derived business value from cloud computing—public, private and maybe hybrid. Of late, concerns about the public “cloud-first” approach have emerged to challenge business value and skewer ROI, TCO and KPIs. It has also pulled back the curtain on a critical reality: IT profiles are much more complex than they once were.

A more thoughtful approach to procuring and managing assets is needed to help hurdle the challenges posed by those diverse estates. To understand how to get there, it helps to first unpack how we got here.

When Diminishing Returns Become Budget Busters

For years enterprises scrambled to build applications in public cloud environments; there was legitimate business value in rapid innovation, deployment and scalability, as well as unfettered access to more geographical regions.

“Cloud-first strategy” became a cure-all for datacenter impediments, as well as an IT leader’s tentpole for digital transformation.

More recently, some organizations have reported diminishing returns from their public cloud implementations. Some companies calculated savings after moving workloads from public clouds back on-premises, a practice known as cloud repatriation. Others conducted apples-to-apples comparisons of public cloud versus on-premises costs.

In some instances, poor implementation and faulty configurations were the culprits for deteriorating ROI, TCO and KPI values. Collectively these factors have dulled the initial sheen of agility and innovation around the public cloud.

The reality is that the decision to put applications in the public cloud or on on-premises systems is not an either-or argument; rather, it requires a nuanced conversation, as consultant Ian Miell points out in this sober assessment.

Smart Workload Placement is Key

Miell is right. The real question about where to allocate applications to generate business value is which location is most appropriate for each workload. Because, again, IT environments are far more complex these days. They’ve become multicloud estates.

To accommodate an accrual of disparate applications, you’re likely running a mix of public (probably more than one) and (maybe) private clouds in addition to your traditional on-premises systems. You might even operate out of a colo facility for the benefits cloud adjacency affords you in reducing latency. Maybe you manage edge devices, too.

Workload placement is based on several factors, including performance, latency, costs, and data governance rules, among other variables. How, where and when you opt to place workloads helps determine the business value of your IT investments.

For example, you may elect to place a critical HR application on-premises for data locality rules that govern in which geographies employee data can run. Or perhaps you choose to offload an analytics application to the public cloud for rapid scalability during peak traffic cycles. And maybe you need to move an app to the edge for speedier data retrieval.
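Those placement factors can be captured as simple rules. The sketch below uses a deliberately simplified, hypothetical decision order (data residency first, then latency, then elasticity); it is an illustration of the idea, not a recommendation engine.

```python
def place_workload(workload):
    """Pick a location from the factors the article lists.

    Hypothetical rule order for illustration: residency constraints
    dominate, then latency targets, then burst/scalability needs.
    """
    if workload.get("residency_restricted"):
        return "on-premises"          # e.g., the HR app bound by data locality
    if workload.get("latency_ms_target", 100) < 10:
        return "edge"                 # e.g., speedy data retrieval at the edge
    if workload.get("bursty"):
        return "public-cloud"         # e.g., analytics needing rapid scale-out
    return "private-cloud"

print(place_workload({"residency_restricted": True}))  # -> on-premises
print(place_workload({"latency_ms_target": 5}))        # -> edge
print(place_workload({"bursty": True}))                # -> public-cloud
```

Real placement decisions also weigh cost, governance, and existing contracts, and they get revisited as those inputs change, which is exactly why placement can't be a one-time exercise.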

Of course, achieving business value via strategic workload placement isn’t a given. There’s no setting these workloads and forgetting them.

As you navigate the intricacies of workload placement, you face many challenges: economic uncertainty (the market is whipsawing); a deficit in IT talent (do you honestly recall a time this wasn’t an issue?); abundant risk (data resiliency, cybersecurity, governance, natural disasters); and other disruptions that threaten to crimp innovation (long IT procurement cycles and slow provisioning of developer services).

You can try to tackle those challenges with a piecemeal approach, but you’ll get more value if you deploy an intentional approach to running workloads in their most optimal location. This planning is part of a multicloud-by-design strategy that will enable you to run your IT estate with a modern cloud experience.

A Cloud Experience Boosts Business Value

As it happens, an as-a-Service model can help deliver the cloud experience you seek.

For instance, developers can access the resources needed to build cloud-native applications via a self-service environment, freeing up your staff from racking and stacking, provisioning, and configuring assets so they can focus on other business-critical tasks.

To help you better align cost structure with business value, pay-as-you-go consumption reduces your reliance on the rigorous IT procurement process. This cloud experience will also help you reduce risk associated with unplanned downtime, latency and other issues that impact performance and availability SLAs aligned to your needs.

By leveraging such a model in conjunction with trusted partners, IT departments can reduce overprovisioning by 42% and support costs by up to 70%, as well as realize a 65% reduction in unplanned downtime events, according to IDC research commissioned by Dell.1

The Dell Technologies APEX portfolio of services can help you successfully manage applications and data spanning core datacenters to the edge, as well as the mix of public and private clouds that comprise your multicloud environment. This will help you achieve the business outcomes you seek.

Regardless of where you opt to run your assets, doing so without a modern cloud experience is bound to leave business value languishing on your (or someone else’s) datacenter floor.

Learn more about our portfolio of cloud experiences delivering simplicity, agility and control as-a-Service: Dell Technologies APEX.

[1] The Business Value of Dell Technologies APEX as-a-Service Solutions, Dell Technologies and IDC, August 2021


In the first use case of this series, Stay in Control of Your Data with a Secure and Compliant Sovereign Cloud, we looked at what data sovereignty is, why it’s important, and how sovereign clouds solve for jurisdictional control issues. Now let’s take a closer look at how data privacy and sovereignty regulations are driving security, privacy, and compliance.

Data Privacy and Security

The EU’s GDPR has formed the basis of data privacy regulations not just in the EU but around the world. A key principle of the regulation is the secure processing of personal data. The UK GDPR states that security measures must ensure the confidentiality, integrity, and availability of data (known in cybersecurity as the CIA triad) and protect against accidental loss, destruction, or damage.1

Restricting access to sensitive and restricted data is a crucial aspect of data security, along with ensuring trust and flexibility for portability needs. 

Sovereign clouds are built on an enterprise-grade platform and customized by partners to meet local data protection laws, regulations, and requirements. Locally attested providers use advanced security controls to secure applications and data in the cloud against evolving attack vectors, ensuring compliance with data regulation laws and requirements to safeguard the most sensitive data and workloads.

Protected data should employ micro-segmentation with zero-trust enforcement to ensure workloads cannot communicate with each other unless they’ve specifically been authorized and are encrypted to secure them from foreign access. A multi-layered security approach secures data and applications in the sovereign cloud, keeping them safe from loss, destruction, or damage.
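Conceptually, micro-segmentation with zero-trust enforcement reduces to a default-deny allowlist of flows: nothing communicates unless explicitly authorized. A minimal sketch, with hypothetical workload names:

```python
# Default-deny policy: workloads may communicate only over explicitly
# authorized (source, destination) pairs. Tier names are hypothetical.
ALLOWED_FLOWS = {
    ("web-tier", "app-tier"),
    ("app-tier", "db-tier"),
}

def is_allowed(src, dst):
    """Zero-trust check: deny unless the flow is explicitly authorized."""
    return (src, dst) in ALLOWED_FLOWS

print(is_allowed("web-tier", "app-tier"))  # -> True
print(is_allowed("web-tier", "db-tier"))   # -> False, web can't reach db directly
```

In a real sovereign cloud, enforcement happens at the hypervisor or network layer and authorized flows are encrypted in transit, but the policy model is the same: every flow is denied until proven allowed.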

Sovereignty and Compliance

Data residency – the physical location where data (and metadata) is stored and processed – is a key aspect of data privacy and sovereignty regulations. Data residency laws require that companies operate in a country and store data in that country, often due to regulatory or compliance requirements. For companies with customer data in multiple countries, keeping that data secure becomes a challenge. A sovereign cloud helps minimize risk and offers the more robust controls and trusted endpoints needed to keep data secure and compliant.

In addition, data residency requirements continue to evolve and vary by country or region. Multi-national companies frequently rely on in-country compliance experts to help ensure they’re following the latest rules correctly and to avoid significant fines and legal action. 

With VMware, we provide best-in-class enterprise-grade cloud, security, and compliance solutions that provide the ultimate platform for data choice and control.

“A law can change, and it can change your entire way of doing business,” one Fortune 500 CISO said.2  And with the ever-changing geopolitical landscape, platform flexibility is needed to minimize risk with self-attested, trusted code. VMware provides simpler lift-and-shift portability and interoperability, as well as greater compliance with local laws and regulations.

Faced with changing regulations, it’s not surprising that compliance is a top cloud challenge, cited by 76% of organizations.3 One reason is a lack of skilled personnel. A recent survey from ISACA found that 50% of respondents experienced skills gaps in compliance laws and regulations, as well as in compliance frameworks and controls; another 46% are dealing with a gap in privacy-related technology expertise.4

With these challenges, it’s not surprising that 81% of decision-makers in regulated industries have repatriated some or all data and workloads from public clouds.5  Some have moved data back on-premises, whereas others are using hybrid cloud architectures. 

With VMware Sovereign Cloud, solutions are delivered by locally attested partners who provide full-service sovereign offerings and ensure that compliance requirements are implemented and configured correctly. Sovereign cloud meets data residency requirements with local data centers that contain all regulated data, including metadata, and a flexible cloud architecture backed by knowledgeable local experts lets you respond faster to changes in data privacy rules, security threats, and geopolitics.

Learn more about VMware Sovereign Cloud:

Download the Security and Compliance 1 pager

Watch the Sovereign Cloud Overview video  

Find and connect with a Sovereign Cloud Provider in your region

Join the conversation on Sovereign Cloud on LinkedIn

Next, we’ll explore data access and integrity, and how that can ignite innovation.

Sources:
1. UK Information Commissioner’s Office, Guide to the General Data Protection Regulation (GDPR): Security, accessed June 2022
2. CSO, Data residency laws pushing companies toward residency as a service, January 2022
3. Flexera 2022 State of the Cloud Report
4. ISACA, Privacy in Practice 2022, March 2022.
5. IDC, commissioned by VMware, Deploying the Right Data to the Right Cloud in Regulated Industries, June 2021


Cloud technology is a springboard for digital transformation, delivering the business agility and simplicity that are so important to today’s business. Cloud is also a powerful catalyst for improving IT and user experiences, with operating principles such as anywhere access, policy automation, and visibility.

The benefits of cloud for the business, for IT operations, and for employee experiences are clear. But what if you could take the best principles of cloud and apply them across your entire IT infrastructure?

Simpler operations belong everywhere—not just the cloud

There’s no reason that the benefits of cloud need to be limited to the cloud. With the right strategy, platforms, and solutions, organizations can bring the cloud operating model to the network and across the entire cloud and network IT stack. In fact, in a recent IDC study, 60% of CIOs stated they are already planning to modify their operating model to manage value, agility, and risk by 2026.

Transitioning to this new operating model unlocks more benefits for IT leaders, in more environments and use cases. It simplifies operations for on-premises and cloud infrastructures, cutting down the complexity and fragmentation created by disconnected tools and consoles—and the different skill sets needed to work with them.

Expanding the cloud operating model also sets the stage for better collaboration between network, development, and cloud operations. By introducing a common model and language that transcends operational silos, this approach helps reduce points of friction between organizational handoffs.  The result: teams can collaborate and work together to solve problems more smoothly. Processes become more consistent, predictable, and less prone to manual errors.

Bringing the cloud operating model to the network helps your teams execute faster and be more agile. It can automate tasks such as deploying a new distributed application for users in the home and office. For example, with a cloud-managed SD-WAN, a company can establish connectivity and security in about an hour. With a traditional siloed approach, those same steps could take NetOps, DevOps, and SecOps teams days.

Once an application is up and running, the cloud operating model can support greater visibility into cloud and data center operations, application deployment, and performance. When you have improved end-to-end visibility, you can react more quickly. Your teams can troubleshoot faster, tune performance more easily, and enjoy a more intuitive experience as they do it.

When you simplify IT, better experiences and outcomes follow

What happens when the cloud operating model is brought to the network? Organizations gain the benefits of a simplified IT approach and better user experiences. But that’s not all. It also frees IT leaders to focus, innovate, and deliver better business outcomes.

Improving the application experience

Applying the cloud operating model expands visibility, creating an end-to-end view that enables more consistent governance across the infrastructure, from the network to the internet to the cloud, to help ensure a better application experience for every user.

Powering a more agile, proactive business

Making IT more agile ripples across the whole organization. By automating manual processes, you can get out in front of business changes, deploying resources to support new applications so you can meet the changing needs of business stakeholders faster.

Controlling costs

Expanding a common operating model helps your teams work smarter with consistent management of the deployment, optimization, and troubleshooting lifecycles, both in the cloud and on-premises.

Breaking down silos for productivity

Cloud operating principles can enable consistent governance that helps bring down the barriers between siloed cloud and network teams—and help IT move beyond fragmented operations with different policies and processes.

Applying stronger security everywhere

Cloud consistency can also enhance security. With automation and improved end-to-end visibility, you can build security into every environment and make automated security updates an integral part of all lifecycle management.

Bring the best of the cloud across your infrastructure

There’s no “one size fits all” approach to a cloud operating model. It needs to be designed and tailored to align with each organization. With the right strategy, platforms, and services, you can take a big step toward simplifying IT to deliver unified experiences and improved business agility.

Discover how.

Digital Transformation

As of late, debate has rekindled around cloud repatriation and whether it is a real phenomenon or just a myth. Much of the confusion may stem from lack of agreement on the term itself: many envision repatriation as an organization completely shifting from a public cloud provider back to on-premises infrastructure, but this is seldom the case.

Recent evidence suggests that repatriation is just one aspect of a larger trend towards rationalizing and optimizing workloads across various IT environments. As a result, organizations are rethinking their workload distribution in public clouds for a variety of reasons, including performance, cost optimization, and security.

This indicates that repatriation is not necessarily a sign of failure in public cloud migrations, but rather an indication that organizations are becoming more adept at optimizing their workloads. This was the topic of Dell’s latest Power of Technology podcast by Mick Turner and Nick Brackney. Read on to learn more about their insights.

Some are questioning the long-term cost benefits of public cloud consumption

One reason some organizations begin to rethink workload placement stems from the financial implications of operating in the cloud long-term. Such was the case for 37signals, the parent company of the project management solution Basecamp. After running a cost analysis, CTO David Heinemeier Hansson concluded that Basecamp’s predictable growth and relatively stable usage made it better suited to owning its own physical infrastructure than remaining in the public cloud.

Heinemeier Hansson observed that the public cloud makes sense for applications with runaway growth or wild peaks in usage, scenarios which never applied to Basecamp. “By continuing to operate in the cloud, we’re paying an at times almost absurd premium for the possibility that it could [have wild peaks in usage]. It’s like paying a quarter of your house’s value for earthquake insurance when you don’t live anywhere near a fault line.”

Security remains the top reason for migrating to and from the public cloud

Although there are growing doubts about the long-term cost benefits of public cloud, cost is not the primary factor that organizations consider when moving their workloads. According to a recent survey conducted by Dell of 233 IT decision makers[1], security is still the top reason for organizations to move their workloads both out of and into the public cloud.

There are a few potential reasons for this. Organizations continue to perceive security benefits in the public cloud—automation, reduced IT overhead, and access to best-of-breed capabilities, to name a few. But along with those upsides come—you guessed it—some downsides. For example, organizations that have been running on premises may benefit from years of institutional knowledge of internal practices that can be lost with a wholesale cloud migration. Issues around data sovereignty, compliance, and regulatory guidelines can also complicate things. Many people simply find it challenging to manage a multicloud environment where there are subsets of data that can and can’t live in the public cloud.

In certain instances, these challenges can lead organizations to fundamentally rethink their environments—whether that means a wholesale repatriation effort or a smaller rollback of data or applications back on premises. Either way, they may increasingly find themselves requiring solutions that help them easily manage and apply consistency across an IT estate that spans on-premises, colocation, edge, private cloud, and public cloud environments.

Multicloud is complex, but it doesn’t have to be hard

Ultimately, the conversation may be less about cloud repatriation and more about adopting a thoughtful approach to where and how to place workloads. But this requires organizations to take a realistic view of their entire IT landscape and engage in honest discussions with stakeholders and business partners to fully understand their needs and requirements. While operating in a multicloud environment is inherently complex, it doesn’t have to be difficult. There are solutions available that can help organizations integrate with public cloud providers to simplify operations in a multicloud environment and bring the agility of the cloud operating model to dedicated IT environments. Although specific implementations may vary by organization, the principles of designing for multicloud environments remain consistent.

To learn more, listen to Debunking Cloud Repatriation from the Power of Technology, and stay tuned for more installments in this series in the coming weeks.

[1] 2022 survey of 233 IT decision makers, Dell internal study

Cloud Management

Continuing with current cloud adoption plans is a risky strategy because the challenges of managing and securing sensitive data are growing. Businesses cannot afford to maintain this status quo amid rising sovereignty concerns.

Some 90% of organisations in Europe and 88% in the Middle East, Turkey, and Africa (META) now use cloud technology, which is a keystone for digital transformation – according to an IDC InfoBrief, sponsored by VMware. As it becomes a dominant IT operating model, critical data is finding its way into the cloud. Almost 50% of European companies are putting classified data in the public cloud.

While private on-prem cloud remains an organisation’s primary cloud environment for storing high-sensitivity data, 23% of those surveyed chose public cloud for this data class. Some 32% of companies use global public cloud providers to store confidential data.

Rising volumes of sensitive data in public cloud make sovereignty an imperative

Organisations are starting to value strategic autonomy to ensure resilience amid growing geopolitical and economic uncertainties. Digital sovereignty starts with data sovereignty, which forms the legal basis for organisations to ensure regulatory compliance. Data sovereignty is about making sure that data is subject to the laws and governance structures of the country it belongs to. With a large amount of sensitive data now hosted in cloud, sovereignty should influence an organisation’s future cloud strategy. This is becoming a priority as sensitive data volumes are growing exponentially.

The importance of sovereignty for EMEA organisations

For customers, the only way to get sovereign cloud security is to engage with cloud providers that are well positioned in local markets.

Drivers for considering sovereignty:

Relevance of data sovereignty cited as “very important” or “extremely important” by 88% of very large organisations (5,000 FTEs) and 63% of all EMEA organisations.

In Europe, organisations are driven by the need for continuous compliance, regulations, and legal obligations.

In META, organisations are driven by the introduction of internal/corporate policies.

Business drivers for data sovereignty:

Customer expectations about privacy and confidentiality

Need to protect future investments in data

Continued macroeconomic volatility, ambiguity, and uncertainties are heightening interest in sovereign solutions

Protection against future EU ruling that could impact your business

How VMware can help

Sovereign Cloud is all about choice and control. VMware’s offering addresses the strategic imperatives for data sovereignty (data security, protection, residency, interoperability, and portability) by:

Leveraging the VMware Multicloud Foundation 

Innovating on sovereign capabilities (Tanzu, Aria, open ecosystem solutions) 

Leveraging a broad ecosystem of sovereign cloud providers 

VMware is well recognised for trust and for several capabilities that address data sovereignty needs: flexibility and choice, data security and privacy, control of data access, multicloud, and data residency. It is already deployed with more than 20 Sovereign Cloud Providers.

Laurent Allard, Head of Sovereign Cloud, VMware, says: “To ensure success in their sovereign journey, organisations must work with partners they trust and that are capable of hosting authentic and autonomous sovereign cloud platforms. VMware Cloud Providers recognised within the VMware Sovereign Cloud initiative commit to designing and operating cloud solutions based on modern, software defined architectures that embody key principles and best practices for data sovereignty. More than 36 global and 14 EU VMware Sovereign Cloud Partners can deliver to customers cloud services in alignment with security and local regulations, while enabling sovereign innovation.”

To read the full InfoBrief click here. Find out more about VMware’s Sovereign Cloud here.

Cloud Management, Cloud Security, Data Management, Data Privacy

Simply put, and despite claims customers may hear or see in this infant market, there is no one-size-fits-all definition of “data sovereignty”. The true source of the definition, as applicable to any workload being contemplated, is the law, policy, or guideline applicable to that data that prescribes it as a requirement.

For example, a government customer planning to acquire cloud services for workloads related to its defence ministry/department would be subject to different data sovereignty laws, policies, and guidelines than when the same government acquires cloud services for its revenue ministry. And both of those would differ from when that same customer acquires cloud services for its parks/forestry ministry. Furthermore, the defence ministry of one government may have different requirements than the defence ministry of another, and a single defence ministry may have different requirements for two different purchases depending on the workload being considered. It is therefore understandable that a cloud offering can be compliant with the data sovereignty requirements for one customer workload, but not for another of the same customer.

In sum, the definition of data sovereignty varies from jurisdiction to jurisdiction, and from workload to workload even within the same jurisdiction (depending on the applicable laws, policies, or guidelines prescribing it as a requirement). That said, the common denominator amongst most definitions is that data must remain subject to the privacy laws and governance structures of the nation where it is created or collected. Because the location of data is not, in many jurisdictions, a bar to foreign jurisdictions asserting control over it, data sovereignty often requires that the data remain under the control and/or management of entities and individuals who cannot be compelled by foreign governments (or, depending on the requirements, certain foreign governments) to transfer the data. As an example of a requirement that may differ: some, but not all, jurisdictions require that the cloud vendor employees who support the underlying infrastructure hold citizenship and security clearance (i.e., data residency and jurisdictional control alone would not suffice).

The other important terms to define are as follows:

Data Residency – The requirement that the physical geographic location where customer data is stored and processed be restricted to a particular geography. Many customers and vendors confuse this concept with data sovereignty.

Data privacy – Data privacy looks at the handling of data in compliance with data protection laws, regulations, and general privacy best practices.

Jurisdictional control of data – A jurisdiction retains full control of data, without other nations/jurisdictions being able to access, or request access to, that data.

Data Governance – The process of managing the availability, usability, integrity, and security of the data in systems, based on internal data standards and policies that also control data usage.

Global hyperscale commercial cloud – Foreign company-owned cloud infrastructure where data is held by a foreign Provider, and as a result may be subject to foreign laws.
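The distinction the glossary draws between residency and jurisdictional control can be sketched as a simple check. This is a hypothetical illustration only; the function names and country codes are assumptions, not any vendor's API:

```python
# Illustrative sketch: data residency checks only where data is stored,
# while the sovereignty requirement described above additionally checks
# who controls the infrastructure and which laws can compel access.

def meets_residency(storage_country: str, required_country: str) -> bool:
    # Data residency: physical storage restricted to a geography.
    return storage_country == required_country

def meets_sovereignty(storage_country: str,
                      operator_jurisdiction: str,
                      required_country: str) -> bool:
    # Data sovereignty: residency plus jurisdictional control, i.e. the
    # operator cannot be compelled by a foreign government to hand over data.
    return (meets_residency(storage_country, required_country)
            and operator_jurisdiction == required_country)

# A foreign-owned hyperscaler with a local region meets residency...
print(meets_residency("DE", "DE"))          # True
# ...but may still fail sovereignty if its operator is foreign-controlled.
print(meets_sovereignty("DE", "US", "DE"))  # False
```

The second call captures the article's point that a local data center alone is not a bar to foreign jurisdictions asserting control over the data.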

VMware Sovereign Cloud Initiative

VMware recognizes that regional cloud providers are in a great position to build on their own sovereign cloud capability and establish industry verticalised solutions aligned to differing data classification types and under their nation’s jurisdictional controls.

Data classification is core to understanding where your data needs to reside and the protections that must be in place to safeguard its ‘sovereignty’ with jurisdictional controls. The VMware Sovereign Cloud initiative has established a trust-scale framework based on data classification, which varies by vertical. Examples vary by industry and region: official UK government classifications include Official, Secret, and Top Secret, while commercial-sector examples include Confidential, Internal Use, Public, Sensitive, and Highly Sensitive. The classifications that a Sovereign Cloud Provider chooses to include in the platform by default will depend on a combination of local jurisdictional norms and the type of customers the platform is intended to serve.

The principle for data classification and trust is that the Sovereign Cloud Provider’s security can be organised into different trust zones (architecturally called security domains). The higher the classification type, the more trustworthy and sovereign the offering must be; the less sovereign the zone, the more risk mitigation and safeguards are required (such as encrypting your data, confidential computing, and privacy-enhancing computation). However, there are some hard stops: the most secure zone must always be within a sovereign nation and under sovereign jurisdiction.

The placement of data must be based on the least trusted/sovereign dimension of service. Assessing your data classification requirements against the proposed services shows where the data can reside, based on the necessary locations and available mitigations. This is an opportunity for VMware Sovereign Cloud partners to overlay solutions: in many cases, a specific data classification can be placed on a particular platform (or security domain) if certain security controls are in place. For example, confidential data can reside on shared Sovereign Cloud infrastructure if it is encrypted and the customer holds their own keys.
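The placement rule described above can be sketched as a small decision function. This is a minimal, hypothetical sketch: the zone names, classification levels, and control names are illustrative assumptions, not part of the VMware Sovereign Cloud framework itself:

```python
# Hypothetical placement check: a workload's data classification is compared
# against a zone's trust level, and compensating controls (e.g. encryption
# with customer-held keys) can make a less sovereign zone acceptable.

# Zones ordered from least to most sovereign/trusted (illustrative names).
ZONE_TRUST = {"public_hyperscale": 0, "shared_sovereign": 1, "dedicated_sovereign": 2}

# Minimum trust level each classification normally requires (illustrative).
REQUIRED_TRUST = {"public": 0, "internal": 1, "confidential": 2}

def placement_allowed(classification: str, zone: str, controls: tuple = ()) -> bool:
    """Return True if data of this classification may reside in the zone."""
    required = REQUIRED_TRUST[classification]
    available = ZONE_TRUST[zone]
    # Compensating controls relax the requirement by one level, mirroring the
    # example above: confidential data on shared sovereign infrastructure is
    # acceptable if it is encrypted and the customer holds their own keys.
    if "encryption" in controls and "customer_held_keys" in controls:
        required -= 1
    return available >= required

print(placement_allowed("confidential", "shared_sovereign"))  # False
print(placement_allowed("confidential", "shared_sovereign",
                        controls=("encryption", "customer_held_keys")))  # True
```

In practice the mapping from classifications to zones, and which controls count as sufficient mitigation, would come from the applicable laws and the provider's own risk assessment rather than a fixed table like this.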

Using this risk and data classification analysis, VMware Sovereign Cloud Providers understand where their proposed Sovereign Cloud offerings sit on the scale in relation to their other services, such as public hyperscale cloud. They can then determine how to shift workloads towards the most sovereign dimension of service as necessary, using technology and process, to enhance a customer’s sovereign protection and cloud usage.

For the reasons noted above, VMware Sovereign Cloud providers, using VMware on-premises software, are in an ideal position to build compliant, data-sovereign hosted cloud offerings in alignment with the data sovereignty laws, policies, and frameworks of their local or regional jurisdictions, all in a model that offers a more optimal approach to assuring jurisdictional control and data sovereignty.

My thanks to Ali Emadi for co-authoring this article. To read the full article, Will the Real Data Sovereign Cloud Please Stand Up?, click here.

Cloud Management, Cloud Security, Data Management, Data Privacy