As the world works to reverse climate change through decarbonization and reduced reliance on fossil fuels, the oil and gas industry finds itself at the epicenter of this challenge. Governments, institutional investors, consumers, and employees continue to exert growing pressure on oil and gas providers to decarbonize and adopt renewable solutions. In response, oil and gas majors are making headway in terms of carbon reporting, net-zero targets, and accountability. Many have even spun up renewable energy arms.

Innovation underpins corporate sustainability efforts. From an investment standpoint, sustainable solutions can also perform double duty, often yielding significant added value such as productivity and efficiency gains and new revenue streams. Increasingly, oil and gas companies are making strategic technology investments to accelerate sustainable digital transformation and deliver a competitive advantage. In this article, we’ll share opportunities for oil and gas companies to increase sustainability while achieving other business benefits.

Understand and reduce the environmental impact of services and workloads

Ironically, while technology holds the key to sustainability, it has also contributed to the problem. Recent studies indicate that data centers consume one percent of the world’s electricity, and The Royal Society estimates that digital technology contributes up to 5.9% of global emissions.

While inefficient equipment, buildings, and HVAC systems contribute to the problem, one of the most significant factors is the underutilization of data center equipment. Up to 25% of data center power is consumed by equipment that no longer performs useful work,[i] and only 10-30% of server capacity is used.[ii] Furthermore, according to HPE internal data, average storage utilization hovers around 40%.[iii] While organizations must plan for usage spikes and failovers, they also have opportunities to clean up workloads, retire unused equipment, and leverage newer, more efficient hardware and solutions.

Power savings represent just part of the equation. If an enterprise can do more work with less hardware, requiring fewer servers, licenses, and people, it saves money while lowering carbon emissions.
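To make the utilization math concrete, here is a back-of-the-envelope sketch. Only the 10-30% utilization range comes from the studies cited below; every other figure is an illustrative assumption:

```python
# Illustrative consolidation math. Only the 10-30% utilization range comes
# from the cited studies; every other figure here is an assumption.
legacy_servers = 100          # hypothetical fleet size
avg_utilization = 0.20        # midpoint of the cited 10-30% range
target_utilization = 0.60    # assumed post-consolidation goal
watts_per_server = 400        # assumed average draw per server
grid_kg_co2_per_kwh = 0.4     # assumed grid carbon intensity

# Servers needed to carry the same work at higher utilization.
needed = round(legacy_servers * avg_utilization / target_utilization)
retired = legacy_servers - needed

kwh_saved_per_year = retired * watts_per_server * 24 * 365 / 1000
co2_saved_tonnes = kwh_saved_per_year * grid_kg_co2_per_kwh / 1000

print(f"Servers retired: {retired}")                        # 67
print(f"Energy saved: {kwh_saved_per_year:,.0f} kWh/year")   # ~234,768
print(f"Emissions avoided: {co2_saved_tonnes:.1f} t CO2e")   # ~93.9
```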

Apply technology to improve efficiency, productivity, and sustainability

Clear visibility across infrastructure enables organizations to identify opportunities to expand operational efficiency, meet sustainability goals, and improve productivity. Using intelligent automation, oil and gas companies can monitor workloads and boost server utilization to optimize their investments while reducing their environmental footprint.

Infrastructure management solutions enable organizations to simplify lifecycle management through automation and surface new ways to operate more sustainably and efficiently. Oil and gas companies are also deploying self-healing solutions that predict, detect, and correct issues across the infrastructure using artificial intelligence and machine learning, often before the operator is aware of an issue. 

Furthermore, monitoring solutions allow companies to track real-time energy consumption, enabling them to reposition energy-heavy workloads to locations with lower energy costs or optimize usage for carbon emissions.
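A minimal sketch of what such carbon-aware placement logic might look like; the regions, carbon intensities, and prices below are invented, and a real system would pull them from monitoring or grid-data feeds:

```python
# Minimal sketch of carbon-aware workload placement. Region names,
# carbon intensities, and prices are illustrative, not live data.
regions = {
    "us-east":  {"kg_co2_per_kwh": 0.45, "usd_per_kwh": 0.09},
    "eu-north": {"kg_co2_per_kwh": 0.05, "usd_per_kwh": 0.20},
    "ap-south": {"kg_co2_per_kwh": 0.70, "usd_per_kwh": 0.06},
}

def best_region(weight_carbon: float = 0.7, weight_cost: float = 0.3) -> str:
    """Rank candidate regions by a weighted blend of emissions and cost."""
    def score(name: str) -> float:
        r = regions[name]
        return weight_carbon * r["kg_co2_per_kwh"] + weight_cost * r["usd_per_kwh"]
    return min(regions, key=score)

print(best_region())            # 'eu-north': low carbon wins
print(best_region(0.1, 0.9))    # 'ap-south': low cost wins
```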

Make asset management more sustainable

Oil and gas customers rely on the latest technologies to give them a competitive edge. This often means replacing assets with several years of useful life remaining.

Upcycling programs can help enterprise businesses manage the financial and sustainability impacts of surplus equipment. Many companies have budget or purchasing policy constraints or do not need the latest top-performing equipment. By securing reasonable compensation for used equipment from such companies, oil and gas companies can extend the useful life of their assets and further reduce waste.

Drive sustainable digital transformation with HPE GreenLake

Through strategic investments, oil and gas companies not only increase their sustainability but can also reap additional rewards, such as increased efficiency and productivity, while maintaining a competitive edge.

HPE has a unique vantage point rooted in its own sustainability journey. HPE is committed to becoming a net-zero enterprise by 2040 and offers a portfolio of sustainable innovation, technologies, solutions, and cloud services. The HPE GreenLake edge-to-cloud platform can reduce the environmental impact of IT by enabling customers to flexibly scale IT to meet their needs, thereby improving utilization levels and avoiding the waste of overprovisioning.

GDT can help your organization make the most of HPE solutions to fast-forward sustainability, grow stronger, and become more resilient. Contact the experts at GDT to learn more about how we can help you accelerate sustainable digital transformation.

[i] Jon Taylor and Jonathan Koomey (2017) “Zombie/Comatose Servers Redux,” available at: https://www.anthesisgroup.com/report-zombie-and-comatose-servers-redux-jon-taylor-and-jonathan-koomey/  (accessed October 18, 2022)

[ii] Uptime Institute (2020) “Beyond PUE: Tackling IT’s wasted terawatts,” available at: https://uptimeinstitute.com/beyond-pue-tackling-it%E2%80%99s-wasted-terawatts (accessed October 18, 2022)

[iii] Storage Utilization: HPE customer experience


Artificial Intelligence (AI) is fast becoming the cornerstone of business analytics, allowing companies to extract value from the ever-growing datasets generated by today’s business processes. At the same time, the sheer volume and velocity of data demand high-performance computing (HPC) to provide the power needed to effectively train AI models, run inference, and perform analytics. According to Hyperion Research, HPC-enabled AI, growing at more than 30 percent, is projected to be a $3.5 billion market in 2024.

This confluence of HPC and AI is driven by businesses and organisations honing their competitive edge in the global marketplace, accelerating digital transformation and taking it to the next level through IT transformation initiatives.

“We’re seeing HPC-enabled AI on the rise because it extracts and refines data quicker and more accurately. This naturally leads to faster and richer insights, in turn enabling better business outcomes and facilitates new breakthroughs and better differentiation in products and services while driving greater cost savings,” said Mike Yang, President at Quanta Cloud Technology, better known as QCT.

While HPC and AI are expected to benefit most industries, healthcare, manufacturing, higher education and research (HER), and finance stand to gain perhaps the most, given the compute-intensive nature of their workloads.

Applications of HPC-enabled AI in next-generation sequencing, medical imaging, and molecular dynamics have the potential to speed drug discovery and improve patient care procedures and outcomes. In manufacturing, finite element analysis, computer vision, electronic design automation, and computer-aided design are facilitated by AI and HPC to speed product development, while analysis of Internet-of-Things (IoT) data can streamline supply chains, enhance predictive maintenance regimes, and automate manufacturing processes. HER uses the technology to explore fields such as dynamic structure analysis, weather prediction, fluid dynamics, and quantum chemistry in an ongoing quest to solve global problems like climate change and to achieve breakthroughs in cosmology and astrophysics.

Optimising HPC and AI Workloads

The AI and machine learning (ML) algorithms underlying these business and scientific advances have become significantly more complex, delivering faster and more accurate results, but at the cost of significantly more computational power. The key challenge now facing organisations is building infrastructure for HPC, AI, HPC-enabled AI, and converged HPC-AI workloads while shortening project implementation time. Ultimately, this allows researchers, engineers, and scientists to concentrate fully on their research.

IT teams also need to actively manage their HPC and AI infrastructure, leveraging the right profiling tool to optimise HPC and AI workloads. An optimised HPC/AI infrastructure should deliver the right resources at the right time so that researchers and developers can accelerate their computational work.

In addition, understanding workload demands and optimising performance spares IT the extra labour of ad hoc fine-tuning, significantly reducing the total cost of ownership (TCO). To optimise HPC and AI workloads effectively and quickly, organisations can take the following steps:

1. Identify the key workload applications and data used by the customer, as well as the customer’s expectations and pain points.
2. Design the infrastructure and build the cluster, ensuring that the hardware and software stack can support the workloads.
3. Continue adjusting and fine-tuning on an ongoing basis (see the sketch below).
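As a generic illustration of step 3, the sketch below samples node utilisation with the psutil library and flags consolidation or rebalancing candidates. It is a stand-in for illustration only, not QCT’s or Intel’s tooling:

```python
# Generic stand-in for the continuous fine-tuning loop: sample CPU and
# memory pressure on a node and flag under- or over-committed machines.
import psutil

def sample_node(samples: int = 6, interval_s: float = 5.0) -> dict:
    """Collect short CPU/memory utilisation samples for one node."""
    cpu, mem = [], []
    for _ in range(samples):
        cpu.append(psutil.cpu_percent(interval=interval_s))  # blocks interval_s
        mem.append(psutil.virtual_memory().percent)
    return {"cpu_avg": sum(cpu) / len(cpu), "mem_avg": sum(mem) / len(mem)}

stats = sample_node()
if stats["cpu_avg"] < 20:
    print("Node looks under-utilised: candidate for consolidation.")
elif stats["cpu_avg"] > 90 or stats["mem_avg"] > 90:
    print("Node is saturated: candidate for rebalancing or scale-out.")
else:
    print(f"Utilisation healthy: {stats}")
```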

QCT leverages Intel’s Granulate gProfiler profiling tool to reveal the behaviour of the workload, then taps its own deep expertise to analyse that behaviour and design a fine-tuning plan to help with optimisation. Through this process, organisations can ensure rapid deployment, simplified management, and optimised integrations, all while saving costs.

AI continues to offer transformational solutions for businesses and organisations, but the growing complexity of datasets and algorithms is driving greater demand on HPC to enable these power-intensive workloads. Workload optimisation effectively enhances the process and, at the heart of it, enables professionals in their fields to focus on their research to drive industry breakthroughs and accelerate innovation.

To discover how workload profiling can transform your business or organisation, click here.


By Michael Loggins, award-winning executive IT leader

Industry 4.0 has vast potential to transform what factories can do. Manufacturing can be faster, more data-driven, more responsive to the needs of workers and customers, and more powered by innovations such as artificial intelligence, internet of things, digital supply chains, and blockchain. While the possibilities of Industry 4.0 are extraordinary—and realizing them is seemingly just within our reach—there are still obstacles to overcome before we can feel truly comfortable making them a reality.

Where I see the biggest dissonance today is in how companies are allowing both IT and the manufacturing groups to exist inside their organizations. Traditionally, the value of IT in the manufacturing industry has been to provide the factory floor with the resources they need, and then to stay out of the way. And in the past, that was really the best approach, because the controls that IT needs—particularly for security—typically aren’t conducive to maintaining an efficient and optimized factory environment.

Industry 4.0 Requires New Ways of Working Together

In the world of Industry 4.0, the separation between IT and the factory floor pretty much disappears. Today, it’s almost mandatory that IT sits in the middle of the factory and is seen as a valuable partner and an essential business function. But in many organizations, the traditional dissonance between IT and the factory floor is still there, leading to conflicts in which the health and security of the business are jeopardized by misalignment. Whether the casualty is the security of the entire organization or the efficiency and efficacy of the operational technology on the factory floor, neither scenario is acceptable, because both are preventable.

What’s needed now is a growing understanding on both sides, so the divisions and dissonance are eliminated, and cooperation and teamwork are celebrated. IT needs to figure out how to reduce its need to control everything, so that teams can protect what needs to be protected while supporting the operational technology (OT) environment in ways that don’t negatively impact productivity, efficiency, and automation on the factory floor.

At the same time, factory teams need to understand that they are not technologists and don’t have a wide enough view of the entire environment to protect OT on their own. This means they’ll need to bend a little and let IT be part of their conversations. If the IT team is iced out, the factory may run just fine, but business operations are substantially more vulnerable to a major disruption from a cybersecurity attack. Nobody wants that to happen. So both sides will need to drop tradition and ego to create a win-win situation for the organization.

How IT Can Support the Changes Needed for Industry 4.0

Let’s look at some ways IT can do our part.

Earn our seat at the table. Firstly, if we can’t keep the printers and computers on the factory floor running, there’s no way we’re going to be invited in to even talk about securing the environment. So there is a minimum “pay to play” mindset of operational excellence that has to be put in place to even get a seat at the table.

At the table, the IT team must be prepared. We can’t go in talking about the factory floor in the same language and terms that we would talk about a traditional office environment. It’s a different world, and if IT doesn’t understand that world–if we don’t take the time to live in that world–then how can we possibly go about protecting it? 

That means spending time on the factory floor, talking to factory staff and management, and getting deep in the weeds to understand the methodologies they use for quality, efficiency, and everything in between. You have to figure out how to maintain the environment before you can figure out how to protect it.

Practice patience. The other key mindset for IT is patience. Once you get into the operational side of things, you’ll be overwhelmed by how much there is to learn, and by the amount of technology and processes you’ll need to protect. If you try to address everything at the same time, you’ll fail. Worse, you will burn bridges, reinforce the dissonance and, eventually, you’ll get removed from the table.

So, for us in IT, it’s about starting small, making sure your OT colleagues understand that you have their environment in mind, and that you’re not going to inadvertently shut down the factory. Ultimately, IT needs to be viewed as a true business partner protecting the factory from all kinds of vulnerabilities, while also creating the assurance that OT won’t be held back. It’s about doing the work in a way that is sustainable and secure.

Building Empathy to Realize Industry 4.0

Without people and process, the new technologies of Industry 4.0 are never going to be fully maximized. In fact, I’ve seen organizations put in amazing technology, but without paying enough attention to how it impacted the factory floor; the return on the investment was pretty much zero. CISOs need to demonstrate empathy and a true understanding of the challenges of keeping the factory working every day. This includes knowing how failures of equipment and machinery can be disastrous for the OT team.

It helps to become friends, or at least tight colleagues, with factory management, floor supervisors, and machinists. Get to really know those people who are your customers. As with any relationship, there needs to be a strong commitment from both IT and the factory floor to resolve issues, but I think it’s our responsibility in IT to go a little further than halfway in order to train our people, and transform our mindsets.

We have to make sure our IT staff are equipped to work with the OT side of the company. We have to spend time on the factory floor and engage with the philosophy and values and mindset of the people there. Sometimes working on the factory line gives you the right amount of empathy to understand what’s going on.

Collaboration Enables Innovation for Industry 4.0

If you can get your teams working together, the possibilities are tremendous. The speed of delivery should increase, and more importantly, you’ll have alignment between your IT and engineering groups, creating space for real innovation to happen. Both IT and OT are composed of problem solvers who are in their fields because they know how to make things better; they just have different sets of tools.

By taking people who have similar drives, backgrounds and passions for fixing problems and putting them in a room, you’ll achieve amazing levels of innovation and countless creative solutions. And because the work is done together, as a team, the designs are more stable at every stage. They will be easier to implement, easier to manage and operate, and easier to secure, making adoption measurably faster.

By removing the dissonance, you can totally change how you’re able to deliver value both at the factory floor and to your customers. Industry 4.0 becomes more than just an exciting possibility; it becomes the new reality.

Read more on Industry 4.0 in this article

About Michael Loggins:

SRT author Michael Loggins is an award-winning executive IT leader focused on strategic business alignment, customer success, and standardizing global IT operations.


In insurance, as in many other industries, organizations are transforming themselves with Agile models. We spoke to a top leader at an international insurance firm that is leveraging Agile approaches more often and in more projects. Here are some of the lessons we took away.

What challenges did you need to overcome to be successful?

As we looked to scale Agile across our organization, one of the biggest problems we experienced was that our tool wasn’t, well, agile. It was little more than a fancy-looking spreadsheet, and our staff spent their time battling with the tool rather than leveraging it to help the business. That just wasn’t sustainable.

In what ways do you address these issues?

Just like any other aspect of business, the ability to deliver work effectively using Agile requires a combination of the right information driving the ability to make sound decisions in a timely manner, and a tool that allows people to focus on doing their work rather than interacting with the tool. We needed to find a solution that could easily integrate with our other enterprise tools, and that could help us become more effective and efficient.

What was your end solution, and what impact did it have?

For us, Rally Software from Broadcom was the answer. We recently ran our first PI planning session using the tool and we cut the duration of the planning session by two hours. Multiply that across the number of people and the number of times we plan PIs and it becomes a material saving. And of course, that efficiency means staff time can be redirected into work that adds value to the business.
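As a rough illustration of why that two-hour saving is material: only the two hours per session comes from the interview; the headcount, cadence, and loaded hourly rate below are assumptions for illustration:

```python
# Back-of-the-envelope for the planning-time saving described above.
hours_saved_per_session = 2   # from the interview
participants = 80             # assumed size of the planning group
sessions_per_year = 4         # assumed quarterly PI planning
loaded_rate_usd = 100         # assumed fully loaded cost per person-hour

person_hours = hours_saved_per_session * participants * sessions_per_year
print(f"{person_hours} person-hours/year ≈ ${person_hours * loaded_rate_usd:,}")
# 640 person-hours/year ≈ $64,000
```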

Rally integrates with our other tools — delivering information, consuming information, and generally improving workflow and automation. That means people have the information they need in a way that works for them, allowing them to focus on their tasks. We’re also planning to leverage Rally as a decision-making tool for the business — helping teams to prioritize and refine user stories and drive more improvements.

How is this driving your success?

We’re breaking down silos. With the ability to collaborate in a tool that actually helps us deliver, we are strengthening relationships between business and IT. That improves understanding and ultimately drives engagement in ensuring that the best possible solutions are delivered — so we can keep increasing customer and business value.

Conclusion

Through effective implementation of agile solutions such as Rally Software, teams can enhance innovation, optimally balance resources, and fuel dramatic improvements in delivery. Going agile is the first step toward more impactful Value Stream management — so what are you waiting for? If you find yourself in a similar business scenario and would like to learn best practices to unlock excellence with Agile analytics, be sure to download our eBook, “How To Interpret Data from Burnup / Burndown Charts.”


IT analyst firm GigaOm is quick to point out that primary data is the first point of impact for ransomware attacks. This fact puts primary storage in the spotlight for every CIO to see, and it highlights how important ransomware protection is in an enterprise storage solution. When GigaOm released their “GigaOm Sonar Report for Block-based Primary Storage Ransomware Protection” recently, a clear leader emerged.

GigaOm named Infinidat as the industry leader in ransomware protection for block-based storage. Infinidat is a leading provider of enterprise storage solutions. According to GigaOm’s independent analysis, Infinidat distinguishes itself for its modern, software-defined storage architecture, securing enterprise storage with a strategic, long-term approach, broad and deep functionality, and high quality of innovation.

One of the top CMOs in the tech industry, Eric Herzog, is leading the marketing charge at Infinidat and had this to say about this recognition from GigaOm:

“Infinidat has taken the benefits of ransomware protection on enterprise block storage to the next level, including guaranteed immutable snapshot recovery in one minute or less, greater ease of use, and comprehensive cyber resilience.”

“Being recognized as the industry leader for combatting ransomware not only gives us enormous forward momentum as a solution provider of cyber storage resilience and modern data protection, but it also gives Infinidat a seat at the table to talk to large enterprises and service providers about what we can do to eliminate the threat of ransomware for them,” he added.

The GigaOm Sonar Report showcases the strength of Infinidat’s novel InfiniSafe cyber resilience technology embedded across all its platforms: InfiniBox®, InfiniBox™ SSA and InfiniGuard®. The report states:

“Infinidat offers a complete and balanced ransomware protection solution. InfiniSafe brings together the key foundational requirements essential for delivering comprehensive cyber-recovery capabilities with immutable snapshots, logical air-gapped protection, a fenced forensic network, and near-instantaneous recovery of backups of any repository size.”

Infinidat has delivered the industry’s first cyber storage guarantee for recovery on primary storage – the InfiniSafe® Cyber Storage guarantee.

The company recently extended cyber resilience to its InfiniBox and InfiniBox SSA II enterprise storage platforms with the InfiniSafe Reference Architecture, allowing Infinidat to guarantee both the immutability of snapshots and their recovery in one minute or less. InfiniSafe debuted on the InfiniGuard modern data protection and cyber storage resilience platform in February of this year.

The GigaOm Sonar Report recognizes the features and functionality of Infinidat’s cyber resilience technology: “InfiniGuard delivers solid cybersecurity features at no extra cost, allowing customers to quickly and securely restore data, even at scale, in case of an attack.”

Through near-instantaneous cyber recovery, Infinidat helps organizations avoid having to pay the ransom yet still retrieve their valuable enterprise data, uncompromised and intact. Think about how significant that is, given the scale of the ransomware threat.

When ransomware takes data hostage, it can destroy backup copies of data, steal credentials, leak stolen information, and worse. It has caused businesses of all sizes to shut down operations overnight, so it is not unusual for a company to pay a large sum of money to restore their business. Infinidat’s solutions can put a stop to it.

It is an honor that GigaOm has recognized this technology leadership. The analyst community has been spot-on in advising enterprises and service providers not to take “baby steps” but to make a quantum leap forward in addressing these cyberattacks.

In addition, GigaOm recognized Infinidat as a “Fast Mover,” one of only two vendors awarded that accolade. “Fast Movers” are expected to deliver on their solutions and technologies faster and with more features/functionality than other vendors known as “Forward Movers.” Infinidat has been rapidly delivering new technology, several guarantees, and new capabilities over the past 18 months, including the extension of new features and functions to InfiniSafe.

Max Mortillaro, Analyst at GigaOm, shared his perspective: “Primary data is the first point of impact for ransomware attacks, so it is critical for organizations to implement primary storage solutions that incorporate ransomware protection, such as Infinidat’s cyber resilience solutions.”

He went on to say, “Our new GigaOm Sonar Report on ransomware protection for block storage comes at a time when ransomware attacks have become so prevalent and such a persistent threat for all organizations across all industries. We have seen through our analysis how ransomware can cause significant damage to companies and government agencies.”

The time is right for Infinidat to step forward as a recognized industry leader for ransomware protection.

To download the full analyst report, click here.

To read more about Infinidat’s cyber resilience solutions, click here.


A report issued Monday by private investment company Bain Capital indicated that, despite the numerous disruptions to the technology industry—including a global supply chain crisis and Russia’s invasion of Ukraine—most IT decision makers foresee either stable budgets or increases for the coming year.

Over the past two years, the pandemic’s effects on that figure have been noticeable. At the onset, fewer than half of those polled expected anything other than a budget decrease for the coming year. The number changed rapidly as the economy emerged from the worst effects of the COVID crisis, however, with 75% in 2021 and 90% in 2022 saying they expected stable or increasing budgets.

That number shrank in the latest report—to 77%—but that’s still an indicator of strong demand for products and services in a sector that’s still facing more than its share of headwinds, according to the head of Bain’s global technology practice, David Crawford.

“CIOs and CTOs are increasing their technology spending,” he wrote in the report. “Of course, there may be budget pressure in the future, but over the long term, to them—and to us—tech is not so much a cost as an investment that spurs productivity.”

Much of the report is devoted to vendors and their potential best moves to weather a tough economic situation, which offers some insight into what IT departments can expect from companies they deal with in the future.

Along with changes to streamline sales and reduce travel, businesses can expect some of their vendors to move in the direction of consumption-based pricing, thanks to higher demand for that model, and to do more strategic work around product development, as Bain’s research shows that return on investment for R&D spending is frequently not at the level that management is looking for.

The chip shortage, according to Bain, is gradually easing, but recovery is unlikely to be particularly fast or painless. Given global economic conditions, a simple lessening of demand may be one of the most important contributing factors to the silicon market’s recovery, and the company’s researchers identified two other factors likely to determine how short—or long—the recovery is.

Extreme ultraviolet lithography equipment—$150 million machines that are necessary for the latest generation of silicon, and are only made by one manufacturer—represents a present bottleneck to building out fabrication capability.

Moreover, geopolitical friction among numerous countries presents its own stumbling blocks to recovery, as import restrictions make it difficult to source key resources. Russia’s restriction on the sale of noble gases like neon, which is important to silicon fabrication, Japan’s tightening of control over the supply of high-purity hydrogen fluoride, and similar trade issues are likely to exacerbate the chip shortage in the short term unless those issues can be resolved.


Cropin, an agritech startup backed by the Bill and Melinda Gates Foundation, on Tuesday said that it was launching its industry cloud for agriculture, built on Amazon Web Services (AWS).

Dubbed Cropin Cloud, the suite comes with the ability to ingest and process data, run machine learning models for quick analysis and decision making, and several applications specific to the industry’s needs.

The company claims that the cloud suite, which is built on a knowledge base of more than 500 crops and 10,000 crop varieties across 92 nations, will solve planet-scale challenges such as food security and climate-related issues, while reducing the environmental impact of farming.

The suite, according to the company, consists of three layers: Cropin Apps, the Cropin Data Hub and Cropin Intelligence.

Cropin Apps, as the name suggests, comprises applications that support global farming operations management, food safety measures, supply chain and “farm to fork” visibility, predictability and risk management, farmer enablement and engagement, advanced seed R&D, production management, and multigenerational seed traceability.

These applications can also aid nutrition management as well as deforestation and carbon-emissions management, and help farmers adopt regenerative agriculture and climate-safe practices, the company said.

The second layer, Data Hub, can ingest data from a variety of sources including on-farm devices, drones, IoT devices and satellites. Agriculture businesses and farmers can use the hub to access structured and contextualized data from various sources for correlation and analysis at scale, the company said.

Cropin Data Hub also has prebuilt data frameworks designed to solve the most challenging problems, such as cloud-free satellite imagery, boundary detection of farm plots, and segmentation of land use, Cropin said. 

The third layer, Cropin Intelligence, uses the company’s 22 prebuilt AI and deep-learning models to provide insights about crop detection, crop stage identification, yield estimation, irrigation scheduling, pest and disease prediction, nitrogen uptake, water stress detection, harvest date estimation, and change detection, among others.
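Cropin’s 22 models are proprietary, but much crop-health analytics of this kind builds on standard remote-sensing indices. As a flavor of the underlying math, here is the classic NDVI vegetation index computed with NumPy; the toy reflectance values are invented:

```python
# Standard NDVI vegetation index from satellite red and near-infrared bands.
# This is a generic illustration, not Cropin's models.
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(np.float64)
    nir = nir.astype(np.float64)
    return (nir - red) / np.clip(nir + red, 1e-6, None)

# Toy 2x2 "field": healthy vegetation reflects strongly in NIR.
red = np.array([[0.05, 0.06], [0.30, 0.28]])
nir = np.array([[0.50, 0.55], [0.32, 0.30]])
print(ndvi(red, nir))   # ~0.8 = vigorous crop; near 0 = bare soil
```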

The company claims to have deployed such predictive maintenance or analysis across 200 million acres of land globally.

Bengaluru-based Cropin, which was founded in 2010 by Krishna Kumar, Kunal Prasad, and Chittaranjan Jena, claims to have raised around $33 million to date from 12 investors, including ABC World Asia, BEENEXT, Invested Development, and Sophia Investment ApS.

The company says it has partnered with more than 250 B2B customers.

There are other startups across the world that offer solutions and services similar to Cropin, including:

- New Zealand-based startup Onfarm Data, founded in 2017, offers a cloud-based platform for farmers to control, monitor, and manage irrigation systems remotely.
- Founded in 2016, Malaysian startup Agritix offers a plantation workforce management solution, dubbed Agritix Workforce.
- Another startup, Glas Data, founded in the UK in 2018, provides a cloud-based agriculture analysis platform that can aggregate data from various sources on the farm and provide insights in the form of dashboard visualizations.
- Norwegian startup Dynaspace, also founded in 2018, offers a platform called InsightSphere that uses satellite imagery to provide a map of agriculture operations.
- In the US, Aggio, founded in 2016, offers a cloud-based sales and market-intelligence platform.

The global farm management software market is projected to grow from $921.4 million in 2021 to $1.9 billion by 2028, according to a Valuates report. The report attributes that growth to factors such as growing awareness and implementation of cloud computing in real-time farm data management, a growing population, and the corresponding rise in demand for food worldwide.
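For reference, those two endpoints imply a compound annual growth rate of roughly 11%; a quick check in Python, using only the figures from the report cited above:

```python
# CAGR implied by the Valuates figures: $921.4M (2021) to $1.9B (2028).
start, end, years = 921.4, 1900.0, 2028 - 2021   # $ millions
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")   # ~10.9%
```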


There are many different healthcare interoperability and industry clouds on the market. Which one should you choose? Some offer information management pipelines, while others focus on digital imaging communications in medicine (DICOM). You might want to start by considering your goals and which cloud will help you meet them.

Interoperability cloud offerings

Microsoft Azure Healthcare API

Azure Healthcare APIs provide a PaaS platform where customers can ingest and manage their PHI data. Customers who work with health data can use these Azure APIs to connect disparate sets of PHI for machine learning, analytics, and AI.

Key features include:

- Structured data, such as medical records from HL7 or C-CDA, generated by health devices, available through apps like HealthKit and Google Fit, or held in other databases, can be ingested and translated into FHIR (a minimal query sketch follows this list).
- Unstructured data can be mapped and annotated to FHIR, which is viewable alongside other structured clinical information.
- DICOM data can be ingested through an API gateway, and the service will extract relevant metadata from images and map it to patient records.
- Devices generating biometric data can provide essential insights on health trends to care teams through FHIR integration.
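As a minimal sketch of consuming such a service, the snippet below searches a FHIR endpoint over the standard REST interface. The base URL, patient ID, and token are placeholders; Azure Health Data Services normally issues access tokens via an OAuth2 flow against Azure AD:

```python
# Minimal sketch of reading FHIR data over the standard REST interface.
# Endpoint and token are placeholders, not a real deployment.
import requests

FHIR_BASE = "https://example-workspace.fhir.azurehealthcareapis.com"  # placeholder
TOKEN = "<oauth2-access-token>"                                       # placeholder

# Search for heart-rate Observations (LOINC 8867-4) for one patient.
resp = requests.get(
    f"{FHIR_BASE}/Observation",
    params={"patient": "example-patient-id", "code": "http://loinc.org|8867-4"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,
)
resp.raise_for_status()
for entry in resp.json().get("entry", []):   # FHIR search returns a Bundle
    obs = entry["resource"]
    print(obs["effectiveDateTime"], obs["valueQuantity"]["value"])
```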

Amazon Healthlake

Amazon’s HealthLake service means users no longer have to worry about obtaining, provisioning, and managing the resources needed for infrastructure. Users only need to create a new datastore in the AWS Console and configure it according to their encryption preference (i.e., an AWS-managed key or Bring Your Own Key).

Once the datastore is available, users can directly create, read, update, delete, and query their data. Furthermore, since Amazon HealthLake exposes a REST Application Programming Interface (API), users can integrate their application through several SDKs.
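As a sketch of that provisioning step, the snippet below creates a datastore with the boto3 healthlake client. The datastore name and key choice are placeholders, and parameters should be checked against current AWS documentation:

```python
# Sketch of provisioning a HealthLake datastore with boto3; name and region
# are placeholders. Verify parameters against the AWS docs before use.
import boto3

client = boto3.client("healthlake", region_name="us-east-1")

resp = client.create_fhir_datastore(
    DatastoreName="example-datastore",   # placeholder name
    DatastoreTypeVersion="R4",           # HealthLake stores FHIR R4
    SseConfiguration={                   # or CUSTOMER_MANAGED_KMS_KEY for BYOK
        "KmsEncryptionConfig": {"CmkType": "AWS_OWNED_KMS_KEY"}
    },
)
print(resp["DatastoreId"], resp["DatastoreStatus"])  # e.g. '...', 'CREATING'
```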

If you are working with a format that is not FHIR, the company has included several connectors which allow easy conversion from HL7v2, CCDA, and flat file data to FHIR.

Google Healthcare Data Engine

Healthcare Data Engine contains the Google Cloud Healthcare API, tailored to provide longitudinal clinical insights in FHIR. It can map more than 90 percent of HL7 v2 messages – medications and patient updates – to FHIR across leading EHRs.

The goal is to enable a cloud environment for advanced analytics and AI applications that help healthcare and life sciences organizations harmonize data from EHRs, claims data, and clinical trials.

Cloverleaf FHIR Server

Infor has traditionally been at the forefront of helping solve interoperability challenges within healthcare organizations. The Infor Cloverleaf suite has now released a next-generation solution.

Infor FHIR Server provides a way for healthcare organizations to use modern technologies to digitize their operations by connecting data from both legacy and modern solutions into a single system. Implementations also support local requirements of the HL7 FHIR standard, making data available through secure web APIs for further analysis.

The FHIR server is part of a more overarching data interoperability platform that helps organizations with clinical data exchange. It has prebuilt connectors for easy integration into modern and legacy systems and continuous or batch processes.

Healthcare industry clouds

Google 

Is it enough to have big clients like the Mayo Clinic and CommonSpirit, among others, on board? Is Google’s traction in the market significant enough? The Fitbit acquisition might provide another benefit, since Fitbit will be integrated into Google’s virtual care and remote patient monitoring services.

The Care Studio platform, which provides a single, centralized view of a patient across diverse EMR systems, has also been beneficial. I am a fan of the Google search capability for clinicians.

Microsoft 

With its recent acquisition of Nuance, Microsoft’s health cloud is placing a greater emphasis on voice solutions. The primary product is DAX integration with Microsoft Teams for virtual care. Microsoft also has superior stickiness in the Microsoft 365 ecosystem, because most healthcare institutions already use 365.

Microsoft has a significant advantage since it’s one of the easier products to get up and running quickly. I believe that Microsoft will do well in this market.

Workday

The healthcare ERP cloud vendor places particular emphasis on employee experience, given that health institutions around the world are facing shortages in all areas. Workday ERP adoption has been widespread among healthcare organizations, partly because supply chain is at the forefront of cost savings and companies want to get to the bottom line of patient care.

Oracle

The recent acquisition of Cerner by Oracle has caused a stir in the industry, with many wondering if it will be a game-changer or just another failed attempt at integration. Only time will tell. The company still has a long way to go before achieving its bold vision of creating a master patient database, but I applaud the effort nonetheless.

Key themes for decision makers

- Who is your preferred partner? CIOs will use their partners to select their cloud interoperability platform. If you’re already a heavy user of Azure and 365, stick with Microsoft. The same applies to the other providers.
- Pick a partner and go all in. This is not a time to pilot, since these solutions solve the same problem and provide a similar playbook on interoperability.
- Invest in upskilling engineers, emphasizing cloud-native development while mastering cloud-to-cloud integration. Avoid any potential for vendor lock-in.
- If you solicit big four consulting firms for help with your assessment, be mindful that they may give you biased advice because of their existing partnerships and joint ventures with healthcare cloud providers.

Elaborating on some points from my previous post on building innovation ecosystems, here’s a look at how digital twins, which serve as a bridge between the physical and digital domains, rely on historical and real-time data, as well as machine learning models, to provide a virtual representation of physical objects, processes, and systems.

Keith Bentley of software developer Bentley Systems describes digital twins as the biggest opportunity for IT value contribution to the physical infrastructure industry since the personal computer, and they’re used in a wide variety of industries, lending enterprises insights into maintenance and ways to optimize manufacturing supply chains.

By 2026, the global digital twin market is expected to reach $48.2 billion, according to a report by MarketsAndMarkets.com, and the infrastructure and architectural engineering and construction (AEC) industries are integral to this growth. Everything from buildings, bridges, and parking structures to water and sewer lines, roadways, and entire cities is ripe for reaping the value of digital twins.

Here’s a look at how digital twins are disrupting the status quo in the infrastructure industry — and why IT and innovation leaders at infrastructure and AEC enterprises would be wise to capitalize on them.

Redrafting the business model

For decades in the AEC industry, work has been performed on a project-by-project basis using computer-aided design (CAD) and more recently building information modeling (BIM) software to create specific 2D and 3D deliverables. The industry is now moving toward integrated suites of tools and industry clouds, which open the door to new business models, industry ecosystems, and more collaborative ways of working.

As the use of digital twins advances, new possibilities for annuity revenues are opening up as well for AEC firms to manage and maintain infrastructural digital twins for their clients.

These new business models are disrupting the infrastructure industry and reconfiguring opportunities as the industry adjusts to new ways of working. Digital twins will likely do for the infrastructure space what various platform models have already done for music, books, retail, and gig economy services.

Thanks to the cloud-based platform business model, possibilities will open up not only for operations and maintenance services around core digital twin models, but also for value-added digital services wrapped around these twins, such as visualization, collaboration, physical and cybersecurity, data analytics, and AI-enabled preventative maintenance.

Plus, infrastructure developers can partner with digital twin providers and the surrounding ecosystem of service providers to benefit from the sale of the physical asset as well as the provisioning of ongoing digital services via digital twin models. Over time, these subscription-based services could add a significant amount to the original sale price. For example, a real estate project of 100,000 square feet could net $1 million in add-on revenues over five years from digital twin-related services, and nearly 80% of an asset’s lifetime value is realized in operations.

Digital twin use cases and ROI

The full suite of digital twin use cases encompasses many areas, but one of the largest is in helping infrastructure become more efficient, resilient, and sustainable. With 70% of the world’s carbon emissions having some link to the way infrastructure is planned, designed, built, or operated, digital twins can help with visibility and insights for real-time decisions. Using our earlier example, if a 100,000-square-foot building has $200,000 in annual maintenance costs, the digital twin may save 25% of that and add a further $160,000 of value in environmental, security, and usability benefits such as meeting-room booking, space utilization analytics, and process visibility.
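Worked through in Python, the building example looks like this (all figures taken from the text above):

```python
# The building example above, worked through. All figures come from the text.
annual_maintenance = 200_000
maintenance_savings = 0.25 * annual_maintenance   # $50,000/year saved
added_value = 160_000                             # environmental/security/usability
print(f"Annual digital-twin benefit: ${maintenance_savings + added_value:,.0f}")
# Annual digital-twin benefit: $210,000
```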

Another use case relates to worker safety. Bridge inspectors, for instance, often still suspend themselves from ropes, but with drone-based bridge inspections, such as those by Manam that capture photogrammetry used to assemble a 3D digital twin, they can move much of the inspection process into the office. This saves time and greatly reduces injury risk. With each US state often having tens of thousands of bridges to inspect, the ROI for state Departments of Transportation is highly significant. Bridge inspectors still need to go out into the field with tools, but the 3D model provides an additional technique for rapid visual inspection, detailed analysis, and even AI-detected defects.

And from a security perspective, a digital twin for the Capital One Arena in Washington D.C., for instance, acts as a proving ground for the latest innovations in intelligent building sensor suites to help first responders rapidly prioritize search and rescue areas when emergencies occur.

A real-time system of record

By addressing the full lifecycle from construction to operations and maintenance, infrastructure digital twins provide a system of record and a single source of truth for all parties involved. The former BIM approach was the system of record during the plan, design, and build phases of a project, but it typically stopped once delivery was made to building operators.

As a living system of record, the digital twin merges the visual and geometric representation of the asset, process, or system with the engineering data, IT data, and operational data (such as IoT and SCADA) all in a real-time representation of the physical asset.
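As a rough illustration of that merge, here is a minimal, vendor-neutral sketch of a twin record that joins static engineering data with live telemetry; all field names are invented for illustration:

```python
# Minimal sketch of the "living system of record" idea: one object joining
# static engineering data with streaming telemetry. Not any vendor's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DigitalTwin:
    asset_id: str
    geometry_uri: str                # link to the 3D/BIM model
    engineering_data: dict           # as-built specs, materials, etc.
    telemetry: dict = field(default_factory=dict)   # latest sensor readings

    def ingest(self, sensor_id: str, value: float) -> None:
        """Fold a live IoT/SCADA reading into the twin's current state."""
        self.telemetry[sensor_id] = {
            "value": value,
            "at": datetime.now(timezone.utc).isoformat(),
        }

bridge = DigitalTwin("bridge-042", "s3://models/bridge-042.ifc",
                     {"span_m": 310, "material": "steel"})
bridge.ingest("strain-gauge-7", 412.5)
print(bridge.telemetry)
```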

Without digital twins, architects often have no visibility into the operational side of their designs, something that could be valuable for feedback and continuous improvement in order to modify and refine designs over time.

For owners and operators, the digital twin provides an up-to-date virtual model they can view anytime from anywhere. They also have visibility into how these assets are performing including past, present, and future indicators.

Visualization and the metaverse

For complex systems such as buildings, visualization — including renderings, videos, and AR/VR/XR — is an indispensable element to clearly unlock the benefits of digital twins by communicating plans and ideas. AR inspection in particular helps site managers immediately flag mistakes for time and cost savings. They can also scan QR codes onsite to inspect the digital twin data associated with any physical equipment in the facility, such as HVAC systems or mechanical, electrical, and plumbing (MEP) equipment. And in VR mode, they can perform remote inspections of all data layers built into the digital twin model via fly throughs.

“We’ve seen an uptake in live digital twins in recent months,” says Martin Rapos, CEO of 3D BIM developer Akular. “In addition to the master integration of building data to break IoT and other building systems silos, there’s increased need for advanced visualization, where the data needs to be geolocated and accurately tagged on 2D or 3D files. The use of VR, MR and mobile devices in working with the digital twin is on the rise as well, allowing builders and asset operators to bring the digital twin from the office to the site, which is what the industry has been trying to achieve for years.”

As also discussed in my previous post, integrating visualization tools and capabilities into digital twin solutions is key to the technology stack and overall ecosystem, so customers can better visualize and collaborate around design or operational decisions regarding their physical assets. Compared to other industries, infrastructure has been slow to digitally transform. But over the next two years, the shift to digital twins will likely move into the early mainstream and propel the industry forward. CIOs and executives working in the industry should watch these developments closely and structure their own digital twin strategies to best unlock the technology’s potential.
