Each quarter HP’s security experts highlight notable malware campaigns, trends and techniques identified by HP Wolf Security. By isolating threats that have evaded detection tools and made it to endpoints, HP Wolf Security gives an insight into the latest techniques used by cybercriminals, equipping security teams with the knowledge to combat emerging threats and improve their security postures.

Discover the following highlights uncovered this quarter.

Threat actors continued to rely on living-off-the-land tactics in Q3, abusing tools built into Windows to conduct their attacks. The HP Threat Research team identified a new malware campaign that relied entirely on living-off-the-land tools. The attackers impersonated a shipping company to spread Vjw0rm and Houdini script malware. But time may be up for these malware families, given the deprecation of VBScript announced by Microsoft in October 2023. We expect threat actors will shift to tools written in other interpreted languages, such as Batch and PowerShell, in response.
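As a rough illustration of the lure involved, the short Python sketch below flags files that hide a script extension behind a document-style extension (for example, a “shipping invoice” ending in .pdf.vbs), the kind of disguise script-malware campaigns like this rely on. It is a minimal sketch, not HP Wolf Security’s detection logic; the scan directory and both extension lists are assumptions chosen for the example.

```python
# Minimal sketch: flag files with a deceptive double extension
# (e.g. "shipping_invoice.pdf.vbs") that pass script malware off as a
# document. The directory and extension lists are illustrative
# assumptions, not HP Wolf Security's detection logic.
from pathlib import Path

SCRIPT_EXTENSIONS = {".vbs", ".vbe", ".js", ".jse", ".wsf", ".hta"}
DECOY_EXTENSIONS = {".pdf", ".doc", ".docx", ".xls", ".xlsx", ".txt"}

def find_double_extension_lures(root: str) -> list[Path]:
    """Return files whose final extension is a script type hidden
    behind a document-style extension."""
    hits = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        suffixes = [s.lower() for s in path.suffixes]
        if (len(suffixes) >= 2
                and suffixes[-1] in SCRIPT_EXTENSIONS
                and suffixes[-2] in DECOY_EXTENSIONS):
            hits.append(path)
    return hits

if __name__ == "__main__":
    # Hypothetical scan root; point this at a quarantine or downloads folder.
    for hit in find_double_extension_lures(r"C:\Users\Public\Downloads"):
        print(f"Suspicious double extension: {hit}")
```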

The team identified a surge in the abuse of Excel add-in (XLL) files in Q3. Excel add-in malware rose to become the seventh most popular file extension used by attackers, up from 46th place in Q2. HP Wolf Security detected attackers trying to infect devices with Parallax RAT through malicious Excel add-ins masquerading as scanned invoices.
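Unlike macro documents, XLL add-ins are compiled DLLs, so a file posing as a “scanned invoice” with an .xll extension can be spotted cheaply by its executable header. The sketch below illustrates that check under stated assumptions: the folder path is hypothetical, and matching the two-byte MZ magic is a heuristic, not a substitute for proper scanning.

```python
# Minimal sketch: XLL add-ins are native DLLs, so any ".xll" file begins
# with the DOS/PE "MZ" magic bytes. The folder path is a hypothetical
# example; this heuristic is illustrative only.
from pathlib import Path

def is_pe_binary(path: Path) -> bool:
    """Return True if the file starts with the DOS/PE 'MZ' header."""
    with path.open("rb") as f:
        return f.read(2) == b"MZ"

def flag_xll_files(root: str) -> None:
    for path in Path(root).rglob("*.xll"):
        kind = "native DLL" if is_pe_binary(path) else "not a PE file"
        print(f"{path}: Excel add-in ({kind}); treat unsolicited XLLs as suspect")

if __name__ == "__main__":
    # Hypothetical scan root for the example.
    flag_xll_files(r"C:\Users\Public\Downloads")
```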

In Q3, HP Wolf Security detected a malware campaign targeting hotels in Latin America with macro-enabled PowerPoint add-ins. The presentations, sent via email, were disguised as information from a hospitality management software vendor.

HP uncovered attackers hosting fake remote access trojans (RATs) on GitHub, attempting to trick inexperienced cybercriminals into infecting their own PCs. The code repositories claim to contain full versions of a popular malware kit called XWorm, which sells for up to US$500, but instead download and run malware on the aspiring hacker’s machine.


According to Infosys research, data and artificial intelligence (AI) could generate $467 billion in incremental profits worldwide and become the cornerstone of enterprises gaining a competitive edge.

But while opportunities to use AI are very real – and ChatGPT’s democratisation is accelerating generative AI test-and-learn faster than QR code adoption during the Covid pandemic – the utopia of substantial business wins through autonomous AI is a fair way off. Getting there requires process and operational transformation, new levels of data governance and accountability, business and IT collaboration, and customer and stakeholder trust.

The reality is many organisations still struggle with the data and analytics foundations required to progress down an advanced AI path. Infosys research found 63 per cent of AI models function at basic capability only, are driven by humans, and often fall short on data verification, data practices and data strategies. It’s not surprising only one in four practitioners are highly satisfied with their data and AI tools so far.

This status quo can be partly explained by the fact that eight in 10 organisations only began their AI journey in the last four years. Just 15 per cent have achieved what’s described as an ‘evolved’ AI state, where systems can find causes, act on recommendations and refine their own performance without oversight.

Then there are the trust and accuracy considerations around AI utilisation to contend with. Gartner forecast that through 2022, 85 per cent of AI projects would deliver erroneous outcomes due to bias in data, algorithms or the teams managing them. One in three companies, according to Infosys, are using data processes that increase the risk of bias in AI right now.

Ethical use of AI is therefore an increasingly important movement being led by government, industry groups and thought leaders as this disruptive technology advances. It’s for these reasons the Australian Government introduced its AI Ethics Principles framework, which followed an AI ethics trial in 2019 supported by brands such as National Australia Bank and Telstra.

Yet even with all these potential inhibitors, it’s clear the appetite for AI is growing and spend is increasing with it.

So what can IT leaders and their teams do now to take AI out of the data science realm, and into practical business applications and innovation pipelines? What data governance, operational and ethical considerations must we factor in? And what human oversight is required?

It’s these questions technology and transformation leaders from finance, education and retail sectors explored during a panel session at the Infosys APAC Confluence event. Here’s what we discovered.

Operational efficiency is the no-brainer use case for AI

While panellists agreed use cases for AI could well be endless and societally positive, the ones gaining most favour right now centre on operational efficiency.

“We are seeing AI drive a lot deeper into the organisation around how we can revolutionise our business processes, change how we run our organisation, and all add that secret sauce from a data and analytics perspective to improve customer outcomes,” said ANZ Bank CIO for Institutional Banking and Markets, Peter Barrass.

An example is meeting legislative requirements to monitor the communications traders generate across 23 countries. AI is successfully used to analyse, interpret and monitor for fraudulent activity at global scale. Document digitisation and processing, along with chatbots, are other examples.

Across retail and logistics sectors, nearly three in 10 retailers are actively adopting AI with strong business impact, said Infosys APAC regional head for Consumer, Retail and Logistics, Andal Alwan. While personalisation is often a headline item, AI is also increasing operational efficiencies and frictionless experiences across the end-to-end supply chain.

Cyber security is another favoured use case for AI across multiple sectors, once again tying to risk mitigation and governance imperatives.

Advancing AI can’t be done without a policy and process rethink

But realising advanced AI isn’t only a technical or data actionability feat. It requires transformation at a systematic, operational and cultural level.

Just take the explosion of accessible AI to students from a learning perspective. With mass adoption comes the need for education institutions such as the Melbourne Archdiocese Catholic Schools (MACS) to actively build policies and positions around AI use. One consideration is how open accessibility of such tools can influence students. Another is protecting academic integrity.

Then it’s about ensuring leadership provides clear, consistent direction across MACS’ 300 schools on how to utilise AI in learning. “We need to educate our teachers to be able to think about how their students will use AI and how they can maximise the learning for individual students, taking on-board some of these types of tools available,” MACS chief technology and transformation officer, Vicki Russell, said.

Elevating data governance and sharing is critical

Simultaneously, data governance and practices need refinement. Alwan outlined two dimensions to the data strategy debate: intra-organisation and inter-organisation.

“Intra-organisation is about how I govern the data: What data I collect, why I’m collecting it and how am I protecting and using it,” she explained. “Then there’s inter-organisation, or between retailers, producers and logistic firms, for instance. Collaboration and sharing of data is very important. Unless there is visibility end-to-end of the supply chain, a retailer isn’t going to know what’s available and when it’s going to be arriving. All of this requires huge amounts of data, which means we’re going to need AI for scaling and to predict trends too.”

A further area of data collaboration is between retailers and consumers, which Alwan referred to as “autonomous supply chains”. “It’s about understanding demand signals from the point of consumption, be it online or physical, then translating that in real time to organisation systems to get more security of planning and supply chain. That’s another area of AI maturity we’re seeing evolving.”

Infosys Knowledge Institute’s Data + AI Radar found organisations wanting to realise business outcomes from advanced AI must develop data practices that encourage sharing and position data as currency.

But even as the financial sector works to pursue data sharing through the Open Banking regime, Barrass reflected on the need to protect the information and privacy of customers and be deliberate about the value data has to both organisation and customer.

“In the world of data, you have to remember you have multiple stakeholders,” he commented. “The customer and person who owns the data and who the data is associated with is really the curator of that information, and should have a right to where it’s shared and how it’s shared. Corporates like banks have a responsibility to customers to enable that. That needs to be wrapped up in your data strategy.”

Internally, utilising the wealth of learning and data points MACS has been capturing is a critical foundation for using AI successfully.

“The data and knowledge a business has about itself before it enters into an AI space is really important in that maturity curve,” Russell said. “Having great data and knowing what you have to a certain extent before you jump into your AI body of work or activities is important. But I also think AI can help us leapfrog. There’s knowing enough but also being open to what you might discover along that journey.”

Building trust with customers around AI still needs human oversight

What’s clear is the onus is on organisations to structurally address trust and bias issues, especially as they lean towards allowing AI to generate outcomes for customers autonomously. Ethical use of data, and trust in what information is used and how, must come into play. As a result, parallel human oversight of what the machine is doing to ensure outcomes are accurate and ethical remains critical.

“Trust in the source of information and really clear ownership of that information is really important, so there’s clear accountability in the organisational structure for who is responsible for maintaining a piece of information driving customer decision outcomes,” said Barrass.  “Then over time, as this matures, we potentially could have two sets of AI tools looking at the same problem sets and validating each other’s outcomes based on different data sets. So you at least get some validation of one set of information drivers.”

Transparency of AI outcomes is another critical element with customers if trust in AI is to evolve over time. This again comes back to stronger collaboration with data owners and stakeholders, an ability to detail the data points driving an AI-based outcome, and explaining why a customer got the result they did.

“It’s very important to be conscious of the bias and how you balance and provide vast sets of data that constantly work against the bias and correct it,” Alwan added. “That’s going to be key for the success of AI in the business world.” 

We all need to work with ChatGPT, not against it

Even as we strive for responsible AI use, ChatGPT is accelerating generative AI adoption at an unprecedented rate. Test cases are being seen in everything from architectural design to writing poetry, drafting legal statements of claim and developing software code. Panellists agreed we’re only scratching the surface of the use cases this generative AI can tackle.

In banking, it’s about experimenting in a controlled way and understanding the ‘why’ so generative AI is applied to achieve solid business outcomes, Barrass said. In the world of retail and consumer-facing industries, conversational commerce is already front and centre, and ChatGPT is set to accelerate this further, Alwan said.

For Russell, the most important thing is ensuring the future generation learns how to harness openly accessible AI tools, prompt them appropriately to extract quality information, and then reference it. In other words, education evolves and works with the technology.

It’s a good lesson for us all.


Technological disruptions continue to redefine the CIO role within corporations. As innovators and value creators, CIOs are charged with managing developments like low-code/no-code (LCNC), which is revolutionizing user-generated innovation by enabling people with little to no coding experience, or “citizen developers,” to quickly and easily deliver new capabilities on demand without having to rely on established development teams.

We are experiencing an undeniable shift toward this kind of democratized technology. Industry research shows that in 2021, LCNC platforms accounted for 75% of new app development, and Accenture’s own research says that 60% of LCNC users expect their use of the platform to increase. But sustaining this type of development in a declining talent market isn’t easy.    

With nearly one in five business leaders experiencing constraints due to the decline in tech talent, CIOs need to look beyond their traditional pool of IT professionals to a broader community, and cultivate and nurture new talent networks that bring together citizen developers with their professional counterparts.  

As the borders between business and IT blur, there’s a massive opportunity for forward-thinking CIOs to rethink how they work and lead their organizations; embracing LCNC platforms to operate smarter and faster can deliver breakthrough gains in corporate profitability and efficiency.

What’s accelerating LCNC adoption


The main areas fueling LCNC adoption are ease of use, ease of integration with existing solutions and technologies, and faster value creation. Corporations are under pressure to innovate and solve problems quicker, and those that focus on delivering experiences outperform their peers by six times in year-on-year profitability over one, three, five and seven years. So how do SMBs utilize LCNC platforms and stay relevant among larger businesses?

Tech waves have been shown to accelerate SMB business growth, and LCNC will have an equal, if not larger, impact on SMBs. Today, a growing set of SMBs is utilizing LCNC to deliver value in every facet of the business, enabling and simplifying everything from customer acquisition to back-end processes. This comes at a time when SMBs are looking for ways to compete and differentiate against larger companies and other SMBs.

There are three factors that make LCNC relevant for SMBs in today’s business environment:

- Digital maturity as a competitive necessity: More than 70% of small businesses worldwide are accelerating digitization, and 93% say COVID-19 made them more reliant on technology.
- Challenging access to digital talent: One in five SMBs surveyed said their LCNC platform search was driven by the scarcity of digitally fluent staff.
- Enterprise IT solutions do not meet SMB needs: Up to 47% of SMBs think enterprises don’t understand the challenges they face, and the movement towards LCNC illustrates that point.

New people are engaging with technology within the enterprise and broader ecosystems, and “bring your own” is fast becoming “make your own” as citizen developers take advantage of rapidly advancing LCNC tools.

Embracing a new operating model

Putting the power into people’s hands requires careful management. LCNC operating models must simultaneously balance the needs of innovation, stabilization, and scaling for the business and technology, so the CIO and IT teams can better enable crucial business change and innovation rather than act as technological gatekeepers.


CIOs should also think of capabilities falling into different categories, particularly those that are customer-facing, enterprise-wide or departmental. This categorization will help determine optimum team structures, like determining the right mix between new citizen developers and pro-code developers within the IT organization.

Plus, there’s a need to create new engagement models to enable better collaboration with CISOs and chief data officers for security and data governance. To do so, those teams must have clear roles and responsibilities to deliver user experience and foster innovation. The technology portfolio should also be segmented to work within the new model by evaluating existing applications to be migrated into LCNC.

Another model involves creating a new pool of funding for innovation with LCNC. CIOs should take charge here, driving LCNC platform providers to expose more of the inner workings of the platform, create joint options for supporting citizen developers, and simplify the effort needed to address CISOs’ concerns.

Over time, CIOs need to develop operating models that balance a mix of pro-code and citizen developers, working with LCNC platform providers to drive maturity. CIOs will continue to be the guardians of technology, but they must become stewards and co-innovators as well, guiding others, including citizen developers, to realize the promise of innovation at scale. This change requires new operating models designed to support co-innovation, enable personal productivity, and ensure that access to data by LCNC platforms is managed and backed by robust governance and security. Companies with a clear approach to LCNC that empower their people with the right tools and systems will achieve innovation at the next level and beyond.

Sriram Sabesan, senior manager, Technology Strategy, Software and Platforms at Accenture also contributed to this article.
