Cyber hygiene offers a preventative approach to future attacks in order to avoid costly remediation and recovery incidents – much like dental hygiene recommends flossing and brushing to avoid later cavities and painful procedures. 

Asking a good CISO which applications and devices should be inventoried and secured is like asking a dentist which teeth you should floss between. Four out of five will tell you, “Only the ones you want to keep.”

Cyber hygiene, while considered a key aspect of cybersecurity, is also a distinct preventative practice that uncovers data, application, infrastructure and network risks – especially the ones we’re not looking for.

A SecOps pro shared a story with me about their first sitewide inventory exercise, which discovered a PlayStation 5 running in a break room at headquarters. That may not sound like a big deal, but that game console is also a full-fledged computer that can see file systems and devices on the corporate network, capture pictures and sound from the room, surf websites and download automatic software updates.

Prevention is easier than treatment if we can remember to do it. We all know it would be safer to prevent risks and breaches through cyber hygiene across all of our endpoints rather than remediate them once they are deployed across production and exposed to attackers. 

So why isn’t cyber hygiene a good habit all enterprises can stick to?

The cultural challenges of preventative measures

Work for a few years in any decent-sized company that leans heavily on its digital backbone, and you will find preventative processes that get in the way of progress.

Maybe it’s a draconian unit testing requirement that churns out thousands of meaningless results and fails builds. Or a tedious change approvals process. Or a mandatory code freeze that causes development teams to regularly miss delivery windows.

DevSecOps teams that have experienced such entanglements are likewise worried that too much security oversight can block releases and stymie innovative improvements for customers when time-to-market means everything. 

Maybe if cyber hygiene was an executive-level priority, prevention would improve. Unfortunately, a recent cybersecurity study by Tanium found that 63% of respondents said leadership is only concerned about cybersecurity following an incident, while 79% said executives are more likely to sign off on more cybersecurity spending following a breach. Yikes.

Cybersecurity practices and tools are often concerned with protection from outside attacks – setting up secure network perimeters, creating access, authorization and authentication policies, detecting attacks, and monitoring networks and systems for the telltale signs of threat behaviors and data breaches in progress.

By contrast, cyber hygiene takes a holistic inside-out approach to prevention. This may start with a diagnostic solution such as a risk assessment, but good hygiene also represents the management plans, employee policies and the security posture of the entire organization around maintaining secure technology practices across all IT assets of the enterprise.

If done well, it should become a lightweight part of the way the company operates. Making cyber hygiene second nature might require a little evangelism and up-front planning, but once in place, it will actually make software releases, migrations and updates of on-premises and cloud-based software and infrastructure easier.

Good habits that drive cyber hygiene success

Most security breaches (anywhere from 88–95%, depending on which research you find) involve some degree of human causation. 

Therefore, organizations with a strong cyber hygiene posture exhibit several common practices that incorporate changes across people, processes and technology – in that order:

Education and behavior change. The most successful cyberattacks walk through the front door, using some combination of phishing, credential theft, rogue downloads and social engineering rather than brute force to gain entry. 

Cyber hygiene and security awareness should be part of the core training of every employee, and educational resources should be provided for customers as well to help them recognize and avoid potential threats. Education is the best way to mitigate human fallibility and prevent malicious payloads from compromising your systems.

Continuous discovery and inventory management. The first run of an automated discovery will undoubtedly turn up plenty of surprises and vulnerabilities. But discovery isn’t a one-time compliance check, especially in today’s constantly changing cloud and hybrid IT environments. New ephemeral cloud instances, device endpoints and software can be introduced to the operating environment at any moment. 

Once every IT asset is exposed to the light of day, security and departmental leaders need an inventory of the current environment, with a view toward regular maintenance, updates and end-of-life decommissioning of any asset that remains past its shelf life.

Triage and prioritization. Even with the best vulnerability scanning and threat detection setup, no company will ever have enough skilled security and SRE professionals to respond to 100% of the potential issues.

Organizations must prioritize issues that are detected, using a risk scoring system that takes into account the asset’s criticality to ongoing business, the value of the data it handles, as well as its level of integration with other systems, or exposure to the outside world. An old system that is no longer connected to anything can wait for decommissioning, while a critical data store with private information demands immediate attention.
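As a sketch of that kind of scoring, the toy model below weighs the factors named above: business criticality, data sensitivity, integration level and outside exposure. The weights and the sample assets are illustrative assumptions, not an industry standard.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    criticality: int        # 1 (low) .. 5 (mission-critical)
    data_sensitivity: int   # 1 (public) .. 5 (regulated/private)
    integrations: int       # number of connected systems
    internet_exposed: bool

def risk_score(asset: Asset) -> float:
    """Higher score means triage sooner. Weights are illustrative."""
    score = 3 * asset.criticality + 3 * asset.data_sensitivity
    score += min(asset.integrations, 10)  # cap so one factor can't dominate
    if asset.internet_exposed:
        score *= 1.5                      # outside exposure raises urgency
    return score

assets = [
    Asset("legacy-batch-host", 1, 2, 0, False),  # disconnected: can wait
    Asset("customer-pii-db", 5, 5, 8, True),     # critical: act now
]
for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{a.name}: {risk_score(a)}")
```

Even a crude score like this makes triage queues defensible: the disconnected legacy host sorts far below the exposed data store.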

Zero-trust policies mean every user is considered untrusted by default and is therefore blocked from access without explicitly defined authorization in IAM (identity and access management) systems. 

Zero-trust policies shouldn’t just cover users. They need to be extended to every device endpoint as well. An API call from a medical device on a hospital network, or a query from a microservice in AWS or GCP, shouldn’t be able to set off a chain reaction. In practice, this policy often includes a least-privilege access model, where each endpoint can access only the minimum resources necessary to support a business function.
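A deny-by-default authorization check can be sketched in a few lines. The principals, resources and actions below are hypothetical; a real IAM system would evaluate far richer policies, but the core rule is the same: no tuple, no access.

```python
# Deny-by-default authorization: a request passes only if an explicit grant
# exists for the (principal, resource, action) combination.
# All principals and resources here are invented for illustration.
GRANTS = {
    ("infusion-pump-01", "ehr-api", "read"),
    ("billing-service", "invoice-db", "read"),
    ("billing-service", "invoice-db", "write"),
}

def is_allowed(principal: str, resource: str, action: str) -> bool:
    """Zero trust: nothing is implicitly trusted; only explicit grants pass."""
    return (principal, resource, action) in GRANTS

print(is_allowed("billing-service", "invoice-db", "write"))  # granted
print(is_allowed("infusion-pump-01", "invoice-db", "read"))  # blocked by default
```

Note that the medical device can reach its one sanctioned API and nothing else, which is exactly the least-privilege posture described above.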

The Intellyx take

One thing is certain: cybercriminals and hackers haven’t overlooked the expanded enterprise attack surface that so much change has created.

In a modern application world where cloud instances and endpoints come and go in an instant, security and resiliency can often get overlooked in favor of speed to market, scalability and interoperability concerns. 

Don’t get tunnel vision racing your organization past the preventative warning signs and guardrails a robust cyber hygiene practice can offer. 

Learn how Tanium is bringing together teams, tools, and workflows with a Converged Endpoint Management platform. 


You’ve invested in state-of-the-art, end-to-end security solutions. You’ve implemented robust security and privacy policies and outlined best practices. You’ve got monitoring and detection in place at every level and you apply updates as soon as they’re available. You’ve done everything a smart and responsible organization needs to do to safeguard your systems, networks, data, and other assets from cyber threats.

The question is: are you prepared to recover from a cyber event?

As cyber threats increase in frequency and sophistication, most businesses will eventually fall prey to a cyber event, despite their best efforts. The longer it takes to recover, the more it will cost. Swift recovery is paramount to minimizing damage. Simply put, organizations must prepare to recover from a cyber event before it occurs.

Why a disaster recovery plan may not be good enough

Many organizations have disaster recovery plans and assume the concept of disaster recovery and cyber recovery are the same: a system or location goes down, you shift operations, complete recovery efforts, and return to normal. However, the two scenarios have some vital differences.

When a disaster happens, such as a data center fire or failed server hardware, you get alerted right away. You know when and where the disaster occurred and have a predictable recovery point objective (RPO).

On the other hand, with a cyber event, you’re sure of only one thing: there’s been an attack. You don’t know when it began, where it happened, the scope of the damage, or how to mitigate the intrusion. Although you may have been alerted on a Tuesday at 8 AM, the cyber event may have occurred days, weeks, or even months earlier — which means that the initial damages you’re aware of may only scratch the surface of a much bigger problem.

Furthermore, cyber-attacks have become increasingly sophisticated and commonplace, and a 5-year-old disaster recovery plan may not cover modern scenarios. If you don’t have a cyber event recovery plan, it could take days or even weeks to recover, costing time, money, customer trust, and lost business.

Store secondary copies of information offsite or off-network

In addition to the potential for natural disasters, storing data solely onsite exposes your business to risks such as backup file corruption should your local network suffer an attack. As part of your cyber event recovery plan, ensure you’re storing secondary copies of information offsite or off-network. Keep these copies readily available, so you can begin recovery efforts immediately to limit damage and costs.

Secondary data storage solutions range from offsite servers or tape storage to private or public cloud backups. Cloud storage is your best bet when it comes to accelerating the time to recovery. Data is easily accessible and does not require manual intervention, meaning recovery work can start quickly.   

Determine data classification and order of recovery

Data classification involves categorizing information based on sensitivity and business value. Organizations have many reasons to perform data classification, ranging from security and data compliance to risk management and storage cost control.

When recovering from a cyber event, data classification makes it easier to identify what data has been lost, the scope of the damage, and, ultimately, the event’s cause. When organizations do not understand data classification, recovery efforts take far longer and require far more work. In some cases, they may not be able to recover fully.

Another critical and related piece of the puzzle is understanding the order in which your environment needs to be recovered. While many organizations are aware of this need, many are not prepared. Data classification enables you to identify codependencies within your IT topology. If your most critical application relies on lesser or noncritical systems to function, those supporting systems must also be labeled as critical.
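The dependency-aware recovery ordering described above is, at its core, a topological sort: bring up each system only after everything it relies on. A minimal sketch using Python’s standard library, with an invented dependency map:

```python
from graphlib import TopologicalSorter

# Hypothetical map: each application lists the systems it depends on.
# static_order() yields dependencies before dependents, which is exactly
# the order in which recovery must proceed.
dependencies = {
    "customer-portal": {"auth-service", "orders-db"},
    "auth-service": {"directory"},
}

recovery_order = list(TopologicalSorter(dependencies).static_order())
print(recovery_order)  # foundational systems first, customer-portal last
```

Running the sort also surfaces the point made above: “noncritical” systems like the directory service turn out to sit at the front of the recovery queue.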

Have a failback plan

Once you have mitigated damages from the cyber event, you need to return operations from the secondary location to your original location. Having a failback plan in place — whether moving back to infrastructure that’s on-premises or in the cloud — enables your company to resume business as soon as possible with minimal downtime or data loss. Unfortunately, very few companies are positioned to do this quickly, costing additional time and money.

Your failback plan should incorporate all data and data changes as well as workflows. The failback plan should include your data classifications and order of recovery as well as testing to verify data accuracy, primary systems, and network quality. Ideally, the failback process should be automated.

Get industry-leading cyber recovery as a service

No matter how secure you’ve made your infrastructure, the likelihood of a cyber event impacting your organization eventually is relatively high. With cyber recovery plans in place, you can minimize damages and costs and accelerate the time to recovery.

One of the best things your organization can do is to take advantage of data or cyber recovery as a service (CRaaS). Based in the cloud, CRaaS saves time and money when a cyber event happens because it streamlines information recovery.

Zerto on HPE GreenLake makes cyber recovery faster and easier and frees up your organization to mitigate the threat and stop the intrusion, reducing the overall cost and damages caused by the cyber event. Benefits include down-to-the-second RPOs via continuous data protection and journal-based recovery. Zerto also offers the industry’s fastest recovery time objectives.

To learn more about how you can improve your readiness for a cyber event, talk to one of GDT’s cyber recovery specialists.


The cyber-attacks on Optus and Medibank recently have brought into focus the devastating impact breaches can have on the reputation of any organisation.

The Optus attack, which was the largest and most high profile in Australian history, has left almost 10 million customers understandably livid that their personal information was stolen.

It is believed that the Medibank attack began when an individual with high-level access to the health insurer’s systems had their credentials stolen by a hacker, who then put them up for sale. Optus had an application programming interface (API) online that did not need authorisation or authentication to access customer data.

The reputational impact of both cyber-attacks will be felt for some time to come. They are a warning shot to Australian businesses that simply can’t be ignored.

Many CISOs will now be taking a closer look at their internal cyber education programs, among other things, to give staff the best chance of not falling victim to cyber-attacks that can severely damage their organisations.

Sarah Sloan, head of government affairs and public policy at Palo Alto Networks, and Matt Warren, director of RMIT’s Cyber Security and Innovation Research Centre, recently joined CIO Australia’s Byron Connolly for a discussion on how Australian organisations can improve their cyber education programs. The panel discussion was held during the launch of the Palo Alto CyberFit Nation program.

The cyber challenges that businesses face are widely known, and many of them centre on human and organisational issues. The human aspect of cyber security is a complex issue that hackers look to exploit, from scam attacks to the spread of malware such as ransomware, says RMIT’s Warren.

“We live in the new cyber normal that organisations are facing as they become greater targets for cyber-attacks. One of the key reasons for this challenge is that organisations cannot manage their increasingly complex systems and it is taking time for them to accept cyber security as a business risk rather than a technical one,” says Warren.

Palo Alto Networks’ Sloan says organisations across Australia are becoming more aware of cyber risks and the importance of educating staff, their customers and even students on how to mitigate these risks.

“Many companies are incorporating cyber security as part of their workplace curriculum and regularly test the effectiveness of that training, for example, via phishing email testing,” she says.

While doing this, organisations should ensure their cyber education programs also incentivise good behaviour, says Sloan.

“This could include rewarding individuals who identify all the phishing attempts and report them to the organisation’s security operations team. These simple measures can go a long way to creating a security culture and environment where people feel comfortable to come forward if and when they may click on that link,” she says.

When creating training programs, enterprises may also want to look beyond the ‘click’ to identify why an individual has taken certain actions and adjust their responses/training for those people accordingly, says Sloan.

“For example, did they click on the link because the content of the email has elicited a particular response or because they have been pressured by a sense of urgency?” she asks.

Governments across the world have behavioural policy areas – such as Australia’s Behavioural Economics Team within the Department of Prime Minister and Cabinet – to research why individuals do or do not take certain actions or respond to certain messages, says Sloan.

“Some of this thinking could be applied to the cyber security training and education space to help tailor messaging to particular individuals and ensure better security outcomes,” she says.

But Sloan points out that it’s important to remember that we are all human, we all make mistakes and it only takes one click.

“So if your organisation’s corporate cyber strategy is that all users will behave in a certain way or comply with certain policies, you really don’t have a corporate cyber strategy.

“Every organisation must look at preventative measures, ensure they can respond to threats in real-time and leverage automation, as well as understand their cyber security posture through the eyes of the adversary,” says Sloan.

Filling the gaps in cyber training

Cyber safety and cyber security awareness should be taught from the school level, says RMIT’s Warren.

He says the Office of the eSafety Commissioner does great work at schools raising awareness around cyber safety and maybe cyber security could be combined with that messaging.

Palo Alto Networks’ Sloan adds that the industry is certainly heading in the right direction with several programs helping to raise awareness of cyber issues while providing students with tools to protect themselves.

But more needs to be done to embed cyber security and technology across the school and university curriculums, she says.

“In the digital era, it’s important that all of our graduates – our lawyers, accountants, doctors and economists – understand cyber security risks, mitigations and how they are relevant to their professions.

“Raising awareness across faculties and disciplines will not only lead to better security outcomes, it may also lead to an interest in further study in cyber. This may help us with our cyber security skills shortage,” says Sloan.

However, there is a ‘pipeline problem’ at the school level, says RMIT’s Warren. If an undergraduate student starts studying cyber security in 2023, they will complete their degree in 2026, he says.

“The issue is that not all universities offer cyber security and it means that alternative courses such as micro-credentials, and other alternative pipelines need to be developed.”

Creating a cyber aware board

From a policy and legislative point of view, Australia has some great foundations to support and enhance cyber security awareness at the board level, says Palo Alto Networks’ Sloan.

There is a range of directors’ responsibilities when it comes to duty of care and diligence around cyber security, as captured in the Corporations Act. The Australian Government has also elevated cyber security risk to the board through a series of reforms to the Security of Critical Infrastructure Act 2018.

These reforms aim to enhance Australia’s national resilience by introducing varying security obligations across 11 regulated critical infrastructure sectors, says Sloan.

“One of the relevant obligations for directors under this Act is that regulated critical infrastructure assets may be required to report to the government annually as part of their risk management programs, which must address cyber security risks.

“This new obligation is expected to elevate cyber security to boards across Australia,” says Sloan.

From a guidance and education point of view, the Australian Securities and Investment Commission has issued statements on cyber guidance, emphasising the importance of active engagement by the board in managing cyber risk. The Australian Cyber Security Centre (ACSC) has also released guidance on questions that board members can ask about cyber security risk management.

RMIT’s Warren adds CEOs need to be aware of what cyber security is and why it should be viewed as a business risk.

“It is coming to the stage that lack of awareness is no longer an issue. CEOs and their boards also have to understand the complexity of the systems that their organisations are operating, and the risks associated with that complexity,” he says.


Work has changed dramatically thanks to the global COVID pandemic. Workers across every market sector in Australia are now spending their workdays alternating between offices and other locations such as their homes. It’s a hybrid work model that is certainly here to stay.

But moving workers outside the network perimeter presents cyber security challenges for every organisation. It provides an expanded attack surface as enterprises ramp up their use of cloud services and enable staff to access key systems and applications from just about anywhere.  

Senior technology leaders gathered in Melbourne recently to discuss the cyber security implications of a more permanently distributed workforce as their organisations move more services to the cloud. The conversation was sponsored by Palo Alto Networks.

Sean Duca, vice-president and regional chief security officer, Asia-Pacific & Japan at Palo Alto Networks, says that with the primary focus now on safely and securely delivering work to staff, irrespective of where they are, organisations need to think about where data resides, how it is protected, who has access to it and how it is accessed.

“With many applications consumed ‘as a service’ or running outside the traditional network perimeter, the need to do access, authorisation and inspection is paramount,” Duca says.

“Attackers target the employee’s laptops and applications they use, which means we need to inspect the traffic for each application. The attack surface will continue to grow and also be a target for cybercriminals. This means that we must stay vigilant and have the ability to continuously identify when changes to our workforce happen, while watching our cloud estates at all times,” he says.

Brenden Smyth from Palo Alto Networks adds the main impact of this more flexible workforce on organisations is that they no longer have one or two points of entry that are well controlled and managed.

“Since 2020, organisations have created many hundreds if not tens of thousands of points of entry with the forced introduction of remote working,” he says.

“On top of that, company boards need to consider the personal and financial impacts [of a breach] that they are responsible for in the business they run. They need to make sure users are protected within the office, as well as those users connecting from any location,” he says.

Gus D’Onofrio, chief information technology officer at the United Workers Union, believes that there will come a time when physical devices will be distributed among the workforce to ensure their secure connectivity.

“This will be the new standard,” he says.

Iain Lyon, executive director, information technology at IFM Investors, says the key to securing distributed workforces is to ensure the home environment is suitably secure so the employee can do the work they need to do.

“It may be that for certain classifications of data or user activity, we will need to set up additional technology in the home to ensure compliance with security policy. That challenge is both technical and requires careful human resource thought,” he says.

Meeting the demands of remote workers

During the discussion, attendees were asked if security capabilities are adequate to meet the new demands of connecting remote workers to onsite premises, infrastructure-as-a-service and software-as-a-service applications.

Palo Alto Networks’ Duca says existing cyber capabilities are only adequate if they do more than connectivity (access and authorisation).

“It’s analogous to an airport; we check where passengers go based on their ID and boarding pass and inspect their person and belongings. If the crown jewel in an airport is the planes, we do everything to protect what and who gets on.

“Why should organisations do anything less?” he asks. “If you can’t do continuous validation and enforcement, what is the security efficacy of the security capability?”

Meanwhile, Suhel Khan, data practice manager at superannuation organisation Cbus, adds that distributed workforces need stronger perimeter and edge security systems, fine-grained ‘joiner-mover-leaver’ access controls and entitlements, as well as geography-sensitive content management and distribution paradigms.

“We have reached a certain baseline in regard to the cyber security capabilities that are available in the market. The bigger challenge is procuring and integrating the right suite of applications that work across respective ecosystems,” he says.

Held back by legacy systems

Many enterprises are still running legacy systems and applications that can’t meet the demands of a borderless workforce.

Palo Alto Networks’ Smyth says the cyber impacts of sticking with older systems and applications are endless.

“Being directly connected to SaaS and IaaS apps without security, patch management, vendor support – the list goes on – means organisations will not have full control of their environment,” he says.

Duca adds that organisations running legacy platforms could see an impact on productivity from their employees, and the solution may not be able to deal with modern-day threats.

“Every organisation should use this as a point in time to reassess and rearchitect what the world looks like today and what it may look like tomorrow. In a dynamic and ever-changing world, businesses should look to a software-driven model as it will allow them to pivot and change according to their needs,” he says.

Like most enterprises that have built technical systems for core business functions over the past 10 years, Cbus faces challenges integrating software suites for seamless end-to-end process flows, says Cbus’ Khan.

“There are several app modernisation transformation programs to help us move forward. I believe that there will always be ‘heritage systems’ to take care of and transition away from.

“The only difference is that in the near future, these older systems will be built on the cloud rather than [run] on-premise and we would be replacing such cloud-native legacy applications with autonomous intelligent apps,” Khan says.

Meanwhile, IFM Investors’ Lyon says that, like every firm, IFM has several key applications that are mature and do an excellent job.

“We are not being held back. Our use of the Citrix platform to encapsulate the stable and resilient core applications has allowed us to be agnostic to the borderless nature of work,” he says.

Centralising security in the cloud

The advent of secure access service edge (SASE) and SD-WAN technologies has seen many organisations centralise security services in the cloud rather than keep them at remote sites.

Palo Alto Networks’ Duca says gaps will continue to appear for many years from inconsistent policies and enforcement. With the majority of apps and data now sitting in the cloud, centralising cyber services allows for consistent security close to the crown jewels.

“There’s no point sending the traffic back to the corporate HQ to send it back out again,” he says.

The decision about whether to centralise security services in the cloud or keep them at remote sites comes down to the risk appetite of the organisation.

“In superannuation, a good proportion of cyber security programs are geared towards being compliant and dealing with threats due to an uncertain global political outlook. Organisations that can afford to run their own backup/failsafe system on premise should consider [moving this function] to the cloud. Cloud-first is the dominant approach in a very dynamic market,” he says.

United Workers Union’s D’Onofrio adds that the pros of keeping security services at remote sites are faster access and response times, which is ideal for geographically distributed workforces and customer bases. A con, he says, is that a distributed footprint implies stretched security domains.

On the flipside, security domains are easier to manage if they are centralised in the cloud but will deliver slower response times for customers and staff who are based geographically afar, he says.


Your challenge: managing millions of dynamic, distributed, and diverse IT assets. 

With globally distributed workforces and shadow assets growing exponentially, maintaining a complete and accurate inventory of every IT asset and achieving real-time visibility at scale is more challenging than ever before. After all, to keep our doors and windows locked, we need to know how many there are and where they are. 

Yet the industry has failed to deliver a viable solution to the visibility problem, offering hub-and-spoke models that slow and saturate networks and limit visibility in modern, complex environments.  

It’s no wonder many organizations can’t accurately report essential details about their environment. Solving this problem requires you to get back to basics.

To preserve and improve cyber hygiene, you first need to know what IT assets you have. Do you have 50,000, 100,000 or 500,000 computers and servers in your organization? Where are they? What are they? What’s running on them? What services do they provide? 

Answering those questions is what developing asset visibility—and following an asset discovery and inventory process—is all about. It’s the foundation for creating and maintaining cyber hygiene.

Why cyber hygiene depends on asset visibility

To manage your endpoints, you need three levels of knowledge:

What assets do you have, and where are they?
What software is running on them, and are they licensed? You need more than a hostname or an IP address.
How do the machines on your network relate to one another, and what is their purpose? In the world of servers, for example, you may have a group of servers that exist solely to host a service, like a company website.
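The three levels of knowledge above can be captured in a single inventory record. A minimal sketch, in which the hostnames, software and service roles are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class AssetRecord:
    hostname: str                                  # level 1: what it is...
    ip: str
    asset_type: str                                # laptop, server, printer, IoT...
    location: str                                  # ...and where it lives
    software: dict = field(default_factory=dict)   # level 2: name -> version
    serves: list = field(default_factory=list)     # level 3: business purpose

inventory: dict[str, AssetRecord] = {}

def register(record: AssetRecord) -> None:
    inventory[record.hostname] = record

register(AssetRecord("web-01", "10.0.4.11", "server", "us-east",
                     {"nginx": "1.24.0"}, ["company-website"]))

# Level-3 questions become simple queries, e.g. "which machines host our website?"
website_hosts = [r.hostname for r in inventory.values()
                 if "company-website" in r.serves]
print(website_hosts)
```

The point of the structure is that each level answers a different operational question: location and type for discovery, software versions for patching and licensing, and service roles for understanding blast radius.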

All companies need this information, which in modern IT changes constantly. Network assets come and go, especially with bring-your-own-device (BYOD) policies and more companies encouraging employees to work from home (WFH).

And as networks become more complex and change faster, it becomes harder to maintain visibility into them. The consequences of losing sight of what assets there are and what those assets are doing become greater and greater. 

Why organizations struggle to create asset visibility

There are two primary reasons why organizations struggle to answer basic questions about their assets to maintain cyber hygiene.

1. Endpoint discovery has become a constantly moving target. 

Not every endpoint on a network is a desktop computer, laptop, or server. There are printers, phones, tablets, and a growing number of consumer and industrial internet of things (IoT) devices. Mobile device management (MDM) is a growing application field. 

But why should you have to worry about a consumer IoT device compromising the corporate network? Consider an employee working from home whose company security team receives alerts that someone is trying to break into her laptop. The source is a malware-infected refrigerator scanning her home network and trying to get into her device, which was temporarily on the corporate network. The same thing could occur with a smart light switch, thermostat, security camera, you name it.

Every device type can create operational and/or security risks, and the number of these types will only continue to increase in the coming years. 

2. Legacy tools struggle to create visibility in this new environment. 

Asset discovery tools built 10 years ago preceded many of the things modern IT environments operate with daily. Two examples: containers and hybrid clouds. 

These tools can’t handle the rate of change we see now. Yet organizations often remain attached to the tools they’re comfortable with, many of which are not easy to use. They may take pride in mastering hard-to-use tools. Maybe they wrote custom scripts to make them work more effectively. 

The unintended—and unfortunate—consequence is IT policies and processes crafted not because they’re the best way to address an issue, but because they fit the capabilities of the tools in use. It’s the IT version of “when all you have is a hammer, everything looks like a nail,” with policies that amount to “we must nail things.” Entrenched tools become part of the IT ecosystem, but the best IT policies are tool-agnostic. A tool built in 1993 or 2010 can’t offer that flexibility.

Next step: zero trust

Cyber hygiene is just the first step toward creating a more secure organization. The right asset visibility capability will also lay the foundation for nearly any zero-trust strategy or solution you choose to bring to life. 

When everything is a network device, everything is a potential security vulnerability. You need policies and procedures that break endpoints into three categories: managed, unmanaged, and unmanageable. 
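One way to make those three buckets concrete is a simple classifier. The fields and device examples below are hypothetical; a real implementation would draw on inventory, MDM and agent enrollment data:

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    hostname: str
    has_agent: bool        # is an endpoint-management agent installed?
    can_run_agent: bool    # can the device run one? (printers, IoT often can't)

def classify(ep: Endpoint) -> str:
    """Bucket an endpoint: managed, unmanaged, or unmanageable."""
    if ep.has_agent:
        return "managed"
    if ep.can_run_agent:
        return "unmanaged"      # capable, but not yet enrolled
    return "unmanageable"       # needs network-level controls instead

fleet = [
    Endpoint("laptop-042", has_agent=True, can_run_agent=True),
    Endpoint("build-srv-7", has_agent=False, can_run_agent=True),
    Endpoint("lobby-printer", has_agent=False, can_run_agent=False),
]
# [classify(e) for e in fleet] → ['managed', 'unmanaged', 'unmanageable']
```

The point of the third bucket is that "unmanageable" does not mean "ignorable": those devices get segmentation and monitoring rather than agents.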

Endpoint discovery is the first crucial step in the trend toward zero-trust solutions. CSO Online describes zero trust as “a security concept centered on the belief that organizations should not automatically trust anything inside or outside its perimeters and instead must verify anything and everything trying to connect to its systems before granting access.”

Threat response and remediation tools are only as good as the breadth of endpoints they’re running on. And with the endpoint acting as the new perimeter, endpoint discovery really is where cyber hygiene and security begin. Implementing a zero-trust practice thus becomes the next meaningful step on that journey.



Many Australian enterprises are getting their cloud security strategies wrong. While they are lowering infrastructure costs and introducing efficiencies by moving to flexible multi-cloud platforms, building the right level of security throughout their agile software development lifecycles is becoming difficult.

Almost two-thirds (61 per cent) of respondents to a survey conducted by Cybersecurity Insiders on behalf of Check Point have integrated their DevOps toolchain into cloud deployments, but are still struggling with a lack of expertise that bridges security and DevOps. Only 16 per cent have comprehensive DevSecOps environments in place.

Senior technology executives gathered for a roundtable luncheon in Sydney recently to discuss why enterprises are often getting their cloud adoption strategies wrong, particularly when it comes to securing their infrastructure, as well as challenges around cloud compliance. The conversation, ‘Cloud tales: Lessons from a cyber incident response team’ was sponsored by Check Point Software Technologies.


Ashwin Ram, cyber security evangelist Office of the Chief Technology Officer at Check Point Software Technologies, says there are multiple factors at play when it comes to getting cloud strategies right.

Firstly, many organisations don’t understand or appreciate how dynamic cloud ecosystems are – a simple misconfiguration or security oversight can expose an organisation, he says.

“Cloud providers are innovating extremely rapidly and as such, it is difficult for cloud security teams to keep pace. The current cyber skills shortage is also a contributing factor as organisations struggle to find the right expertise to address the steep learning curve to bridge security and DevOps,” Ram says.

Further, he says, COVID-19 forced many organisations to rush their remote working and cloud projects in order to be more agile. This has resulted in many cloud projects being rushed through without proper assurance processes.

“According to Check Point’s Cloud Security Report 2022, 76 per cent of organisations have a multi-cloud strategy, which makes it difficult to implement consistent security. Organisations are struggling to implement the same security settings and policies on all clouds and ensure this is maintained to provide continuous consistency,” he says.

John Powell, principal security consultant at Telstra Purple, adds that it’s very easy to think of the cloud as reducing administration and providing more flexibility.

“But the truth is that there is a lot more to get right up front so that ‘business-as-usual’ is smooth as well as secure. The responsibility for security is shared according to what is outsourced to the cloud provider.

“This means that contractual arrangements are extremely important to make sure the boundary in the shared model is crystal clear. The need for legal expertise and even a cyber/legal mix of expertise is not often considered when moving systems and services to the cloud,” Powell says.

Meanwhile, John Boyd, group chief information officer at The Entertainment and Education Group (TEEG), says the organisation has adopted a hybrid cloud approach, which has provided the best of both worlds.

On-premise infrastructure provides stability for its venues, especially those in very remote locations. But when the business demands agility, the organisation turns to the cloud to meet these demand requirements, Boyd says.

“As for security, our team are testing at every stage of the software development lifecycle. Security is always at the forefront of our team’s mind and during application development, we adopt best practices such as educating staff, and outlining requirements clearly so [they] can focus on the most important issues,” he says.

Why cloud misconfigurations happen and what to do

The misconfiguration of cloud resources remains the most prevalent cloud vulnerability that can be exploited by criminals to access cloud data and services.

Check Point’s Ram says these misconfigurations happen because cloud teams are pushing out incredible amounts of code and building infrastructure at a rapid pace so mistakes are bound to happen.

Ram says that organisations with mature cloud security capabilities are using cloud security posture management tools to gain situational awareness of their cloud ecosystems in real time and automatically remediate misconfigurations.

“In addition to misconfiguration, organisations should also be aware of identity and access management role assumption attacks, which look to elevate privileges after initial entry. These attacks continue to be a significant concern,” he says.

Ram recommends that organisations invest in a tool that can visualise and assess cloud security posture by detecting misconfigurations, while automatically and actively enforcing gold standard policies to protect against attacks and insider threats.
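A posture-management check of the kind Ram describes boils down to comparing live configuration against a gold-standard baseline and enforcing the baseline on drift. This is a vendor-neutral sketch; the resource dictionaries and policy keys are made up for illustration:

```python
# Gold-standard policy: each rule maps a config key to its required value.
GOLD_STANDARD = {
    "public_access": False,
    "encryption_at_rest": True,
    "logging_enabled": True,
}

def find_misconfigurations(resource: dict) -> list[str]:
    """Compare a resource's config against the baseline; return violations."""
    return [
        f"{key}: expected {wanted!r}, found {resource.get(key)!r}"
        for key, wanted in GOLD_STANDARD.items()
        if resource.get(key) != wanted
    ]

def remediate(resource: dict) -> dict:
    """Actively enforce the baseline (the auto-remediation a CSPM tool applies)."""
    return {**resource, **GOLD_STANDARD}

bucket = {"name": "billing-exports", "public_access": True, "encryption_at_rest": True}
issues = find_misconfigurations(bucket)
# issues → ["public_access: expected False, found True",
#           "logging_enabled: expected True, found None"]
```

Note that a missing setting (here, logging) is flagged just like a wrong one; in real clouds, absent configuration is often the riskier case.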

Telstra’s Powell adds that exploiting the poor configuration of cloud resources is often much easier than exploiting software or hardware vulnerabilities or running a phishing campaign against privileged users.

“Misconfiguration is the most prevalent cloud vulnerability because it is often the lowest hanging fruit,” he says.

According to Powell, the configuration of cloud environments provides several technical security controls. He says that measuring technical security controls is best achieved by using technology tools.

“To this end, a cloud security assessment, with an associated tool, can be used to achieve this goal either as a one-off or, better still, as a regular check.”

TEEG’s Boyd says that the organisation’s resources are hosted exclusively within Azure and the team use Microsoft Cloud Service to proactively manage the security posture of the entire platform.

“Conducting regular assessments and reviewing any new recommendations help to strengthen the security configuration of our cloud resources,” he says.

Getting cloud compliance right

The ongoing technology skills shortage has made it difficult for organisations to find the right staff with skills to complete cloud-related audits and risk assessments.

Telstra’s Powell suggests that, first up, organisations should “let machines do what they are good at and let people do what people are good at.”

“Technology controls can be tested and assessed with technical solutions and if this process is automated, then the compliance of the technical controls can be checked with high regularity so that any movement away from compliance is noticed and amended quickly.

“Assessing the actions of people or the flow of process is best assessed by a skilled security auditor and when human resources are scarce, they need to be used where they are most effective,” Powell says.

Secondly, if enterprises don’t have resources available internally to audit security controls or to design and build monitoring systems required to constantly test and assess these controls, then they should reach out to a partner, he says.

“It’s very difficult to retain specialised cyber security skills, so rather than continuing to train new cyber security staff, rely on the people who are already specialists and can provide that service,” he says.

TEEG’s Boyd says that compliance is an ongoing focus for his team, which is operating a business in seven regions, all with their own set of unique regulatory requirements. This requires the organisation to be aligned on its approach to compliance and execution.

“We rely on the expertise of our internal team in conjunction with key vendors that provide us with subject matter advice on risk assessment and establishment of clear policies and controls,” Boyd says.

Who is responsible when a breach occurs?

Attendees at the roundtable also discussed what enterprises need to be aware of when negotiating cloud contracts, particularly who is responsible for what when a breach does occur.

Telstra’s Powell says organisations need to make sure that the clauses of a contract with a cloud service provider define the scope of what the provider is responsible for and what they are not.

Powell adds that this doesn’t apply only to a breach situation, but to everything that goes before a breach and the recovery from the breach.

“Be sure to include a clause of what can and can’t be tested within the cloud environment. Ask, ‘can we view the cloud service provider’s threat profile, risk assessments and risk register?’

“Most importantly, you cannot outsource accountability so don’t be too quick to believe that your risk is reduced because you are not responsible for the infrastructure that underpins your systems and services.”

Check Point’s Ram adds that most organisations would do well to understand the shared responsibility model as a first step.

“It’s important to note that the responsibility changes depending on the type of cloud resource you consume from infrastructure-as-a-service to platform-as-a-service to software-as-a-service offerings.

“The shared responsibility model is very specific on who is responsible for what as we saw with the Capital One breach.”

Cloud Architecture

Pandemic-era ransomware attacks have highlighted the need for robust cybersecurity safeguards. Now, leading organizations are going further, embracing a cyberresilience paradigm designed to bring agility to incident response while ensuring sustainable business operations, whatever the event or impact.

Cyberresilience, as defined by the Ponemon Institute, is an enterprise’s capacity for maintaining its core business in the face of cyberattacks. NIST defines cyberresilience as “the ability to anticipate, withstand, recover from, and adapt to adverse conditions, stresses, attacks, or compromises on systems that use or are enabled by cyber resources.”

The practice brings together formerly separate disciplines of information security, business continuity, and disaster response (BC/DR) deployed to meet common goals. Although traditional cybersecurity practices were designed to keep cybercriminals out and BC/DR focused on recoverability, cyberresilience aligns the strategies, tactics, and planning of these traditionally siloed disciplines. The goal: a more holistic approach than what’s possible by addressing each individually.

At the same time, improving cyberresilience challenges organizations to think differently about their approach to cybersecurity. Instead of focusing efforts solely on protection, enterprises must assume that cyberevents will occur. Adopting practices and frameworks designed to sustain IT capabilities as well as system-wide business operations is essential.

“The traditional approach to cybersecurity was about having a good lock on the front door and locks on all the windows, with the idea that if my security controls were strong enough, it would keep hackers out,” says Simon Leech, HPE’s deputy director, Global Security Center of Excellence. Pandemic-era changes, including the shift to remote work and accelerated use of cloud, coupled with new and evolving threat vectors, mean that traditional approaches are no longer sufficient.

“Cyberresilience is about being able to anticipate an unforeseen event, withstand that event, recover, and adapt to what we’ve learned,” Leech says. “What cyberresilience really focuses us on is protecting critical services so we can deal with business risks in the most effective way. It’s about making sure there are regular test exercises that ensure that the data backup is going to be useful if worse comes to worst.”

A Cyberresilience Road Map

With a risk-based approach to cyberresilience, organizations evolve practices and design security to be business-aware. The first step is to perform a holistic risk assessment across the IT estate to understand where risk exists and to identify and prioritize the most critical systems based on business intelligence. “The only way to ensure 100% security is to give business users the confidence they can perform business securely and allow them to take risks, but do so in a secure manner,” Leech explains.

Adopting a cybersecurity architecture that embraces modern constructs such as zero trust and that incorporates agile concepts such as continuous improvement is another requisite. It is also necessary to formulate and institute time-tested incident response plans that detail the roles and responsibilities of all stakeholders, so they are adequately prepared to respond to a cyberincident.

Leech outlines several other recommended actions:

Be a partner to the business. IT needs to fully understand business requirements and work in conjunction with key business stakeholders, not serve primarily as a cybersecurity enforcer. “Enable the business to take risk; don’t prevent them from being efficient,” he advises.

Remember that preparation is everything. Cyberresilience teams need to evaluate existing architecture documentation and assess the environment, either by scanning the environment for vulnerabilities, performing penetration tests, or running tabletop exercises. This checks that systems have the appropriate levels of protections to remain operational in the event of a cyberincident. As part of this exercise, organizations need to prepare adequate response plans and enforce the requisite best practices to bring the business back online.

Shore up a data protection strategy. Different applications have different recovery-time-objective (RTO) and recovery-point-objective (RPO) requirements, both of which will impact backup and cyberresilience strategies. “It’s not a one-size-fits-all approach,” Leech says. “Organizations can’t just think about backup but [also about] how to do recovery as well. It’s about making sure you have the right strategy for the right application.”
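The RPO/RTO point can be made concrete with a small sketch. The tiers, timings and backup strategies below are illustrative assumptions for a hypothetical estate, not HPE recommendations; real values come from a business impact analysis:

```python
# Illustrative tiers: (rpo_minutes, rto_minutes, strategy).
TIERS = {
    "critical": (5,    60,   "continuous replication + journal-based recovery"),
    "standard": (240,  480,  "4-hourly snapshots, automated restore"),
    "archive":  (1440, 2880, "daily backup to object storage"),
}

def backup_interval_minutes(tier: str) -> int:
    """Backups must run at least as often as the RPO permits data loss."""
    rpo, _rto, _strategy = TIERS[tier]
    return rpo

def plan(app: str, tier: str) -> str:
    """Summarize the protection plan implied by an application's tier."""
    rpo, rto, strategy = TIERS[tier]
    return f"{app}: RPO {rpo} min, RTO {rto} min -> {strategy}"

# plan("payments-api", "critical")
```

Mapping each application to a tier like this is what keeps the strategy from collapsing into the one-size-fits-all approach Leech warns against.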

The HPE GreenLake Advantage

The HPE GreenLake edge-to-cloud platform is designed with zero-trust principles and scalable security as a cornerstone of its architecture. The platform leverages common security building blocks, from silicon to the cloud, to continuously protect infrastructure, workloads, and data while adapting to increasingly complex threats.

HPE GreenLake for Data Protection delivers a family of services that reduces cybersecurity risks across distributed multicloud environments, helping prevent ransomware attacks, ensure recovery from disruption, and protect data and virtual machine (VM) workloads across on-premises and hybrid cloud environments. As part of the HPE GreenLake for Data Protection portfolio, HPE offers access to next-generation as-a-service data protection cloud services, including a disaster recovery service based on Zerto and HPE Backup and Recovery Service. This offering enables customers to easily manage hybrid cloud backup through a SaaS console along with providing policy-based orchestration and automation functionality.

To help organizations transition from traditional cybersecurity to more robust and holistic cyberresilience practices, HPE’s cybersecurity consulting team offers a variety of advisory and professional services. Among them are access to workshops, road maps, and architectural design advisory services, all focused on promoting organizational resilience and delivering on zero-trust security practices.

HPE GreenLake for Data Protection also aids in the cyberresilience journey because it removes up-front costs and overprovisioning risks. “Because you’re paying for use, HPE GreenLake for Data Protection will scale with the business and you don’t have to worry [about whether] you have enough backup capacity to deal with an application that is growing at a rate that wasn’t forecasted,” Leech says.


Cloud Security

In 2020, research found that nearly 90% of CISOs considered themselves under moderate or high levels of stress. Similarly, a 2021 survey by ClubCISO revealed that stress levels significantly increased among 21% of respondents over the last 12 months, adding to mental health issues.


Two years on from the start of the pandemic, stress levels among tech and security executives are still elevated as global skills shortages, budget limitations and an ever-faster-moving, ever-expanding threat landscape test resilience. “In every cyber security team I’ve worked in, stress management is a common concern,” says Vodacom group managing executive for cyber security, Kerissa Varma. “Some manage this better than others, but one of the most common questions I get asked about my job is how I’ve done it for so long, considering everything that it involves.”

Helen Constantinides, CIO at AVBOB Mutual Assurance Society, also understands these cyber stress and burnout trends all too well. “We need to remember that it’s not just about technology,” she says. “It involves people too.”

According to CIISec’s 2020/21 State of the Profession report, which surveyed 557 security professionals, stress and burnout have become major issues, with almost half (47%) working more than 41 hours a week, and some up to 90.

So what can CIOs do to mitigate against the long hours, heavy workloads and uncertainty in understaffed and underfunded environments? The experts share their four top tips below. 

1. Encourage your teams to slow things down

Seeing that hackers don’t work 9 to 5, IT and information security professionals generally don’t get enough rest, says Itumeleng Makgati, group information security executive at Standard Bank. “Our roles require us to be alert, productive and energized,” she says. “You can’t do all this if you don’t get enough rest.” She adds that CIOs must be deliberate about helping people pause, take breaks and recharge; it may sound counter-intuitive, but greater demands require greater effort to look after mental health. This can take the form of hosting team events, meet-ups or simply enabling staff to take personal time off during down cycles. “I try to have in-person meetings as ‘walking meetings’ in a nearby park, which ensures that I get my daily nature fix and also stimulates creative thoughts,” says Anna Collard, SVP content strategy and evangelist at KnowBe4 Africa, the world’s largest security awareness training and simulated phishing platform.


2. Encourage collaboration

Look to extend and complement your team by bringing in trusted partners like managed security services, recommends Constantinides. “It’s about collaborating locally and globally to create new thinking, expanding the talent pool and coming at things a little bit differently,” she says. As part of this, CIOs must ensure the right technologies are in place to protect their most critical vulnerabilities, and assess, rank and respond to risks in real time to alleviate stress across IT teams. Automation can help too considering the skills shortage burden for under-resourced teams, says Varma. “Automation is a great enabler to use limited resources in areas that add the biggest benefit,” she says. “It also greatly improves staff morale, as they are able to focus on more interesting work.”

3. Discourage multitasking

According to Makgati, CIOs and IT leaders need to encourage their teams to embrace “monotasking.” Clear, one-at-a-time task prioritization and defining milestones that don’t overlap can help teams minimize stress. Avoiding the trap of mistaking the urgent for the important is also a great way to mitigate unnecessary stress, she says.


And according to Collard, multitasking and not being fully present actually makes a business more susceptible to social engineering. “I realised this when I failed one of our internal phishing simulation tests,” she says. “I fell for the phishing email, not because I didn’t know the dangers of social engineering or because I didn’t know how to spot red flags, but because I was distracted. I was multi-tasking and slightly anxious in that moment.” It’s critical for leaders to communicate what the most important items that need to be delivered are, says Varma.


Failing to do so can cause confusion and lead to teams skimming the surface in a number of areas but never truly resolving things effectively. “Be clear to your teams and business on what you’re prioritizing within a time frame,” she says. “This is critical to allow your team to focus and execute in the fastest manner possible and for your business to understand any potential risks.”

4. Exercise empathy and compassion

“Having the right cyber thinking and decision making in a board room can have immense impact on preventing stressful situations down the road,” says Varma. Collard adds that building a security culture is more about human psychology and behavioral science than technology. So CIOs and IT leaders must understand people’s motivations, expectations and struggles, and create a support mechanism to maximize individual and team potential. “It’s clear that we’re all going through a lot and a little understanding will go a long way in helping our teams feel supported,” says Makgati.

Change Management, Identity Management Solutions

Cyber hygiene describes a set of practices, behaviors and tools designed to keep the entire IT environment healthy and at peak performance—and more importantly, it is a critical line of defense. Your cyber hygiene tools, as with all other IT tools, should fit the purpose for which they’re intended, but ideally should deliver the scale, speed, and simplicity you need to keep your IT environment clean.

What works best is dependent on the organization. A Fortune 100 company will have a much bigger IT group than a firm with 1,000 employees, hence the emphasis on scalability. Conversely, a smaller company with a lean IT team would prioritize simplicity.

It’s also important to classify your systems. Which ones are business critical? And which ones are external versus internal facing? External facing systems will be subject to greater scrutiny.

In many cases, budget or habit will prevent you from updating certain tools. If you’re stuck with a tool you can’t get rid of, you need to understand how your ideal workflow can be supported. Any platform or tool can be evaluated against the scale, speed and simplicity criteria.

An anecdote about scale, speed and complexity

Imagine a large telecom company with millions of customers and a presence in nearly every business and consumer-facing digital service imaginable. If your organization is offering an IT tool or platform to customers like that, no question you’d love to get your foot in the door.

But look at it from the perspective of the telecom company. No tool they’ve ever purchased can handle the scale of their business. They’re always having to apply their existing tools to a subset of a subset of a subset of their environment. 

Any tool can look great when it’s dealing with 200 systems. But when you get to the enterprise size, those three pillars are even more important. The tool must work at the scale, speed, and simplicity that meets your needs.

The danger of complacency

With all the thought leadership put into IT operations and security best practices, why is it that many organizations are content with having only 75% visibility into their endpoint environment? Or 75% of endpoints under management? 

It’s because they’ve accepted failure as built into the tools and processes they’ve used over the years. If an organization wants to stick with the tools it has, it must:

Realize their flaws and limitations
Measure them on the scale, speed and simplicity criteria
Determine the headcount required to do things properly

Organizations cannot remain attached to the way they’ve always done things. Technology changes too fast. The cliché of “future proof” is misleading. There’s no future proof. There’s only future adaptable.

Old data lies

To stay with the three criteria of strong cyber hygiene—scale, speed and simplicity—nothing is more critical than the currency of your data. Any software or practice that supports making decisions on old data should be suspect. 

Analytics help IT and security teams make better decisions. When they don’t, the reason is usually a lack of quality data. And the quality issue is often around data freshness. In IT, old data is almost never accurate. So decisions based on it are very likely to be wrong. Regardless of the data set, whether it’s about patching, compliance, device configuration, vulnerabilities or threats, old data is unreliable.

The old data problem is compounded by the number of systems a typical large organization relies on today. Many tools still in use were built for a decades-old IT environment that no longer exists. Nevertheless, tools are now available that deliver real-time data for IT analytics.
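A freshness gate illustrates the principle: refuse to treat data as decision-grade once it exceeds an age budget. The data sets and maximum ages below are hypothetical; each organization would set its own thresholds:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative freshness budgets: how old each data set may be before
# decisions based on it should be treated as suspect.
MAX_AGE = {
    "vulnerabilities": timedelta(hours=1),
    "patch_status": timedelta(hours=4),
    "device_config": timedelta(days=1),
}

def is_fresh(dataset: str, collected_at: datetime,
             now: Optional[datetime] = None) -> bool:
    """True if the data is still within its freshness budget."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at <= MAX_AGE[dataset]

now = datetime.now(timezone.utc)
is_fresh("vulnerabilities", now - timedelta(minutes=30))  # within budget
is_fresh("vulnerabilities", now - timedelta(days=2))      # stale
```

Wiring a gate like this into dashboards and reports makes stale data visibly stale instead of silently misleading.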

IT hygiene and network data capacity

Whether you’re a 1,000-endpoint or 100,000-endpoint organization, streaming huge quantities of real-time data will require network bandwidth to carry it. You may not have the infrastructure to handle real-time data from every system you’re operating. So, focus on the basics. 

That means you need to understand and identify the core business services and applications that are most in need of fresh data. Those are the services that keep a business running. With that data, you can see what your IT operations and security posture look like for those systems. Prioritize. Use what you have wisely.
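A back-of-envelope estimate helps with that prioritization. The sketch below computes the sustained bandwidth needed to stream endpoint telemetry; the endpoint counts, event rates and event sizes are illustrative assumptions:

```python
def telemetry_bandwidth_mbps(endpoints: int, events_per_min: float,
                             bytes_per_event: int) -> float:
    """Rough sustained bandwidth (Mbps) to stream endpoint telemetry."""
    bits_per_sec = endpoints * (events_per_min / 60) * bytes_per_event * 8
    return bits_per_sec / 1_000_000

# 100,000 endpoints, 30 events/min, 500 bytes each:
# telemetry_bandwidth_mbps(100_000, 30, 500) → 200.0 Mbps sustained
```

Running the numbers for core services first shows whether the network can carry real-time data for everything or whether, as suggested above, coverage has to be rationed.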

To simplify gathering the right data, streamline workflows

Once you’ve identified your core services, getting back to basics means streamlining workflows. Most organizations are in the mindset of “my tools dictate my workflow.” And that’s backward.

You want a high-performance network that has low vulnerability and strong threat response.  You want tools that can service your core systems, do efficient patching, perform antivirus protection and manage recovery should there be a breach. That’s what your tooling should support. Your workflows should help you weed out the tools that are not a good operational fit for your business.

Looking ahead

It’s clear the “new normal” will consist of remote, on-premises, and hybrid workforces. IT teams now have the experience to determine how to update and align processes and infrastructure without additional disruption.

Part of this evaluation process will center on the evaluation and procurement of tools that provide the scale, speed and simplicity necessary to manage operations in a hyper converged world while:

Maintaining superior IT hygiene as a foundational best practice
Assessing risk posture to inform technology and operational decisions
Strengthening cybersecurity programs without impeding worker productivity



These are challenging times to be a CIO. A few months ago, it was all talk of digital transformation to drive post-pandemic business recovery. Now the goalposts have shifted thanks to rising inflation, geopolitical uncertainty and the Great Resignation. Meeting these challenges requires IT leaders to ruthlessly prioritize: taking action to mitigate escalating cyber and compliance risks by managing their attack surface more effectively amid continued skills shortages.

For many, the key lies in choosing the right platform to drive visibility and control across the endpoint estate.

The ever-growing attack surface 

That pandemic-era digital spending was certainly necessary to support hybrid working, drive process efficiencies and create new customer experiences. But it also left behind an unwelcome legacy as corporate attack surfaces expanded significantly.

An explosion in potentially unmanaged home working endpoints and distributed cloud assets has added opacity at a time when CIOs desperately need visibility. Two-fifths of global organizations admit that their digital attack surface is “spiraling out of control.” Some organizations also exacerbate their challenges in this regard by rushing products to market, incurring heavy technical debt in the process.

Attack surface challenges are especially acute in industries like manufacturing, which became the most targeted sector in 2021. The convergence of IT and OT in smart factories is helping these organizations to become more efficient and productive, but it’s also exposing them to increased risk as legacy equipment is made to be connected. 

Nearly half (47%) of all attacks on the sector last year were caused by vulnerabilities that the victim had yet to or could not patch. Like their counterparts in almost every sector, manufacturing CIOs are also kept awake at night by supply chain risk. An October 2021 report claimed that 93% of global organizations have suffered a direct breach due to weaknesses in their supply chains over the previous year.

Managing this risk effectively will require rigorous and continuous third-party auditing based on asset visibility and best practice cyber hygiene checks. The same approach can also help drive visibility at a time when supply chains are still under tremendous strain from the continued impact of COVID-19 in Asia and new geopolitical uncertainty.

Threat actors are ruthlessly exploiting visibility and control gaps wherever they can find them, most notably via ransomware. The average ransom payment rose 78% year-on-year in 2021, with some vendors detecting a record-breaking volume of attacks. Most are down to a combination of phishing, exploited software vulnerabilities, and misconfigured endpoints, particularly RDP servers left exposed without strong authentication.

Missing talent

In fact, misconfiguration is one of the biggest sources of cyber risk today, perpetuated by talent shortages and digital transformation, the latter creating new and complex IT environments that are more challenging to manage securely. The talent shortfall cuts across multiple sectors and is most acute in cyber, with a gap of over 2.7 million professionals globally, including 402,000 in North America. The Great Resignation and workplace stress continue to take their toll: nearly two-thirds (64%) of SOC analysts say they’ll change jobs next year.

With talent in such short supply and commanding such a high price, it becomes even more important to deploy it as efficiently as possible. Technology should be the CIO’s friend, yet a proliferation of IT and security point solutions is undermining productivity, not enhancing it. Our research shows that the average organization runs over 40 discrete IT security and management tools. They not only add licensing costs and significant administrative overheads but can also create visibility gaps that threat actors are primed to exploit.

Tool bloat is even more likely in the public sector, where CIOs often lack a common security governance framework to guide purchasing strategies. Government IT leaders are also weighed down by the significant financial burden of license underutilization, as they often lack the ability to discover, manage and measure their software assets.

The regulatory landscape continues to evolve

As if these challenges weren’t enough, CIOs must also prioritize compliance risk management. The EU’s GDPR set in motion a domino effect of copycat legislation around the world, which has raised the stakes for corporate data protection and privacy. But the landscape is also shifting in other ways. 

No longer is regulation solely for large organizations in the healthcare, manufacturing or financial services sectors. New rules and policies are being drawn up and older ones are expanding in scope. The FTC's Safeguards Rule, once the preserve of traditional financial institutions, will apply to virtually all businesses that extend credit beginning in December 2022. That means organizations as diverse as car dealerships, furniture sellers and retail stores will need to become compliant or face potentially significant financial consequences.

Start with visibility and control

As CIOs look to prioritize while economic headwinds gather strength, managing IT risk becomes even more critical. This is where best practice cyber hygiene can play an important role. It sounds simple in theory but can be challenging to achieve in practice.

Cyber hygiene is built on comprehensive visibility of the endpoint IT estate. That means knowing every endpoint the organization is running and what is running on those endpoints at all times—whether it's an on-prem server, a cloud container, a virtual machine or a home working laptop.

It’s especially challenging, and critical, in dynamic and ephemeral cloud environments, which change second by second. Once this visibility has been achieved, organizations need technology that empowers them to run continuous scans and automated remediation activities to find and fix any vulnerabilities or misconfigurations—and to rapidly detect and investigate emerging threats.
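The "visibility, then continuous scan, then remediation" loop described above can be sketched in a few lines. Everything here is a hypothetical placeholder: the inventory shape, the vulnerable-version list and the remediation hook would all come from real asset-management and vulnerability-feed tooling in practice.

```python
# Minimal sketch of the visibility -> continuous scan -> remediation loop.
# The inventory format and vulnerability list are illustrative assumptions.
KNOWN_VULNERABLE = {("openssl", "1.1.1a"), ("log4j", "2.14.1")}

def scan(inventory):
    """Return (endpoint, package, version) tuples that need fixing."""
    findings = []
    for endpoint, packages in inventory.items():
        for name, version in packages:
            if (name, version) in KNOWN_VULNERABLE:
                findings.append((endpoint, name, version))
    return findings

def remediate(findings):
    """Placeholder: in practice each finding would queue a patch job."""
    return [f"patch {name} {version} on {endpoint}"
            for endpoint, name, version in findings]
```

In a real deployment this loop runs continuously, with the inventory refreshed from the endpoints themselves rather than from a static snapshot—otherwise ephemeral cloud workloads slip through between scans.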

This endpoint insight will not just help to mitigate risk but also optimize software license utilization and enhance regulatory compliance. Delivered from a single platform, it should help stretched IT teams do more with less and maximize their productivity. 

The hard work starts now.

