Tips to Harness AI to Bolster Organisational Defences


Written by Neil Lappage, Security Advisor, ITC Secure and member of ISACA Emerging Trends Working Group.

The landscape of generative AI adoption in organisations is both promising and fraught with challenges, as employees increasingly embrace the technology’s allure and potential. A new survey from global digital trust association ISACA, Generative AI: The Risks, Opportunities and Outlook, shows 63% of employees in ANZ are using generative AI even though only 36% of organisations expressly permit its use.

And even fewer companies have official policies in place: only 11% report a formal, comprehensive policy, while 21% report that no policy exists at all. It’s clear that employees see value in AI for tasks like content creation, improving productivity and decision-making; however, this unsanctioned use also hints at a lack of clarity in how to proceed.

The ethical considerations surrounding AI are another area of concern. Only 38% of ANZ respondents indicate adequate attention is being given to ethical standards, which raises questions about the preparedness of organisations in navigating the complex moral landscape of AI. The high percentages of respondents worried about misinformation, privacy breaches and other risks further underscore the need for a more robust ethical framework.

Training, or the lack thereof, is perhaps the most alarming revelation in ISACA’s survey. In an age where AI platforms are becoming increasingly accessible and popular, the fact that the majority of digital trust professionals (57% in ANZ) report that no AI training is provided by their organisation is a glaring gap. This lack of knowledge can lead to misuse or misunderstanding, which can have far-reaching consequences.

Lastly, the perception that cybercriminals are leveraging AI more effectively than digital trust professionals, by nearly a 3-to-1 margin in the ISACA survey, is a sobering thought. It’s a stark reminder that while AI can be a force for good, it can also be weaponised if not approached with caution.

In my view, while the survey highlights the transformative potential of generative AI, it also underscores the urgent need for organisations to address the challenges head-on.  

In today’s hyper-connected world, where cyber threats are constantly evolving, it’s crucial for organisations to bolster their defences. Here are some key strategies: 

  • Advanced Threat Detection: A digital trust professional can harness AI algorithms to enhance their organisation’s threat detection capabilities. Machine learning models can analyse vast amounts of data to identify anomalies and potential security breaches in real time. This proactive approach allows organisations to stay one step ahead of cybercriminals.
  • Behavioural Analysis: AI can be used to create user behaviour profiles within an organisation’s network. By monitoring deviations from these profiles, cybersecurity teams can quickly detect suspicious activities and potential insider threats. AI-driven behavioural analysis provides a proactive defence against internal risks.
  • Automated Incident Response: AI-powered tools can automate incident response processes. This includes isolating compromised systems, gathering forensic evidence, and initiating remediation measures. By reducing response time, organisations can minimise the impact of security incidents. 
  • Continuous Learning: To effectively combat cybercriminals, digital trust professionals must stay updated with the latest AI-driven attack methods. Regular training and workshops focused on AI in cybersecurity can bridge the knowledge gap and empower professionals to counter evolving threats effectively.
  • Collaborative Information Sharing: Establishing collaborative platforms and networks for sharing threat intelligence among organisations can help create a united front against cybercriminals. AI can play a vital role in processing and analysing shared data to identify global threat patterns.
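
To make the first three strategies concrete, here is a minimal sketch of the idea behind behavioural anomaly detection feeding an automated response. It is illustrative only: the metric (daily data transfer per user), the 3-sigma threshold, the `respond` action and the user name are all assumptions, not part of any specific product; a production system would use richer features, a trained model and an orchestrated response playbook.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Summarise a user's historical behaviour as (mean, std dev)
    of a single metric, e.g. MB transferred per day."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

def respond(user, value):
    """Hypothetical automated response: a real playbook might isolate
    the host, revoke sessions and open an incident ticket."""
    return f"ALERT: {user} transferred {value} MB; session quarantined"

# 30 days of a user's typical daily transfer volume (MB) -- illustrative data
history = [48, 52, 50, 47, 55, 51, 49, 53, 50, 46,
           54, 52, 48, 50, 51, 49, 47, 53, 55, 50,
           52, 48, 51, 49, 50, 54, 46, 52, 53, 50]
baseline = build_baseline(history)

today = 480  # sudden ~10x spike -- a possible exfiltration signal
if is_anomalous(today, baseline):
    print(respond("j.smith", today))
```

The same pattern generalises: replace the single hand-rolled statistic with a model trained on many behavioural features, and replace the print statement with calls into an incident response platform.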

By adopting these strategies, digital trust and cybersecurity professionals can enhance their ability to leverage AI effectively. This not only levels the playing field but also strengthens the overall security posture of the organisation.