This report analyzes recent trends in genAI application usage, data policy violations, and malware distribution across Australian organizations. As generative AI becomes more embedded in daily operations and cloud application adoption increases, these trends highlight significant challenges in an evolving cybersecurity landscape.
GenAI usage: AI adoption in Australia continues to grow, with ChatGPT, Gemini, and Copilot leading in usage. Personal account use spiked late last year but is now falling as organizations shift to safer, approved platforms. Data loss prevention (DLP) policies are increasingly deployed to reduce data leaks, especially of source code and intellectual property.
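As an illustration of the kind of rule a DLP policy might enforce on prompts sent to genAI apps, the sketch below flags outbound text that resembles source code or credentials. The patterns and category names are hypothetical simplifications, not any vendor's actual ruleset; production DLP engines use far richer fingerprinting and classification.

```python
import re

# Hypothetical indicators of source code or secrets in outbound text.
PATTERNS = {
    "source_code": re.compile(r"\b(def |class |import |#include |public static)\b"),
    "api_key": re.compile(r"\b[A-Za-z0-9_\-]{32,}\b"),
    "private_key": re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),
}

def dlp_flags(text: str) -> list[str]:
    """Return the names of all patterns that match the given text."""
    return [name for name, pat in PATTERNS.items() if pat.search(text)]

if __name__ == "__main__":
    prompt = "def train(model):\n    pass  # pasted into a chatbot"
    print(dlp_flags(prompt))  # flags the source_code pattern
```

A real policy would then block the upload, coach the user, or log the violation rather than simply report a match.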
Agentic AI and custom apps: Organizations are moving toward privacy-first genAI setups built on Azure OpenAI, Amazon Bedrock, and on-premises tools such as Ollama. Custom agents and interfaces are gaining traction, offering better control and flexibility over data handling but introducing new shadow AI risks.
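To keep prompts on-premises, a custom interface can route requests to a local Ollama server instead of a public API. The sketch below builds a request for Ollama's `/api/generate` endpoint on its default port; the model name is an example, and actually sending the request assumes a local Ollama instance is running.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a POST request for a local Ollama server; no data leaves the host."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Sending requires a running Ollama instance:
#   with urllib.request.urlopen(build_request("Summarize this policy.")) as resp:
#       print(json.loads(resp.read())["response"])
```

The trade-off mirrored in the report: routing through a local endpoint keeps data in-house, but unsanctioned local deployments are themselves a form of shadow AI.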
Phishing threats: Phishing campaigns are evolving, often mimicking trusted cloud services. Google and Microsoft remain top targets, with gaming platforms also commonly abused. Attackers are after credentials, tokens, and access grants. On average, 121 out of every 10,000 users click on phishing links each month, highlighting the continued effectiveness of these attacks despite awareness efforts.
Malware delivery: Attackers abuse trusted platforms like GitHub, OneDrive, and Amazon S3 to host malware, because users are more likely to download from familiar sources. On average, 22 out of every 10,000 users encounter malicious content each month.
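The per-10,000 rates above translate directly into expected monthly incident counts for an organization of a given size; the 5,000-user headcount below is purely illustrative.

```python
# Monthly rates from the report, expressed per user.
PHISHING_RATE = 121 / 10_000  # users clicking phishing links per month
MALWARE_RATE = 22 / 10_000    # users encountering malicious content per month

def expected_incidents(users: int, rate: float) -> float:
    """Expected number of affected users per month at the given rate."""
    return users * rate

org_size = 5_000  # hypothetical organization
print(f"Phishing clicks/month: {expected_incidents(org_size, PHISHING_RATE):.1f}")
print(f"Malware encounters/month: {expected_incidents(org_size, MALWARE_RATE):.1f}")
```

Even a mid-sized organization can therefore expect dozens of phishing clicks every month, which is why the report treats awareness training alone as insufficient.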
Personal cloud app risk: Personal apps such as LinkedIn, OneDrive, and Google Drive are both heavily used and heavily blocked. Most data policy violations involve regulated data or intellectual property, prompting organizations to restrict risky personal usage.