Hackers Exploit ChatGPT for Malware and Phishing Campaigns
A China-aligned threat actor tracked as UTA0388 is using OpenAI’s ChatGPT platform to develop malware and craft multilingual spear-phishing lures. The activity highlights how attackers are leveraging AI to scale cyber operations, and how the cybersecurity community is adapting to counter it.
“What is happening and why does it matter?”
Since June 2025, security firm Volexity has been tracking UTA0388, a China-aligned hacker group that leverages Large Language Models (LLMs), including OpenAI’s ChatGPT, to enhance both its malware development and its phishing operations against organizations across North America, Asia, and Europe.
The campaigns began as spear-phishing emails impersonating senior researchers from fabricated, but legitimate-sounding, think tanks. The goal was to trick recipients into clicking a link that ultimately loaded a backdoor named GOVERSHELL. This backdoor granted persistent, unauthorized remote access, enabling hackers to execute commands, move laterally, steal data, and deploy further malware.
Over time, UTA0388 expanded its efforts to include rapport-building phishing, in which the group first engages a target in a benign conversation before sending a malicious link. The GOVERSHELL payload relied on DLL search order hijacking to load its malicious code, giving the hackers remote command execution and persistence through scheduled tasks.
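That persistence mechanism is something defenders can hunt for directly. The sketch below is a minimal, illustrative Python script (Windows-only, shelling out to the built-in schtasks utility) that flags scheduled tasks whose actions launch executables from user-writable locations such as AppData or Temp. The path patterns are our own illustrative assumptions, not indicators published by Volexity, and any real hunt should be tuned to your environment.

```python
import csv
import io
import subprocess

# Directories that legitimate scheduled tasks rarely launch binaries from.
# These fragments are illustrative assumptions, not published GOVERSHELL indicators.
SUSPICIOUS_PATH_FRAGMENTS = [
    r"\appdata\local\temp",
    r"\appdata\roaming",
    r"\users\public",
    r"\downloads",
]

def suspicious_scheduled_tasks():
    """Return (task name, command) pairs whose action runs from a user-writable path."""
    # Query every scheduled task in verbose CSV form using the built-in schtasks tool.
    output = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    ).stdout

    findings = []
    for row in csv.DictReader(io.StringIO(output)):
        command = (row.get("Task To Run") or "").lower()
        if any(fragment in command for fragment in SUSPICIOUS_PATH_FRAGMENTS):
            findings.append((row.get("TaskName", "<unknown>"), command))
    return findings

if __name__ == "__main__":
    for name, command in suspicious_scheduled_tasks():
        print(f"[!] {name}: {command}")
```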
Researchers have since identified five distinct GOVERSHELL variants. The family shifted from C++ to Golang, adopted new encryption methods, and shows a pattern of wholesale, non-iterative rewrites that points to automated AI assistance in code generation.
“How is ChatGPT being used?”
OpenAI’s October 2025 threat report confirmed the connection between UTA0388’s activity and a cluster of banned ChatGPT accounts linked to Chinese-speaking hackers. These accounts were used to prompt the model for:
Encrypted command-and-control (C2) code and HTTPS/WebSocket communication loops;
PowerShell scripts for process enumeration and antivirus evasion;
Polished phishing templates in English, Chinese, and Japanese with cultural nuance; and
Automated reconnaissance scripts using open-source scanners and APIs.
While overtly malicious prompts were blocked by ChatGPT’s safety filters, the hackers crafted dual-use requests to generate otherwise legitimate code snippets that could later be repurposed for malicious ends.
Volexity’s analysis reinforces this finding. Many phishing emails contained “hallucinations” common to LLMs, such as fabricated institutions like the Copenhagen Governance Institute, inconsistent sender personas, and mixed-language text (e.g., Mandarin subject lines paired with German message bodies). These irregularities point to automated drafting without human quality control.
“What does this mean for defenders?”
OpenAI has permanently disabled all UTA0388-related accounts. While the hackers used ChatGPT for incremental gains, such as faster code iteration and multilingual phishing templates, the platform did not provide new offensive capabilities or generate previously undisclosed exploits.
This response underscores a key reality: AI is now part of the cyber threat landscape. OpenAI continues strengthening its safeguards through robust policy enforcement, intelligence sharing, and public threat reporting to help defenders stay informed.
For security teams, the main takeaway is to stay proactive. Monitor for AI-generated phishing artifacts (mixed languages, incoherent formatting), validate code provenance for emerging malware families like GOVERSHELL, and leverage AI responsibly to detect synthetic content and automate threat analysis.
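As one concrete example of that first point, the sketch below flags messages whose subject and body are written predominantly in different Unicode scripts, the kind of Mandarin-subject/German-body mismatch Volexity observed. The script-detection heuristic and the pass/fail rule are our own illustrative assumptions, not a production detection rule, and would sit alongside, not replace, existing email security controls.

```python
import unicodedata
from collections import Counter

def dominant_script(text: str) -> str:
    """Very rough guess at the dominant writing system of a piece of text."""
    counts = Counter()
    for char in text:
        if not char.isalpha():
            continue
        # Unicode character names start with the script/block name, e.g.
        # "LATIN SMALL LETTER A" or "CJK UNIFIED IDEOGRAPH-4E2D".
        name = unicodedata.name(char, "")
        counts[name.split(" ")[0]] += 1
    return counts.most_common(1)[0][0] if counts else "UNKNOWN"

def looks_machine_drafted(subject: str, body: str) -> bool:
    """Flag emails whose subject and body use different dominant scripts."""
    subject_script = dominant_script(subject)
    body_script = dominant_script(body)
    return (
        subject_script != "UNKNOWN"
        and body_script != "UNKNOWN"
        and subject_script != body_script
    )

# Example: a Mandarin subject paired with a German body trips the check.
print(looks_machine_drafted("关于智库合作的提案", "Sehr geehrter Herr Professor, ..."))  # True
print(looks_machine_drafted("Collaboration proposal", "Dear Professor, ..."))          # False
```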
UTA0388’s activity shows how quickly attackers can scale with AI assistance, making speed, collaboration, and transparency the new cornerstones of effective defense.
“How can Hive Systems help?”
Hive Systems helps organizations assess, prepare, and respond to emerging AI-enabled threats. Our team tracks nation-state activity, evaluates AI risks, and integrates detection strategies aligned with evolving hacker capabilities.
Whether it’s building AI-resistant phishing defenses, analyzing malware that leverages automated code generation, or hardening response workflows against synthetic threats, Hive Systems empowers clients to stay ahead of the curve, defending not just against today’s attackers but tomorrow’s as well.
Train your team with our managed phishing simulations.
Follow us - stay ahead.