Recently, Twitter user @r_cky0 shared a shocking experience: while using ChatGPT to generate code for developing a blockchain-based trading bot, the code recommended by GPT contained a backdoor that sent his private key to a phishing website, resulting in a loss of approximately $2,500. This incident has sparked widespread concern, especially in the blockchain and security communities.
Potential Risks of AI-Generated Code
AI-generated code undoubtedly brings convenience to developers, especially when it comes to quickly writing complex programs. Tools like ChatGPT can provide developers with efficient code samples, automate function completions, and offer debugging suggestions. However, as demonstrated by @r_cky0’s experience, the code generated by AI may contain severe security vulnerabilities or even malicious backdoors.
In this case, the generated code included malicious code that exploited the trading bot’s permissions to send the private key to an external phishing website. Such attacks may not be accidental but rather reflect patterns that the AI has learned from unsafe sources (such as phishing websites or malicious open-source projects).
The Source of Malicious Patterns: AI’s Training Data
The capabilities of AI-generated code stem from large datasets used for training, which include code from open-source repositories, online forums, and other sources. However, because these datasets may be incomplete or contain insecure content, AI can unintentionally learn and reproduce dangerous programming patterns. Especially in sensitive fields like blockchain and cryptocurrency, even a small mistake can result in massive financial losses.
Expert Analysis: Current AI models struggle to accurately detect whether generated code contains potential backdoors. Since these models learn from vast code snippets rather than directly understanding the security or legality of the code, the AI tools may unknowingly reproduce hidden security vulnerabilities or malicious backdoors.
Raising Security Awareness: How to Protect Yourself
In light of the security risks posed by AI-generated code, developers and users should remain vigilant and avoid blindly trusting AI-generated code. Below are some practical security tips:
-
Code Review: When using AI-generated code, it is essential to conduct a thorough review. Especially for code snippets involving sensitive information (such as private keys or API keys), developers should independently verify and test the security of the code.
-
Code Auditing Tools: Use professional code auditing tools to check AI-generated code for hidden vulnerabilities or backdoors. Many open-source tools and platforms can help detect common security issues in the code.
-
Avoid Using Generated Code in Production Environments Directly: Before deploying AI-generated code to production, it should first be tested in an isolated environment to ensure no security risks are present.
-
Content Moderation for AI Platforms: Industry experts suggest that AI platforms should enhance their content moderation mechanisms to identify and alert users about potential security risks. This would not only increase users’ trust in AI-generated code but also reduce the risk of malicious code spreading.
Future Outlook: AI and Security Integration
As AI technology continues to evolve, more AI tools focused on security may emerge, specifically designed to detect security vulnerabilities while generating code. These tools could automatically identify security issues and provide warnings or suggestions for fixes when generating code. Additionally, AI could be integrated with existing security technologies (such as blockchain auditing tools) to further enhance the security of generated code.
However, until then, users need to stay highly alert and implement appropriate security measures when using AI-generated code. Only by ensuring code security can the full potential of AI-generated code be realized, helping developers accomplish tasks efficiently and rapidly.