- Blockchain security company OpenZeppelin has observed that AI can be used to improve the efficiency of web3 builders
- The company also believes that AI tools are yet to match the human eye when editing smart contracts
- OpenZeppelin ran an experiment to establish the accuracy of AI tools like ChatGPT-4 when editing smart contracts
Blockchain security company OpenZeppelin has observed that artificial intelligence (AI) can be used to improve the efficiency of web3 builders. While conducting an experiment to establish the accuracy of AI tools like ChatGPT-4 when editing smart contracts, the company noted that such tools have yet to match the attention of human auditors. However, the firm clarified that AI can only boost efficiency if the builder knows “what [they] are doing.”
AI Passes 20 of 28 Levels
During the experiment, the firm’s Felix Wegener and Mariko Wakabayashi noted that ChatGPT-4 performed fairly well on most tasks but struggled with tasks involving newer information introduced after its training data was collected.
Still, its accuracy wasn’t far off: the model passed 20 of the test’s 28 levels, although it received extra help to crack some of them. The result suggests that the current breed of AI-based smart contract auditors has yet to match the accuracy of human web3 builders and smart contract auditors.
According to Wakabayashi, AI tools like ChatGPT-4 aren’t currently ideal for security auditing because they are built around text and text-based conversations rather than vulnerability detection.
Use More Targeted Vulnerability Data for Better Results
However, the OpenZeppelin staff believes that this shortcoming can be fixed by training an AI model “with more targeted vulnerability data and specific output goals.” The observations come as blockchain-focused firms like Find Satoshi Labs create tools to showcase the intersection of web3 and AI.
What does this mean for AI in web3 security?
If we train an AI model with more targeted vulnerability data and specific output goals, we can build more accurate and reliable solutions than powerful LLMs trained on vast amounts of data.
— Mariko (@mwkby) May 31, 2023
Although ChatGPT-4 failed some levels, passing 20 of 28 suggests that AI tools may approach human accuracy in the near future.