Etherscan launches AI-powered Code Reader

The tool allows users to retrieve and interpret the source code of a specific contract address via an AI prompt.

On June 19, Ethereum block explorer and analytics platform Etherscan launched a new tool dubbed “Code Reader” that uses artificial intelligence to retrieve and interpret the source code of a specific contract address. After a user inputs a prompt, Code Reader generates a response via OpenAI’s large language model, providing insight into the contract’s source code files. The tool’s tutorial page reads:

“To use the tool, you need a valid OpenAI API Key and sufficient OpenAI usage limits. This tool does not store your API keys.”

Code Reader’s use cases include gaining deeper insight into contracts’ code via AI-generated explanations, obtaining comprehensive lists of smart contract functions related to Ethereum data, and understanding how the underlying contract interacts with decentralized applications. “Once the contract files are retrieved, you can choose a specific source code file to read through. Additionally, you may modify the source code directly inside the UI before sharing it with the AI,” the page says.
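The workflow described above — retrieve a contract’s verified source from Etherscan, then hand it to an LLM with a user prompt — can be sketched in Python. This is not Code Reader’s own implementation; it is a minimal illustration built on Etherscan’s public `getsourcecode` contract API, with hypothetical helper names:

```python
# Sketch of a Code Reader-style workflow (hypothetical helpers, not
# Etherscan's actual implementation). It builds the Etherscan API request
# for a contract's verified source and combines it with a user question
# into a prompt that could be sent to an LLM.
import urllib.parse

ETHERSCAN_API = "https://api.etherscan.io/api"

def source_code_url(address: str, api_key: str) -> str:
    """Build the Etherscan 'getsourcecode' request URL for a contract."""
    query = urllib.parse.urlencode({
        "module": "contract",
        "action": "getsourcecode",
        "address": address,
        "apikey": api_key,
    })
    return f"{ETHERSCAN_API}?{query}"

def build_prompt(source: str, question: str) -> str:
    """Attach the retrieved contract source to the user's question."""
    return f"{question}\n\nContract source:\n{source}"

# Example: construct the request for a contract address.
url = source_code_url("0x0000000000000000000000000000000000000000", "YOUR_KEY")
prompt = build_prompt("pragma solidity ^0.8.0; ...", "What does this contract do?")
```

Fetching the URL and forwarding the prompt to OpenAI’s API (with the user-supplied key, as the tutorial page notes) completes the loop.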

A demonstration of the Code Reader tool. Source: Etherscan

Amid an AI boom, some experts have cautioned about the feasibility of current AI models. According to a recent report published by Singaporean venture capital firm Foresight Ventures, “computing power resources will be the next big battlefield for the coming decade.” Despite growing demand for training large AI models on decentralized distributed computing power networks, researchers say current prototypes face significant constraints, including complex data synchronization, network optimization, and data privacy and security concerns.

In one example, the Foresight researchers noted that storing a large model with 175 billion parameters in single-precision floating-point representation would require around 700 gigabytes. However, distributed training requires these parameters to be frequently transmitted and updated between computing nodes. With 100 computing nodes, each needing to update all parameters at each unit step, the model would require transmitting 70 terabytes of data per second, far exceeding the capacity of most networks. The researchers summarized:

“In most scenarios, small AI models are still a more feasible choice, and should not be overlooked too early in the tide of FOMO [fear of missing out] on large models.”
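The researchers’ back-of-the-envelope arithmetic can be checked directly (assuming 4 bytes per single-precision parameter and one update step per second, as the example implies):

```python
# Verify the Foresight report's bandwidth estimate for distributed training.
params = 175e9          # 175 billion parameters
bytes_per_param = 4     # single-precision float (FP32)

model_bytes = params * bytes_per_param
# 175e9 * 4 = 700e9 bytes = 700 GB, matching the report's storage figure.

nodes = 100
per_step_bytes = nodes * model_bytes
# 100 * 700 GB = 70e12 bytes = 70 TB per update step; at one step per
# second, that is 70 TB/s of aggregate network traffic.

print(f"Model size: {model_bytes / 1e9:.0f} GB")
print(f"Traffic per step: {per_step_bytes / 1e12:.0f} TB")
```

At 70 TB/s, the required bandwidth is several orders of magnitude beyond typical datacenter interconnects, which is the basis for the report’s skepticism about decentralized training of large models.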
