
Using LLMs for Smart Contract Programming
Contents
1. Level 1 – Basic usage
2. Level 2 – Prompt engineering
3. Level 3 – Back-and-forth to correct errors
4. Level 4 – With local LLMs
5. Level 5 – With agents automating code generation and fixing errors
Level 1 – Basic Usage
• We will use Poe as an example of an LLM-based chatbot
• There are plenty of others, like ChatGPT, Claude, Gemini, etc.
• These are all examples of closed-source LLM chatbots
• Meaning you don’t have access to the underlying model weights, the training data, or the training and fine-tuning process
• There are open-source LLMs as well, like Llama (from Meta) and many others (see Huggingface)
Level 2 – Prompt Engineering
• Give prompts that are as detailed as possible
• This helps the LLM give you better and more accurate responses
• Some things to include in the prompt would be:
• The specifications
• Example inputs and/or outputs
• Names of certain functions, if relevant
• External context (this can be supplied via a RAG application as well)
• Etc.
• Here is an example of what a detailed prompt for the previous example might look like:
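For illustration, a hypothetical detailed prompt for a simple token contract (the token name, function signatures, and numbers below are all assumptions, made up for this sketch):

    “Write a Solidity (^0.8.x) smart contract for a simple ERC-20 style token.
    Specifications: name "DemoToken", symbol "DMT", 18 decimals, and a fixed
    supply of 1,000,000 tokens minted to the deployer.
    Implement transfer(address to, uint256 amount) and balanceOf(address account),
    and revert on insufficient balance.
    Example: if Alice holds 100 DMT and calls transfer(bob, 40), then
    balanceOf(alice) should return 60 and balanceOf(bob) should return 40.
    Return only the contract code, with comments.”

Note how it covers the specifications, example inputs/outputs, and relevant function names from the checklist above.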
Level 3 – Back-and-forth to correct errors
• Sometimes (most times, in fact, for complex applications) you won’t get the desired result, or you’ll get compilation errors from running the LLM-generated code.
• Copy-paste the error into the LLM, or explain what is not
working in the result, and ask it to fix it.
• Again, the more you describe the error, the better the
chances of getting the desired result.
• Here’s a demo in Poe…
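For illustration, a follow-up message for a compilation error might look like this (the error text below is made up; real compiler output will differ):

    “The contract you generated fails to compile. solc reports:
        DemoToken.sol:42: Error: Undeclared identifier "totalSupply_".
    The error is on the line that updates balances inside transfer().
    Please fix it and return the full corrected contract.”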
Level 4 – With local LLMs
• Several reasons for using local LLMs:
• Data Privacy and Security
• Customization and Fine-Tuning
• Performance and Latency
• Cost Control
• Offline Capabilities
• Greater Transparency and Control
• Flexibility in Deployment
• Several tools exist to run local LLMs:
• LM Studio, Llamafile, GPT4ALL, Ollama, Llama.cpp, and many others
GUI-Based Local LLM Tool Example – LM Studio
• Download and install it from here.
• Step 1 – Search for an LLM to download locally and use with LM Studio
• Step 2 – Select the model you want to download and install
• Step 3 – Go to “AI Chat” on the left panel, and create a
New Chat
• Step 4 – Select the model you just downloaded from the
top panel. All the LLMs you have downloaded will show
here.
• Step 5 – Chat with the LLM. You can see the RAM and
CPU (and GPU, if your computer has one) usage go up.
• Step 6 – You can tweak the model parameters and behavior in the Settings panel on the right
Command-Line-Based Local LLM Tool Example – Ollama
• Read its documentation here to learn more, as well as
how to use it as an API endpoint.
• It provides a command-line interface and can also be used programmatically as an API in your web apps.
• Use the syntax “ollama pull <model_name>” to download the LLM you want (model library here), and then “ollama run <model_name>” to run the LLM in the CLI.
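For example, here is a minimal Python sketch of calling Ollama’s API programmatically (the model name and prompt are illustrative; this assumes the server is running on Ollama’s documented default port, 11434):

import requests

# Ask a locally running Ollama server for a completion.
# Assumes the model has already been pulled, e.g. with: ollama pull llama3
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",                                      # illustrative model name
        "prompt": "Write a minimal ERC-20 token in Solidity.",  # illustrative prompt
        "stream": False,  # return one JSON object instead of a token stream
    },
)
print(response.json()["response"])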
Level 5 – With agents automating code generation and fixing errors
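A minimal sketch of the idea, in Python: the loop below automates the Level 3 back-and-forth by generating code, trying to compile it, and feeding the compiler errors back into the prompt. It reuses the Ollama endpoint from the previous section and assumes solc is on your PATH; everything else (names, prompt wording) is illustrative, and a real agent framework would add more, such as stripping markdown fences from responses and running tests:

import subprocess
import tempfile
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def llm_generate(prompt, model="llama3"):
    """Ask a local Ollama model for code (same API call as the example above)."""
    r = requests.post(OLLAMA_URL, json={"model": model, "prompt": prompt, "stream": False})
    return r.json()["response"]

def compile_solidity(code):
    """Try to compile the code with solc; return (ok, error_output)."""
    with tempfile.NamedTemporaryFile("w", suffix=".sol", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(["solc", "--bin", path], capture_output=True, text=True)
    return result.returncode == 0, result.stderr

def agent_loop(spec, max_attempts=5):
    """Generate -> compile -> feed errors back, until the contract compiles."""
    prompt = spec
    for _ in range(max_attempts):
        code = llm_generate(prompt)
        ok, errors = compile_solidity(code)
        if ok:
            return code
        # Automate the Level 3 back-and-forth: paste the errors into the next prompt.
        prompt = (spec + "\n\nYour previous code:\n" + code +
                  "\n\nfailed to compile with:\n" + errors +
                  "\nFix the errors and return only the corrected contract.")
    raise RuntimeError("No compiling contract after %d attempts" % max_attempts)

# e.g. agent_loop("Write a minimal ERC-20 token in Solidity named DemoToken.")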
