Unlocking AI Coding Assistants Part 1: Real-World Use Cases
Dive into some AI coding assistant use cases and their effectiveness, highlighting their capabilities and limitations.
Do you think that AI coding assistants are not working for you? Do you constantly get wrong responses, and have you given up on using them? This blog covers some use cases where AI coding assistants are helpful and will support you in your daily work. Enjoy!
Introduction
Nowadays, many AI coding assistants are available. They are demonstrated at conferences, in videos, described in blogs, etc. The demos are often impressive, and it seems that AI is able to generate almost all of the source code for you; all you need to do is review it. However, when you start using AI coding assistants at work, it may seem that they are not working for you and only cost you more time. The truth lies somewhere in between. AI coding assistants can save you a lot of time for certain tasks, but they also have some limitations. It is important to learn which tasks they can help you with and to recognize when you hit the limits of AI. Be aware that AI is evolving at a fast pace, so the limitations of today may be resolved in the near future.
In the remainder of this blog, some tasks are executed with the help of an AI coding assistant. The responses are evaluated, and different techniques are applied that can be used to improve the responses when necessary. This blog is the first of a series; more complex tasks will be covered in subsequent blogs.
The tasks are executed with the IntelliJ IDEA DevoxxGenie AI coding assistant.
Two setups are used in this blog, and the setup in use will be clearly mentioned for each task:
- Ollama as inference engine with qwen2.5-coder:7b as model. This runs on CPU only.
- LMStudio as inference engine with qwen2.5-coder:7b as model. This runs on GPU only.
As you can see, locally running models are used. The reason for doing so is that you will hit the limits of a model earlier.
The reason for using two setups is that the tasks were started with setup 1. The assumption was that the only difference between CPU and GPU would be performance. However, after a while, it was discovered that a model running on a GPU also provides better responses.
The sources used in this blog are available at GitHub.
Prerequisites
Prerequisites for reading this blog are:
- Basic coding knowledge
- Basic knowledge of AI coding assistants
- Basic knowledge of DevoxxGenie
Task: Explain Kubernetes YAML
The goal is to see whether AI can be helpful in explaining a Kubernetes YAML file or sections of it.
The setup Ollama, qwen2.5-coder, CPU is used.
The Kubernetes YAML file contains a resources section which is not entirely clear to you.
...
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
...
Prompt 1
Open the file, select the resources section and open a new chat window. Enter the following prompt:
/explain
DevoxxGenie will expand this to the following prompt:
Break down the code in simple terms to help a junior developer grasp its functionality.
Response
The response can be viewed here.
Response Analysis
The response is correct. However, it talks about mebibytes; what exactly is a mebibyte?
Prompt 2
Remain in the same chat window and enter the following prompt.
how much RAM is 64Mi and what is a mebibyte
Response
The response can be viewed here.
Response Analysis
The response is correct and the explanation accurately defines a Mebibyte.
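If you want to verify the numbers yourself: a mebibyte (MiB) is 2^20 bytes, slightly more than a megabyte (10^6 bytes). Below is a minimal Java sketch (not part of the repository) that prints what a memory request of 64Mi amounts to.
// Quick check of the arithmetic: 1 MiB = 1,048,576 bytes.
public class MebibyteExample {
    public static void main(String[] args) {
        long mebibyte = 1024L * 1024L;                        // 2^20 bytes
        long request = 64 * mebibyte;                         // the "64Mi" memory request
        System.out.println("64Mi = " + request + " bytes");  // 67,108,864 bytes, roughly 67 MB
    }
}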
Task: Explain Java Code
The goal is to see whether AI can be helpful in explaining Java code.
The setup LMStudio, qwen2.5-coder, GPU is used.
The Refactor class contains some complex logic and makes use of several other classes all located in the same package.
Prompt
Open the Refactor file and add the entire source directory to the Prompt Context (right-click a directory and use the context menu to add it to the Prompt Context). Enter the following prompt:
/explain
DevoxxGenie will expand this to the following prompt:
Break down the code in simple terms to help a junior developer grasp its functionality.
Response
The response can be viewed here.
Response Analysis
This is a fairly good explanation of what this class is doing.
Task: Explain Regex
The goal of this task is to see whether or not AI can be helpful in explaining regular expressions.
The setup Ollama, qwen2.5-coder, CPU is used.
A common regex for password validation is available in file regex-1.txt.
^(?=.*[A-Z])(?=.*[a-z])(?=.*\d)(?=.*[!@#$%^&*])[A-Za-z\d!@#$%^&*]{8,20}$
Prompt 1
Open the file and enter the prompt:
/explain
DevoxxGenie will expand this to the following prompt.
Break down the code in simple terms to help a junior developer grasp its functionality.
Response
The response can be viewed here.
Response Analysis
The response is correct.
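A quick way to double-check such an explanation is to run the regex against a few strings yourself. A minimal Java sketch (the test strings are made up and not part of the repository):
import java.util.regex.Pattern;

// The regex requires an uppercase letter, a lowercase letter, a digit and a
// special character, with a total length of 8 to 20 allowed characters.
public class PasswordRegexExample {
    public static void main(String[] args) {
        Pattern pattern = Pattern.compile(
            "^(?=.*[A-Z])(?=.*[a-z])(?=.*\\d)(?=.*[!@#$%^&*])[A-Za-z\\d!@#$%^&*]{8,20}$");
        System.out.println(pattern.matcher("Password1!").matches()); // true
        System.out.println(pattern.matcher("password").matches());   // false: no uppercase, digit or special character
    }
}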
Maybe this was too obvious; let's try a made-up example.
Prompt 2
Open a new chat window.
Open file regex-2.txt. This file contains the following regular expression.
^\d{3}-[A-Z][a-z]{4}-\d{2}[a-z]{2}$
Enter the following prompt.
/explain
DevoxxGenie will expand this to the following prompt.
Break down the code in simple terms to help a junior developer grasp its functionality.
Response
The response can be viewed here.
Response Analysis
The response is correct.
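Again, the explanation can be verified with a small test. A minimal Java sketch (the example strings are made up): the pattern expects three digits, a dash, one uppercase letter followed by four lowercase letters, a dash, two digits and two lowercase letters.
import java.util.regex.Pattern;

public class MadeUpRegexExample {
    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("^\\d{3}-[A-Z][a-z]{4}-\\d{2}[a-z]{2}$");
        System.out.println(pattern.matcher("123-Hello-42ab").matches()); // true
        System.out.println(pattern.matcher("12-Hello-42ab").matches());  // false: only two leading digits
    }
}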
Task: Explain Cron Expression
The goal of this task is to see whether AI can be helpful in explaining cron expressions. In the example, a cron expression for Spring Scheduling will be used. The important thing to notice is that Spring Scheduling cron expressions have an additional component to define the seconds. This is different from a regular crontab cron expression.
The setup Ollama, qwen2.5-coder, CPU is used.
The cron expression is located in a properties file.
application.cron-schedule=0 */15 * * * *
Prompt 1
Open the file, select the cron expression and enter the prompt:
/explain
DevoxxGenie will expand this to the following prompt:
Break down the code in simple terms to help a junior developer grasp its functionality.
Response
The response can be viewed here.
Response Analysis
The LLM recognizes that it is a cron expression, but the explanation is not correct: this is a valid Spring Scheduling cron expression, whereas the LLM assumed it was a crontab cron expression.
Prompt 2
Let's give the LLM a bit more context.
Remain in the same chat window, so that the chat history is sent to the LLM, and enter the prompt.
explain the selected Spring Scheduling cron expression
Response
The response can be viewed here.
Response Analysis
The response is correct this time aside from the summary. So, by adding the information that it is about a Spring Scheduling cron expression, the LLM was able to explain it correctly.
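For context, such an expression typically ends up on a @Scheduled method. A minimal sketch, assuming scheduling is enabled with @EnableScheduling (the class and method names are made up; only the property name comes from the example above):
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class CleanupScheduler {

    // Spring evaluates six fields, the first one being the seconds:
    // "0 */15 * * * *" fires at second 0 of every 15th minute.
    @Scheduled(cron = "${application.cron-schedule}")
    public void run() {
        // task logic goes here
    }
}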
Always be aware that you should provide enough context to the LLM; otherwise, the responses will not be what you expect.
Task: Review Code
The goal of this task is to see whether AI can be helpful in reviewing code. Take a look at the following db.yml, which is a Kubernetes deployment file taken from the Spring Petclinic.
When you take a look at this file, you will notice some issues with it:
- The secret is hardcoded
- No resource limits are added
- The liveness and readiness probes are not quite OK
The setup Ollama, qwen2.5-coder, CPU is used.
Prompt 1
Open the db.yml file and use the following prompt:
/review
DevoxxGenie will expand this to the following prompt:
Review the selected code, can it be improved or are there any bugs?
Response
The response can be viewed here.
Response Analysis
The response is correct. However, an additional question arises: a comment is made about secret management, but not about how this can be solved.
So, apply the suggested changes; they are located in the file db-review.yml.
Prompt 2
Open db-review.yml and enter a follow-up prompt in the same chat window (this way, the chat memory is sent to the LLM).
the secrets are hardcoded, I want to change this in order to adhere security best practices, what should be changed
Response
The response can be viewed here.
Response Analysis
The response is correct, although the db.yml file still contains a Secret section with the hardcoded values. This section should have been left out.
So, apply the suggested changes (they are located in the file db-secret.yml) and remove the Secrets section.
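As an illustration of what removing the hardcoded values means in practice, an environment variable can reference a Secret that is created outside the deployment file (for example with kubectl or an external secret manager). The fragment below is a sketch only; the container, Secret and key names are made up and do not come from the Petclinic files.
containers:
  - name: db
    image: mysql:8
    env:
      - name: MYSQL_ROOT_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-credentials
            key: password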
Conclusion
From the examples used in this blog, the following conclusions can be drawn:
- AI coding assistants are quite good at explaining different types of code, like Kubernetes yaml files, Java source code, regular expressions, cron expressions, etc.
- It is important to provide enough context to the LLM. For example:
- When explaining Java code, it is very helpful when all used classes are sent in the Prompt Context.
- When you want to explain a Spring Scheduling cron expression, you probably should tell the LLM this, otherwise it might assume that a crontab cron expression is used.
- AI coding assistants are quite good at reviewing code. This is actually quite similar to explaining code. However, you should always review the response before applying it and try to understand it.