Team Project: AI Code Assistant
In teams, write a program that implements at least four code assistants that suggest improvements for given code, each implemented as an AI agent that follows these steps:
- Scan given code to locate issues (via AI or a standard tool)
- Generate suggestions for improvement (the format will depend on how you intend to apply them)
- Validate the suggestions (automatically run tests, no matter how small the suggestion)
- Apply the suggestions (optionally, require human approval)
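The four steps above can be sketched as a small pipeline. This is only an illustrative outline, not required structure; the trivial built-in scanner stands in for a real linter or LLM call, and all function names are our own assumptions:

```python
import logging

logger = logging.getLogger("assistant")

def scan(code: str) -> list[dict]:
    """Locate issues; a trivial check stands in for a linter, static tool, or LLM."""
    issues = []
    for lineno, line in enumerate(code.splitlines(), start=1):
        if "print(" in line:
            issues.append({"line": lineno, "message": "use logging instead of print"})
    return issues

def suggest(code: str, issue: dict) -> dict:
    """Generate a suggestion; the format depends on how you intend to apply it."""
    return {"line": issue["line"], "replacement": "logger.info(...)", "reason": issue["message"]}

def validate(code: str, suggestion: dict) -> bool:
    """Run whatever tests apply, however small the suggestion."""
    return 1 <= suggestion["line"] <= len(code.splitlines())

def apply_suggestion(code: str, suggestion: dict, approved: bool = True) -> str:
    """Apply the suggestion, optionally gated on human approval."""
    if not approved:
        return code
    lines = code.splitlines()
    lines[suggestion["line"] - 1] = suggestion["replacement"]
    return "\n".join(lines)

def run_pipeline(code: str) -> str:
    for issue in scan(code):
        s = suggest(code, issue)
        if validate(code, s):
            code = apply_suggestion(code, s)
        else:
            logger.warning("rejected suggestion: %s", s)
    return code
```

Each function is a seam you can later replace with a real tool or LLM call without changing the overall flow.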
The program should be usable on the command-line and through a GUI of your choice.
Your grade will be determined primarily by your programming process rather than the exact functionality you choose to implement:
You are expected to build your own AI Agents rather than using pre-built ones or low-code tools to build one.
Submitting Your Work
Use GIT to push your team's implementation to the main branch of the provided, shared, ai_assistant_teamNN repository hosted in the course's Gitlab group for each Phase:
- Your code should follow your language's coding conventions, be reasonably commented, and use logging (rather than print statements)
- Submit .gitignore and .gitattributes files, as well as any properly attributed resources (images, sounds, configuration files, etc.) your program needs
- Submit tests that follow these naming standards and achieve at least 70% line coverage overall
- Submit an updated project README file at the top-level (no separate folder)
Additionally, provide the VCM_ID of your team's primary (production) and secondary (development) servers.
- Tag the commit representing the submitted version of the code as PhaseN_Complete
- Submit a 2-4 minute video showing the team's program in use, with a voice-over explaining how the code works.
Besides simply using Zoom, there are many free screen-capture tools available for every platform, as well as free trials of some fairly powerful ones.
- Submit the log output produced from the program's run that you recorded in your video.
Note: name it phaseN_log.txt so that it is not excluded by your .gitignore rules.
You are responsible for ensuring that all files are correctly pushed to the repository on time.
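Since the log from the run you record must be submitted as phaseN_log.txt, it helps to configure logging to write to that file from the start. A minimal sketch; the logger name and message format here are our own choices, not requirements:

```python
import logging

def configure_logging(phase: int) -> logging.Logger:
    """Log to both the console and the phaseN_log.txt file the submission asks for."""
    logger = logging.getLogger("ai_assistant")
    logger.setLevel(logging.INFO)
    formatter = logging.Formatter("%(asctime)s %(levelname)s %(name)s: %(message)s")
    for handler in (logging.StreamHandler(), logging.FileHandler(f"phase{phase}_log.txt")):
        handler.setFormatter(formatter)
        logger.addHandler(handler)
    return logger
```

Call `configure_logging(1)` once at startup and use the returned logger everywhere instead of print statements.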
Individual Responsibilities when Working as a Team
Although this is a team project, everyone has individual responsibilities to the team that can be summed up as follows:
- actively participate in all team meetings
- regularly contribute clean code to the shared repository in reasonably sized chunks
- solve and code at least one "interesting" design problem
- help teammates at least once by going above and beyond
Your team's project GIT repository should reflect this with many small, purposeful commits using consistently formatted commit messages from all team members, rather than just one or two large "kitchen sink" commits and marathon merging or redesign sessions. Specifically, we will be looking for deliberate attempts to regularly:
- close assigned Issues
- integrate each other's branches via Merge Requests
- refactor code to improve its design
- test your functionality
Unfortunately, conflicts are likely to occur, so please let the Teaching Team know as soon as possible, even if it has happened only once or twice: it is better to deal with the situation early than to face a disaster at the end, when little can be done to get back on track.
Plan your priorities (expressed as Gitlab Issues) to determine during which Phases the features should be implemented.
Specification
Implement at least four code assistants that suggest improvements for given code, such as:
- Code style, perhaps based on a linter or static analyzer
- Clean code principles
- Logic errors or runtime crashes with error messages
- Security issues, perhaps based on a static analyzer
- Additional or improved tests
- Efficiency and performance concerns
- Adherence to programming language idioms
- Design flaws
Internally, each AI Agent needs to keep track of information specific to its purpose, such as:
- system prompt detailing its role, expected input and output, and any other characteristics the response should have
- the criteria by which it should be chosen to provide a response
- specific APIs, databases, or data sets needed to provide a response
- data needed across queries or runs
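One way to keep this per-agent information together is a small dataclass; the field names below are illustrative assumptions, not a prescribed design:

```python
from dataclasses import dataclass, field

@dataclass
class AgentProfile:
    """Per-agent configuration: what it does, when to pick it, and what it needs."""
    name: str
    system_prompt: str                    # role, expected input/output, response style
    selection_criteria: list[str]         # when this agent should be chosen to respond
    required_resources: list[str] = field(default_factory=list)  # APIs, databases, data sets
    persistent_state: dict = field(default_factory=dict)         # data kept across queries/runs

# Hypothetical example agent:
style_agent = AgentProfile(
    name="style",
    system_prompt="You review code style and suggest fixes as unified diffs.",
    selection_criteria=["lint warnings present", "user requested style review"],
)
```

Keeping this state in one structure also makes it easy to serialize an agent's configuration for logging or persistence between runs.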
In addition to allowing direct command-line access to your agents, you need to implement one of the following frontend options:
- Web page: allows users to upload code and error messages, uses a REST API to access the server agents, and displays the resulting improvements; deployed on your server
- IDE: implements the Extension API to gather code content and integrate the resulting improvements; distributed as a zip file for local install
- Gitlab: implements the Merge Request Pipeline API to gather code content, access the server agents, and integrate the resulting improvements; published on DockerHub to be used in a CI Pipeline
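Whichever frontend you pick, the layer between it and your agents can stay thin. Here is a framework-agnostic sketch of a REST-style handler for the web-page option; plain bytes and dicts stand in for HTTP requests and responses, and the endpoint shape and field names are assumptions, not requirements:

```python
import json

def handle_improve_request(body: bytes, analyze) -> tuple[int, bytes]:
    """POST /improve: body is JSON {"code": ..., "errors": ...}; returns (status, JSON body)."""
    try:
        payload = json.loads(body)
        code = payload["code"]
    except (ValueError, KeyError, TypeError):
        return 400, json.dumps({"error": "expected JSON with a 'code' field"}).encode()
    suggestions = analyze(code, payload.get("errors", ""))
    return 200, json.dumps({"suggestions": suggestions}).encode()
```

Because the handler takes the analysis function as a parameter, the same code can be wired into any web framework and tested without starting a server.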
Start simply: accept entire code files, analyze with a tool or LLM prompt, apply the suggested changes, and return the updated code. Once you have that working for one tool, then consider how to enhance each step and generalize the process for a second tool. Each step of the process contains a range of issues for you to consider and tailor to your chosen frontend.
You will not be able to test your AI Agents precisely since each response may be unique, so instead focus on testing that the parts work together correctly, such as:
- are inputs given in the correct format?
- are data returned in the correct format?
- are the "calculations" performed correctly (API or tool call completes, LLM gets correct input and produces a response, etc.)?
- are error cases and unexpected inputs properly reported and handled?
Test your backend in a Gitlab Pipeline, with coverage, but testing your chosen frontend is not required.
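Because LLM responses are non-deterministic, tests like those above usually replace the model with a stub and assert on the plumbing instead. A sketch using `unittest.mock` from the standard library; `improve` and `call_llm` are assumed names for a seam in your own code:

```python
from unittest.mock import Mock

def improve(code: str, call_llm) -> dict:
    """The unit under test: builds the prompt, calls the model, checks the output format."""
    response = call_llm(f"Suggest improvements for:\n{code}")
    if not isinstance(response, dict) or "suggestions" not in response:
        raise ValueError("malformed LLM response")
    return response

fake_llm = Mock(return_value={"suggestions": ["rename variable x"]})
result = improve("x = 1", fake_llm)

# The LLM received the correct input...
assert "x = 1" in fake_llm.call_args.args[0]
# ...and the output has the expected format.
assert result["suggestions"] == ["rename variable x"]
```

The same pattern covers the error cases: pass a stub that returns a malformed response and assert that the error is reported rather than silently swallowed.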
Resources
Follow these directions to acquire your Duke AI Gateway API Key, set up your Duke VCM Server, install Docker, and deploy your code to your Server.