Educational Technology Practicum
This project implements one or more LLM-powered assistants that can be activated to enhance an education-oriented task or app.
You are expected to implement a working LLM "agent" (no-code options are provided, so this requirement may not require actually writing any code) supported by a carefully crafted prompt. However, your user interface for interacting with your agent does not need to be fully implemented; it can, again, be:
Fork the assignment repository to make your own personal area to work in, and use Git to push your agent to it. Minimally, you must push your progress before both the in-class demo and the final submission deadlines.
In addition to the basic information included in the README file:
Finally, you must record a video, no more than 5 minutes long, walking through your solution. You can use Zoom, one of the many free screen-capture tools, or a free trial of a more powerful tool.
While OpenAI's "chat" interface has proven to be a revolutionary way to introduce many people to LLMs, there are advantages to embedding LLM agent(s) within a broader app that supports more targeted educational goals:
Design embedded LLM assistant(s) to improve an existing learning experience, or create your own app to tackle a learning scenario.
You will most likely need to engineer specific prompt(s) to submit to the LLM to tailor its response to your specific task(s). Consider what other text would be appropriate to pass on to the LLM, such as:
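As one way to think about this, a tailored request to an LLM is usually a task-specific system prompt combined with app context and the user's input. The sketch below is a minimal, hypothetical illustration (the function name, role conventions, and lesson text are placeholders, not part of the assignment):

```python
# Hypothetical sketch: assembling a tailored prompt for an LLM request.
# Uses the common "system"/"user" chat-message convention; the tutor
# prompt and lesson context below are invented examples.

def build_messages(task_prompt, user_input, context=None):
    """Combine a task-specific system prompt, optional app context
    (e.g. the current lesson or the student's recent answers), and
    the user's input into a chat-style message list."""
    messages = [{"role": "system", "content": task_prompt}]
    if context:
        messages.append({"role": "system",
                         "content": "Context for this task:\n" + context})
    messages.append({"role": "user", "content": user_input})
    return messages

# Example: a vocabulary-tutor prompt with the current lesson as context.
msgs = build_messages(
    "You are a patient vocabulary tutor. Explain mistakes briefly.",
    "Why is 'their' wrong in my sentence?",
    context="Lesson 3: their / there / they're",
)
```

The resulting list can then be passed to whatever LLM tool or API you choose; the point is that the app, not the student, supplies the task framing and context.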
You can use any tool you want to build your assistant. I recommend the following:
If you do not have any specific ideas, feel free to ask your friends or current students.
Here is a presentation about Duolingo Max to give you an example (you are certainly not expected to do something this sophisticated). You are also welcome to look at any other examples online (such as those given in class), but your version should be distinctly different from any you find (i.e., create your own, do not simply copy one).