bolt worked, and since it is open source, I looked into the repo. It is a simple Next.js app with a FE to render the UI and a BE route to integrate with the LLM.

Let's look at the FE and BE individually:
Backend
There is a single route, `/api/ask`, that takes in the user question and returns a list of component code.

Two helpers, `convertUIMessageToLangChainMessage` and `convertLangChainMessageToUIMessage`, convert the messages from the LangChain format to what the UI can understand and vice versa.

```ts
// UI format
const uiMessages = [
  {
    content: "Hello",
    role: "user",
  },
];

// LangChain format message
const langchainMessages = [new HumanMessage("Hello")];
```
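For illustration, here is a minimal sketch of what those two converters could look like; the helper names come from the repo, but the bodies below are my assumption:

```ts
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";

type UIMessage = { content: string; role: "user" | "assistant" };

// Assumed implementation: map the UI role onto LangChain message classes
const convertUIMessageToLangChainMessage = (m: UIMessage): BaseMessage =>
  m.role === "user" ? new HumanMessage(m.content) : new AIMessage(m.content);

// Assumed implementation: map a LangChain message back to the UI shape
const convertLangChainMessageToUIMessage = (m: BaseMessage): UIMessage => ({
  content: m.content as string,
  role: m._getType() === "human" ? "user" : "assistant",
});
```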
```ts
// JSON format I want the LLM output to be in
type File = {
  content: string;
  language: string;
  name: string;
};

// JSON parser to format the AI output into that JSON shape
const parser = new JsonOutputParser<File[]>();

const llm = new ChatOpenAI({
  model: "gpt-4o-mini",
  temperature: 0,
});

const chain = promptTemplate.pipe(llm).pipe(parser);
```
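To make the flow concrete, invoking the chain could look like the snippet below; the `question` input variable is an assumption and must match whatever placeholder `promptTemplate` actually uses:

```ts
// Hypothetical invocation of the chain defined above
const generatedFiles = await chain.invoke({ question: "Build a pricing card" });
// Thanks to JsonOutputParser, generatedFiles is already parsed as File[]
```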
`promptTemplate` is where we provide instructions to the LLM on what role we want it to take and how to respond.
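The actual prompt lives in the repo; a rough sketch of such a template, assuming LangChain's `ChatPromptTemplate`, might be:

```ts
import { ChatPromptTemplate } from "@langchain/core/prompts";

// Illustrative wording only; the real instructions in the repo differ
const promptTemplate = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a senior React developer. Respond ONLY with a JSON array of " +
      "files, each having `name`, `language`, and `content` fields.",
  ],
  ["human", "{question}"],
]);
```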
Frontend

The UI is a simple `form` that contains the button and input element. I have used `@webcontainer` by the amazing StackBlitz to render and compile the content, and for the terminal I have used `@xterm`.

The `@webcontainer` library uses the WebContainer API, which is a browser-based runtime for executing Node.js applications and operating system commands. It expects a `files` object in a specific format that gets mounted into the container; that's how we tell the container about our file system. Here is a sample `index` file to run a node server:
```js
export const files = {
  "index.js": {
    file: {
      contents: `
import express from 'express';
const app = express();
const port = 3111;

app.get('/', (req, res) => {
  res.send('Welcome to a WebContainers app!!!! 🥳');
});

app.listen(port, () => {
  console.log(\`App is live at http://localhost:\${port}\`);
});`,
    },
  },
};
```
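For context, here is roughly how such a file tree gets booted, mounted, and wired to a terminal, sketched from the public `@webcontainer/api` and `@xterm` APIs. The `terminal` element id and the exact wiring are my assumptions, not the repo's code:

```ts
import { WebContainer } from "@webcontainer/api";
import { Terminal } from "@xterm/xterm";
import { files } from "./files";

// Boot a single container instance and mount our file tree
const webContainer = await WebContainer.boot();
await webContainer.mount(files);

// Attach an xterm terminal to the page (element id is an assumption)
const terminal = new Terminal({ convertEol: true });
terminal.open(document.getElementById("terminal")!);

// Install dependencies, streaming the process output into the terminal
const install = await webContainer.spawn("npm", ["install"]);
install.output.pipeTo(
  new WritableStream({
    write(data) {
      terminal.write(data);
    },
  })
);
await install.exit;

// Start the dev server; the "server-ready" event exposes the preview URL
await webContainer.spawn("npm", ["run", "dev"]);
webContainer.on("server-ready", (port, url) => {
  console.log(`App previewable at ${url}`);
});
```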
This `files` object is what gets mounted into the container. I have used a `vite` based skeleton to compile and render the content returned by the LLM; `files.js` in my case has certain predefined files that are needed for a `vite` app to bootstrap and run. Once the LLM responds, we append the generated content to the `files` object and create the required textarea elements to render them.

```ts
// sample code
// loop over the ids extracted from the llm response
ids.forEach((id) => {
  // find the object from the response to extract content
  const fileData = filesNeeded.filter((d) =>
    d.name.toLocaleLowerCase().includes(id)
  );

  // append the content to the files object under the src directory
  files["src"]["directory"][fileData[0].name] = {
    file: {
      contents:
        id == "main"
          ? `import "./main.css";\n${fileData[0].content}`
          : fileData[0].content,
    },
  };

  if (id !== "main") {
    const textareaEl = document.getElementById(id) as HTMLTextAreaElement;

    // load the content into the textarea
    textareaEl.value = `// ${id}\n${fileData[0].content}`;

    // sync edits back into the container on every input event
    const filePath = `src/${fileData[0].name}`;
    textareaEl.addEventListener("input", (e: Event) => {
      writeIndexJS(
        webContainer,
        filePath,
        (e.currentTarget as HTMLTextAreaElement).value
      );
    });
  }
});
```
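`writeIndexJS` is not shown in the snippet above; a minimal version, assuming it simply writes through the container's `fs` API (which lets the `vite` dev server pick up the change), could be:

```ts
import { WebContainer } from "@webcontainer/api";

// Assumed helper body: persist the edited content into the container's
// file system so the running dev server sees the update
async function writeIndexJS(
  webContainer: WebContainer,
  filePath: string,
  content: string
) {
  await webContainer.fs.writeFile(filePath, content);
}
```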
_Note_: please update the OpenAI key in the `.env.local` file.
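Assuming the standard environment variable that LangChain's `ChatOpenAI` reads by default:

```
# .env.local (variable name assumed; check the repo's usage)
OPENAI_API_KEY=sk-...
```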
- `npm install` - this will install the required deps.
- `npm run dev` - this will start the local dev server.