OpenAI
import { OpenAI, serviceContextFromDefaults } from "llamaindex";
const openaiLLM = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0, apiKey: "<YOUR_API_KEY>" });
const serviceContext = serviceContextFromDefaults({ llm: openaiLLM });
Alternatively, you can set the API key as an environment variable:
export OPENAI_API_KEY="<YOUR_API_KEY>"
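When OPENAI_API_KEY is set in the environment, you can omit the apiKey option and the client picks the key up automatically, as the Full Example below does:
const openaiLLM = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 });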
Load and index documents
For this example, we will use a single document. In a real-world scenario, you would have multiple documents to index.
const document = new Document({ text: essay, id_: "essay" });
const index = await VectorStoreIndex.fromDocuments([document], {
  serviceContext,
});
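To index several documents, pass them all to fromDocuments in a single call. A minimal sketch, assuming anotherEssay is a second text string like essay:
const otherDocument = new Document({ text: anotherEssay, id_: "another-essay" });
const index = await VectorStoreIndex.fromDocuments([document, otherDocument], {
  serviceContext,
});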
Query
const queryEngine = index.asQueryEngine();
const query = "What is the meaning of life?";
const results = await queryEngine.query({
  query,
});
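The result object carries the generated answer, which you can print via its response property (the Full Example below does the same):
console.log(results.response);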
Full Example
import {
  OpenAI,
  Document,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";
async function main() {
  // Create an instance of the LLM
  const openaiLLM = new OpenAI({ model: "gpt-3.5-turbo", temperature: 0 });

  // Create a service context
  const serviceContext = serviceContextFromDefaults({ llm: openaiLLM });

  // Placeholder text; in practice, load the content you want to index
  const essay = "Your document text goes here.";
  const document = new Document({ text: essay, id_: "essay" });

  // Load and index documents
  const index = await VectorStoreIndex.fromDocuments([document], {
    serviceContext,
  });

  // Get a retriever from the index
  const retriever = index.asRetriever();

  // Create a query engine that uses the retriever
  const queryEngine = index.asQueryEngine({
    retriever,
  });

  const query = "What is the meaning of life?";

  // Query
  const response = await queryEngine.query({
    query,
  });

  // Log the response
  console.log(response.response);
}

main().catch(console.error);
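With OPENAI_API_KEY exported, run the file with a TypeScript runner of your choice (for example tsx or ts-node) and it will print the model's answer to the query.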