Use the Mem0 AI SDK Provider with Vercel AI SDK for persistent memory in conversational AI applications.
The Mem0 AI SDK Provider is a library developed by Mem0 that integrates with the Vercel AI SDK, adding persistent memory to your conversational AI applications.
For standalone features, such as addMemories, retrieveMemories, and getMemories, you must either set MEM0_API_KEY as an environment variable or pass it directly in the function call.
getMemories returns raw memories as an array of objects, while retrieveMemories returns a formatted string: a system prompt with the retrieved memories embedded, ready to pass as the system message.
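As a minimal sketch, the standalone helpers can be used like this (assumes MEM0_API_KEY is set in the environment; the messages, user ID, and the per-call `mem0ApiKey` config key are illustrative):

```typescript
import { addMemories, retrieveMemories, getMemories } from "@mem0/vercel-ai-provider";

// Store new memories from a conversation turn
const messages = [
  { role: "user", content: [{ type: "text", text: "I prefer electric cars." }] },
];
await addMemories(messages, { user_id: "borat" });

// getMemories: raw memories as an array of objects
const raw = await getMemories("What cars do I like?", { user_id: "borat" });

// retrieveMemories: a system-prompt string with the memories embedded
const system = await retrieveMemories("What cars do I like?", { user_id: "borat" });

// Instead of the environment variable, the key can be passed in the call:
// await addMemories(messages, { user_id: "borat", mem0ApiKey: "m0-xxx" });
```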
```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-4-turbo", { user_id: "borat" }),
  prompt: "Suggest me a good car to buy!",
});
```
```typescript
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";
import { retrieveMemories } from "@mem0/vercel-ai-provider";

const prompt = "Suggest me a good car to buy.";
const memories = await retrieveMemories(prompt, { user_id: "borat" });

const { text } = await generateText({
  model: openai("gpt-4-turbo"),
  prompt: prompt,
  system: memories,
});
```
```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { text } = await generateText({
  model: mem0("gpt-4-turbo", { user_id: "borat" }),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Suggest me a good car to buy." },
        { type: "text", text: "Why is it better than the other cars for me?" },
      ],
    },
  ],
});
```
```typescript
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";

const mem0 = createMem0();

const { textStream } = streamText({
  model: mem0("gpt-4-turbo", { user_id: "borat" }),
  prompt:
    "Suggest me a good car to buy! Why is it better than the other cars for me? Give options for every price range.",
});

for await (const textPart of textStream) {
  process.stdout.write(textPart);
}
```
```typescript
import { generateText, tool } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";
import { z } from "zod";

const mem0 = createMem0({
  provider: "anthropic",
  apiKey: "anthropic-api-key",
  mem0Config: {
    // Global user ID applied to every call
    user_id: "borat",
  },
});

const prompt = "What is the temperature in the city that I live in?";

const result = await generateText({
  model: mem0("claude-3-5-sonnet-20240620"),
  tools: {
    weather: tool({
      description: "Get the weather in a location",
      parameters: z.object({
        location: z.string().describe("The location to get the weather for"),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  prompt: prompt,
});

console.log(result);
```
The Mem0 AI SDK supports file processing with memory context. Here’s an example of analyzing a PDF file:
```typescript
import { streamText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";
import { readFileSync } from "fs";
import { join } from "path";

const mem0 = createMem0({
  provider: "google",
  mem0ApiKey: "m0-xxx",
  config: { apiKey: "google-api-key" },
  mem0Config: {
    user_id: "alice",
  },
});

async function main() {
  // Read the PDF file
  const filePath = join(process.cwd(), "my_pdf.pdf");
  const fileBuffer = readFileSync(filePath);

  // Encode the file contents as a Base64 data URL
  const base64Data = fileBuffer.toString("base64");
  const fileDataUrl = `data:application/pdf;base64,${base64Data}`;

  const { textStream } = streamText({
    model: mem0("gemini-2.5-flash"),
    messages: [
      {
        role: "user",
        content: [
          {
            type: "text",
            text: "Analyze the following PDF and generate a summary.",
          },
          {
            type: "file",
            data: fileDataUrl,
            mediaType: "application/pdf",
          },
        ],
      },
    ],
  });

  for await (const textPart of textStream) {
    process.stdout.write(textPart);
  }
}

main();
```
Note: File support is available with providers that support multimodal capabilities like Google’s Gemini models. The example shows how to process PDF files, but you can also work with images, text files, and other supported formats.
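For instance, an image can be passed much like the PDF above. This is a hedged sketch: the image part follows the Vercel AI SDK multimodal message format, and the file name, model, and user ID are placeholders:

```typescript
import { generateText } from "ai";
import { createMem0 } from "@mem0/vercel-ai-provider";
import { readFileSync } from "fs";

const mem0 = createMem0({
  provider: "google",
  mem0Config: { user_id: "alice" },
});

// "photo.png" is a placeholder path for any locally stored image
const imageBuffer = readFileSync("photo.png");

const { text } = await generateText({
  model: mem0("gemini-2.5-flash"),
  messages: [
    {
      role: "user",
      content: [
        { type: "text", text: "Describe this image." },
        // Image parts accept a Buffer, Uint8Array, Base64 string, or URL
        { type: "image", image: imageBuffer, mediaType: "image/png" },
      ],
    },
  ],
});

console.log(text);
```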