
Agent API Introduction

The Jean Memory Agent API provides a robust, isolated, and easy-to-use memory layer for your AI applications. It's designed for production use cases where multiple AI agents need to collaborate by sharing a common context.

What can you do with it?

  • Give an agent swarm a shared "scratchpad" for collaboration.
  • Persist logs and findings from autonomous agents.
  • Isolate memory between different tasks using metadata tags.
  • Build complex workflows where one agent's output is another's input.

Authentication

All endpoints are protected. You need a Bearer Token for your user account and an X-Client-Name header to scope memories to a specific application or agent swarm.

```http
Authorization: Bearer <YOUR_JEAN_API_TOKEN>
X-Client-Name: <your_client_name>
```

API Endpoints

Add Tagged Memory

Adds a new memory with associated metadata tags.

POST /agent/v1/memory/add_tagged

Example Request:

```bash
curl -X POST https://api.jeanmemory.com/agent/v1/memory/add_tagged \
  -H "Authorization: Bearer <YOUR_JEAN_API_TOKEN>" \
  -H "X-Client-Name: research_swarm_alpha" \
  -H "Content-Type: application/json" \
  -d '{
        "text": "Fact: Mars is also known as the Red Planet.",
        "metadata": {"task_id": "mars_research_101"}
      }'
```

Search by Tags

Searches for memories matching all key-value pairs in the filter.

POST /agent/v1/memory/search_by_tags

Example Request:

```bash
curl -X POST https://api.jeanmemory.com/agent/v1/memory/search_by_tags \
  -H "Authorization: Bearer <YOUR_JEAN_API_TOKEN>" \
  -H "X-Client-Name: research_swarm_alpha" \
  -H "Content-Type: application/json" \
  -d '{"filter": {"task_id": "mars_research_101"}}'
```
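If you'd rather call the API from Python without the SDK, the same two requests look roughly like this. This is a minimal sketch using the `requests` library; it assumes `JEAN_API_TOKEN` is set in your environment (the SDK below reads the same variable), and the response shape is whatever the endpoints return:

```python
import os

import requests

BASE_URL = "https://api.jeanmemory.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['JEAN_API_TOKEN']}",
    "X-Client-Name": "research_swarm_alpha",
    "Content-Type": "application/json",
}

# Add a tagged memory
resp = requests.post(
    f"{BASE_URL}/agent/v1/memory/add_tagged",
    headers=HEADERS,
    json={
        "text": "Fact: Mars is also known as the Red Planet.",
        "metadata": {"task_id": "mars_research_101"},
    },
)
resp.raise_for_status()

# Retrieve everything tagged with that task_id
resp = requests.post(
    f"{BASE_URL}/agent/v1/memory/search_by_tags",
    headers=HEADERS,
    json={"filter": {"task_id": "mars_research_101"}},
)
resp.raise_for_status()
print(resp.json())
```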

Python SDK

For easier integration, we provide a Python client. It's located in the openmemory/sdk directory of the repository.

Note on Installation

You don't need to install the SDK from a package manager; simply copy the openmemory/sdk directory into your project or add it to your Python path, as shown below.
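For example, if you've cloned the repository alongside your project, one way to make the SDK importable is to put the checkout on sys.path. The path below is illustrative; adjust it to wherever your copy lives:

```python
import sys

# Illustrative path: the directory added to sys.path must be the one
# that contains the openmemory/ package (i.e., the repository root).
sys.path.insert(0, "/path/to/jean-memory-repo")

from openmemory.sdk.client import JeanMemoryClient  # now importable
```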

Example Usage:

```python
from openmemory.sdk.client import JeanMemoryClient

# Reads JEAN_API_TOKEN from environment variables
client = JeanMemoryClient()

# Add a memory, scoped to your app
client.add_tagged_memory(
    text="This is a finding from our research agent.",
    metadata={"task_id": "koii_task_123"},
    client_name="koii_swarm_app",
)

# Retrieve memories for that task
memories = client.search_by_tags(
    filters={"task_id": "koii_task_123"},
    client_name="koii_swarm_app",
)
print(memories)
```

Example: Agent Collaboration

This example demonstrates a common pattern where a Researcher agent gathers information, and a Summarizer agent processes it. Both agents use the same task_id to share context.

1. Researcher Agent

This agent discovers facts on a topic and stores them as individual memories tagged with the task ID.

```python
from openmemory.sdk.client import JeanMemoryClient


def research_agent(task_id: str, topic: str):
    """Agent 1: Discovers facts and adds them to memory."""
    print(f"--- [Research Agent] discovering facts about '{topic}' ---")

    # In a real scenario, this would come from a web search or document analysis
    facts = [
        f"{topic} are a type of flowering plant in the nightshade family Solanaceae.",
        "The tomato is the edible berry of the plant Solanum lycopersicum.",
        "Tomatoes are a significant source of umami flavor.",
    ]

    client = JeanMemoryClient()
    for i, fact in enumerate(facts):
        metadata = {"task_id": task_id, "type": "fact", "step": i + 1}
        client.add_tagged_memory(text=fact, metadata=metadata, client_name="collaboration_app")
        print(f"  - Added fact #{i + 1}")
    print("--- ✅ Research complete ---")
```

2. Summarizer Agent

After the researcher is done, this agent retrieves all the facts for the task, uses an LLM to create a summary, and stores the final result back into memory.

```python
import os
import time

from openai import OpenAI
from openmemory.sdk.client import JeanMemoryClient


def summarizer_agent(task_id: str, topic: str):
    """Agent 2: Finds facts, uses an LLM to summarize, and stores the summary."""
    print(f"--- [Summarizer Agent] creating a summary for '{topic}' ---")

    client = JeanMemoryClient()
    llm_client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY"))

    # Search for all facts related to this specific task
    time.sleep(1)  # Give a moment for the DB to be consistent
    facts = client.search_by_tags(
        filters={"task_id": task_id, "type": "fact"},
        client_name="collaboration_app",
    )

    if not facts:
        print("--- ❌ ERROR: No facts found to summarize. ---")
        return

    # Prepare context for the LLM
    fact_list = "\n".join([f"- {mem.get('content', '')}" for mem in facts])
    prompt = (
        f"Please synthesize the following facts about {topic} "
        f"into a single, concise paragraph:\n\n{fact_list}"
    )

    # Generate summary
    response = llm_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    summary = response.choices[0].message.content

    # Store the final product
    metadata = {"task_id": task_id, "type": "summary"}
    client.add_tagged_memory(text=summary, metadata=metadata, client_name="collaboration_app")
    print("--- ✅ Summary complete and stored ---")
```
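3. Running the Workflow

To tie it together, a small driver just calls both agents with the same task_id. A sketch, assuming the two functions above are importable and JEAN_API_TOKEN and OPENAI_API_KEY are set in your environment:

```python
import uuid

# A shared task_id is the only coordination the two agents need:
# the researcher tags facts with it, and the summarizer searches on it.
task_id = f"tomato_research_{uuid.uuid4().hex[:8]}"

research_agent(task_id=task_id, topic="Tomatoes")
summarizer_agent(task_id=task_id, topic="Tomatoes")
```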