Open Source

Your agents have memory. Now give them a brain.

Self-learning loops that make your AI agents smarter over time. Track runs, extract patterns, inject learned rules. 5 lines of code.

Star on GitHub

The Missing Layer

Every AI agent stack has memory. None of them learn. AgentLoops fills the gap.

Frameworks (Build): LangChain, CrewAI
Observability (Observe): LangSmith, Arize
Memory (Remember): Mem0, Letta
Intelligence (Learn): AgentLoops
Evaluation (Evaluate): Braintrust, RAGAS

Five Lines. Infinite Learning.

Add self-learning to any agent in under a minute.

Python
from agentloops import AgentLoops

loops = AgentLoops("sales-outreach")

# Track every agent run
loops.track(input=task, output=result, outcome="meeting_booked")

# Agent reflects on what's working
reflection = loops.reflect()

# Inject learned rules into your prompt
enhanced = loops.enhance_prompt(base_prompt)

# Evolve conventions over time
loops.conventions.evolve()

# Forget stale patterns
loops.forget(max_age_days=21)

How Self-Learning Actually Works

A real example: a content agent that learns which posts go viral.

1. Track every run

Your content agent suggests TikTok ideas. You post them. You track the outcome.

loops.track(input="suggest tiktok", output="tutorial: how to use Claude", outcome="130 views")
loops.track(input="suggest tiktok", output="storytime: I quit my job for AI", outcome="47,000 views")
2. Agent reflects on patterns

After 10 runs, the agent looks at what worked and what didn't, and extracts rules backed by evidence, not opinion.

IF emotional hook + personal story THEN expect 47K+ views (confidence: 0.85)
IF tutorial walkthrough THEN expect ~130 views (confidence: 0.72)
IF trending topic + hot take within 4 hours THEN expect 10K+ views (confidence: 0.68)
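The reflection step can be pictured as grouping tracked runs by format and turning outcome statistics into IF/THEN rules. A toy sketch of the idea, not the actual AgentLoops internals: the `runs` data, the `extract_rules` helper, and the confidence heuristic are all illustrative.

```python
from collections import defaultdict

def extract_rules(runs):
    """Group tracked runs by format tag and emit IF/THEN rules
    with a naive confidence score based on sample size."""
    by_format = defaultdict(list)
    for run in runs:
        by_format[run["format"]].append(run["views"])

    rules = []
    for fmt, views in by_format.items():
        avg = sum(views) / len(views)
        # More samples -> more confidence, capped below 1.0
        confidence = min(0.5 + 0.1 * len(views), 0.95)
        rules.append(f"IF {fmt} THEN expect ~{avg:,.0f} views (confidence: {confidence:.2f})")
    return rules

runs = [
    {"format": "tutorial", "views": 130},
    {"format": "personal story", "views": 47_000},
    {"format": "personal story", "views": 32_000},
]
for rule in extract_rules(runs):
    print(rule)
```

The point is that rules come from outcome data the agent actually observed, so each one carries a confidence you can threshold on later.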
3. Rules get injected into prompts

Next time the agent runs, enhance_prompt() adds the learned rules. The agent stops suggesting tutorials and starts leading with emotional hooks — without you rewriting the prompt.

enhanced = loops.enhance_prompt("You are a content strategist...")
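Under the hood, prompt enhancement can be as simple as appending the highest-confidence rules to the system prompt. A minimal sketch of that idea, assuming rules stored as text plus a confidence score; the real `enhance_prompt()` internals may differ.

```python
def enhance_prompt(base_prompt, rules, max_rules=3):
    """Append the top learned rules to a base prompt,
    highest confidence first."""
    top = sorted(rules, key=lambda r: r["confidence"], reverse=True)[:max_rules]
    lines = [f"- {r['text']} (confidence: {r['confidence']:.2f})" for r in top]
    return base_prompt + "\n\nLearned rules from past runs:\n" + "\n".join(lines)

rules = [
    {"text": "IF emotional hook + personal story THEN expect 47K+ views", "confidence": 0.85},
    {"text": "IF tutorial walkthrough THEN expect ~130 views", "confidence": 0.72},
]
print(enhance_prompt("You are a content strategist...", rules))
```

Capping at a few top rules keeps the injected context small, which is why the learned layer stays cheap in tokens.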
4. Rules evolve or die

After 50 runs, stale rules get pruned. Maybe tutorials start working again because the audience shifted: the old rule dies, a new one replaces it. The agent keeps adapting.

Memory Remembers. Learning Improves.

They're not the same thing. Your agent needs both.

Memory (Mem0, Letta, Zep) remembers facts about this user: "John works at Stripe, prefers dark mode, asked about billing API last Tuesday." That's personalization.

Learning (AgentLoops) learns patterns across all users: "IF user asks about billing THEN lead with code examples, not docs (72% faster resolution)." That's behavioral improvement.

Memory without learning means your agent makes the same mistakes on every new user. Learning without memory means it can't personalize. Together you get an agent that both remembers and gets smarter, in roughly 1,000 tokens of combined context. Compact, complementary, powerful.
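The two layers compose naturally: per-user facts from a memory store plus cross-user rules from the learning layer, assembled into one compact context block. A hedged sketch with stand-in lists, not the actual Mem0 or AgentLoops APIs:

```python
def build_context(user_facts, learned_rules):
    """Combine per-user memory with cross-user learned rules
    into a single prompt context block."""
    facts = "\n".join(f"- {f}" for f in user_facts)
    rules = "\n".join(f"- {r}" for r in learned_rules)
    return (
        "What we know about this user:\n" + facts
        + "\n\nWhat works across all users:\n" + rules
    )

memory = ["John works at Stripe", "asked about billing API last Tuesday"]
learning = ["IF user asks about billing THEN lead with code examples, not docs"]
print(build_context(memory, learning))
```

Memory personalizes the answer; learning shapes how the answer is delivered.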

Seven Learning Mechanisms

Inspired by Reflexion, cognitive memory architectures, and months of production use.

1. Self-Reflection
Agent evaluates its own output and writes patterns to conventions. Learns from every single run.
Runs: After every run

2. Spike Detection
Detects performance anomalies and triggers immediate follow-up analysis to understand what changed.
Runs: Continuous

3. Quality Gate
Scores outputs on configurable criteria before they ship. Catches regressions before your users do.
Runs: Before output

4. Decision Rules
Extracts prescriptive IF/THEN rules from performance data. Not opinions; evidence-backed patterns.
Runs: Weekly

5. Cross-Evaluation
Compares agent predictions against actual outcomes. Validates which rules actually move the needle.
Runs: Weekly

6. Contradiction Resolution
Detects conflicting learned rules and resolves them automatically. No more contradictory conventions.
Runs: Weekly

7. Selective Forgetting
Prunes stale patterns that no longer apply. Keeps memory lean and relevant with time-decay + importance weighting.
Runs: Daily
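Selective forgetting combines the two weightings named above: a pattern survives only while its importance, decayed by age, stays above a threshold. A toy sketch of the mechanic; the exponential half-life curve and the 0.3 threshold here are illustrative, not the shipped defaults.

```python
import math

def prune_stale(patterns, max_age_days=21, threshold=0.3):
    """Keep patterns whose importance, decayed by age, still clears
    the threshold. Decay is exponential with half-life max_age_days."""
    kept = []
    for p in patterns:
        decay = math.exp(-math.log(2) * p["age_days"] / max_age_days)
        if p["importance"] * decay >= threshold:
            kept.append(p)
    return kept

patterns = [
    {"rule": "lead with emotional hooks", "importance": 0.9, "age_days": 5},
    {"rule": "post tutorials on Fridays", "importance": 0.5, "age_days": 60},
]
print([p["rule"] for p in prune_stale(patterns)])
```

Important recent rules persist; weak old ones fade out, which is what keeps the injected rule set lean instead of growing forever.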

Works Everywhere Agents Run

If your agent runs more than once, it should be learning.

📈 Sales Outreach: agents learn which email patterns book meetings. +40% reply rate in 30 days.

🎧 Customer Support: learn which responses resolve tickets fastest. -35% resolution time.

📝 Content Creation: discover which formats drive the most engagement. +2.5x engagement rate.

💻 Code Generation: learn which patterns produce fewer bugs. -50% bug rate.

🔍 Research Agents: learn which sources yield the best insights. +60% insight quality.

💰 Trading & Finance: agents learn from market pattern outcomes. +25% signal accuracy.

🚀 DevOps & SRE: learn which incidents need which runbooks. -45% MTTR.

🎓 EdTech & Tutoring: adapt teaching style to what works per student. +30% learning outcomes.

⚖️ Legal & Compliance: learn which clauses flag real risks vs noise. -70% false positives.

🩺 Healthcare: improve triage accuracy from patient interaction data. +35% triage accuracy.

🏨 Help Desk & Hospitality: learn guest preferences, upsell timing, and escalation patterns. +3x upsell conversion.

How AgentLoops Compares

Feature | AgentLoops | Mem0 | Letta | DIY
Self-reflection | Yes | -- | -- | Manual
Automatic rule extraction | Yes | -- | -- | Manual
Spike detection | Yes | -- | -- | Manual
Contradiction resolution | Yes | -- | -- | --
Selective forgetting | Yes | -- | Partial | --
Prompt enhancement | Yes | -- | -- | Manual
Convention evolution | Yes | -- | -- | --
Framework-agnostic | Yes | Yes | No | Yes
Lines of code to add | ~5 | ~10 | ~50 | ~500
Focus | Learning | Storage | Stateful agents | --

Start Learning Now

pip install agentloops. Five lines of code. Your agent learns from every run.