Self-learning loops that make your AI agents smarter over time. Track runs, extract patterns, inject learned rules. 5 lines of code.
Every AI agent stack has memory. None of them learn. AgentLoops fills the gap.
Add self-learning to any agent in under a minute.
```python
from agentloops import AgentLoops

loops = AgentLoops("sales-outreach")

# Track every agent run
loops.track(input=task, output=result, outcome="meeting_booked")

# Agent reflects on what's working
reflection = loops.reflect()

# Inject learned rules into your prompt
enhanced = loops.enhance_prompt(base_prompt)

# Evolve conventions over time
loops.conventions.evolve()

# Forget stale patterns
loops.forget(max_age_days=21)
```
A real example: a content agent that learns which posts go viral.
Your content agent suggests TikTok ideas. You post them. You track the outcome.
```python
loops.track(input="suggest tiktok", output="tutorial: how to use Claude", outcome="130 views")
loops.track(input="suggest tiktok", output="storytime: I quit my job for AI", outcome="47,000 views")
```

After 10 runs, the agent looks at what worked and what didn't. It extracts rules: not opinions, evidence.
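Conceptually, this reflection step is outcome aggregation. Here is a minimal plain-Python sketch of the idea (an illustration, not AgentLoops' actual internals; the pattern key and rule wording are assumptions): group tracked runs by output format, compare average outcomes, and emit an IF/THEN rule favoring the winner.

```python
import re
from collections import defaultdict

def parse_views(outcome: str) -> int:
    """Turn an outcome string like '47,000 views' into a number."""
    return int(re.sub(r"[^\d]", "", outcome))

def extract_rules(runs: list) -> list:
    """Group runs by a coarse pattern (the prefix before ':') and
    emit an IF/THEN rule favoring the best-performing pattern."""
    by_pattern = defaultdict(list)
    for run in runs:
        pattern = run["output"].split(":")[0]  # e.g. "tutorial", "storytime"
        by_pattern[pattern].append(parse_views(run["outcome"]))
    # Rank patterns by average outcome, best first
    ranked = sorted(by_pattern.items(),
                    key=lambda kv: sum(kv[1]) / len(kv[1]), reverse=True)
    best, worst = ranked[0][0], ranked[-1][0]
    return [f"IF suggesting content THEN prefer '{best}' formats over '{worst}' formats"]

runs = [
    {"output": "tutorial: how to use Claude", "outcome": "130 views"},
    {"output": "storytime: I quit my job for AI", "outcome": "47,000 views"},
]
print(extract_rules(runs))
# -> ["IF suggesting content THEN prefer 'storytime' formats over 'tutorial' formats"]
```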
Next time the agent runs, enhance_prompt() adds the learned rules. The agent stops suggesting tutorials and starts leading with emotional hooks — without you rewriting the prompt.
```python
enhanced = loops.enhance_prompt("You are a content strategist...")
```

After 50 runs, stale rules get pruned. Maybe tutorials start working again because the audience shifted. The old rule dies, a new one replaces it. The agent adapts, forever.
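The enhancement step itself is conceptually simple: append the learned rules to the base prompt as an explicit section. A minimal sketch, assuming a plain list of rule strings (the section heading and formatting here are illustrative, not AgentLoops' real output):

```python
def enhance_prompt(base_prompt: str, rules: list) -> str:
    """Append learned rules to the base prompt as an explicit section."""
    if not rules:
        return base_prompt
    rule_lines = "\n".join(f"- {r}" for r in rules)
    return f"{base_prompt}\n\nLearned rules (from past runs):\n{rule_lines}"

rules = ["Lead with an emotional hook, not a tutorial format"]
print(enhance_prompt("You are a content strategist...", rules))
```

Because the rules live outside the base prompt, you never hand-edit the prompt itself; the agent's behavior shifts as the rule list shifts.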
Memory and learning are not the same thing. Your agent needs both.
Memory without learning means your agent makes the same mistakes with every new user. Learning without memory means it can't personalize. Together, you get an agent that remembers and gets smarter, in ~1,000 tokens total: compact, complementary, powerful.
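One way to picture "compact and complementary" is both stores flattened into one small prompt context, trimmed to a token budget. A hypothetical sketch (the 4-characters-per-token estimate is a rough heuristic, and the `[memory]`/`[rule]` tags are assumptions for illustration):

```python
def build_context(memories: list, rules: list, token_budget: int = 1000) -> str:
    """Combine memories (personalization) and rules (learning) into one
    context block, dropping the oldest entries once the budget is hit."""
    lines = [f"[memory] {m}" for m in memories] + [f"[rule] {r}" for r in rules]
    kept, used = [], 0
    for line in reversed(lines):        # newest entries first
        cost = max(1, len(line) // 4)   # crude ~4 chars/token estimate
        if used + cost > token_budget:
            break
        kept.append(line)
        used += cost
    return "\n".join(reversed(kept))

ctx = build_context(["user prefers short emails"],
                    ["IF drafting THEN lead with a question"])
print(ctx)
```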
Inspired by Reflexion, cognitive memory architectures, and months of production use.
Agent evaluates its own output and writes patterns to conventions. Learns from every single run.
Detects performance anomalies and triggers immediate follow-up analysis to understand what changed.
Scores outputs on configurable criteria before they ship. Catches regressions before your users do.
Extracts prescriptive IF/THEN rules from performance data. Not opinions -- evidence-backed patterns.
Compares agent predictions against actual outcomes. Validates which rules actually move the needle.
Detects conflicting learned rules and resolves them automatically. No more contradictory conventions.
Prunes stale patterns that no longer apply. Keeps memory lean and relevant with time-decay + importance weighting.
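Time-decay plus importance weighting can be sketched as a single keep-score: each pattern's importance decays exponentially with age, and anything below a threshold is pruned. The half-life, threshold, and field names below are illustrative assumptions, not AgentLoops' actual weighting.

```python
import math

def keep_score(age_days: float, importance: float,
               half_life_days: float = 21.0) -> float:
    """Exponential time decay weighted by importance (0..1).
    At age == half_life_days, the score is half the importance."""
    decay = math.exp(-age_days * math.log(2) / half_life_days)
    return importance * decay

def prune(patterns: list, threshold: float = 0.1) -> list:
    """Keep only patterns whose decayed importance clears the threshold."""
    return [p for p in patterns
            if keep_score(p["age_days"], p["importance"]) >= threshold]

patterns = [
    {"rule": "lead with emotional hooks", "age_days": 3, "importance": 0.9},
    {"rule": "post tutorials on Fridays", "age_days": 90, "importance": 0.4},
]
print(prune(patterns))  # the 90-day-old, low-importance rule is dropped
```

The design choice here is that nothing is deleted outright on a fixed schedule: a highly important pattern survives longer than a marginal one of the same age.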
If your agent runs more than once, it should be learning.
Learn which email patterns book meetings
Learn which responses resolve tickets fastest
Discover which formats drive the most engagement
Learn which patterns produce fewer bugs
Learn which sources yield the best insights
Learn from market pattern outcomes
Learn which incidents need which runbooks
Adapt teaching style to what works per student
Learn which clauses flag real risks vs noise
Improve triage accuracy from patient interaction data
Learn guest preferences, upsell timing, and escalation patterns
| Feature | AgentLoops | Mem0 | Letta | DIY |
|---|---|---|---|---|
| Self-reflection | Yes | -- | -- | Manual |
| Automatic rule extraction | Yes | -- | -- | Manual |
| Spike detection | Yes | -- | -- | Manual |
| Contradiction resolution | Yes | -- | -- | -- |
| Selective forgetting | Yes | -- | Partial | -- |
| Prompt enhancement | Yes | -- | -- | Manual |
| Convention evolution | Yes | -- | -- | -- |
| Framework-agnostic | Yes | Yes | No | Yes |
| Lines of code to add | ~5 | ~10 | ~50 | ~500 |
| Focus | Learning | Storage | Stateful agents | -- |
`pip install agentloops`. Five lines of code. Your agent learns from every run.