# Dasein
**Universal memory for agentic AI.** Attach a brain to any LangChain/LangGraph agent in a single line.
Dasein learns from your agent's execution history and automatically injects learned rules to improve performance, reduce costs, and increase reliability across runs.
## Features
✨ **Zero-friction integration** - Wrap any LangChain or LangGraph agent in one line  
🧠 **Automatic learning** - Agents learn from successes and failures  
📊 **Performance tracking** - Built-in token usage, timing, and success metrics  
🔄 **Retry logic** - Intelligent retry with learned optimizations  
🔍 **Execution traces** - Detailed step-by-step visibility into agent behavior  
☁️ **Cloud-powered** - Distributed rule synthesis and storage
## Installation
```bash
pip install dasein-core
```
Or install from source:
```bash
git clone https://github.com/nickswami/dasein-core.git
cd dasein-core
pip install -e .
```
## 📓 Try It Now in Colab
<div align="center">
**🚀 Zero setup required! Try all three examples in your browser:**
<a href="https://colab.research.google.com/github/nickswami/dasein-core/blob/main/examples/dasein_examples.ipynb" target="_blank">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab" style="height: 60px;"/>
</a>
**Three complete examples with automatic learning:**
🗄️ **SQL Agent** • 🌐 **Browser Agent** • 🔍 **Deep Research**
*30-50% token reduction • Optimized navigation • 20-40% multi-agent savings*
</div>
---
## Quick Start
### Basic Usage
```python
from dasein import cognate
from langchain_google_genai import ChatGoogleGenerativeAI
from langchain.agents import create_sql_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit
# Create your agent as usual
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
toolkit = SQLDatabaseToolkit(db=your_database, llm=llm)
agent = create_sql_agent(llm=llm, toolkit=toolkit, agent_type="tool-calling")
# Wrap with Dasein - that's it!
agent = cognate(agent)
# Use exactly like the original
result = agent.run("Show me the top 5 customers by revenue")
```
### With Performance Tracking
```python
from dasein import cognate
# Enable automatic retry and performance comparison
agent = cognate(
    your_agent,
    retry=2,                    # Run twice to learn and improve
    performance_tracking=True   # Show before/after metrics
)

result = agent.run("your query")
# 🎯 Dasein automatically shows improvement metrics
```
### Advanced: Custom Optimization Weights
```python
from dasein import cognate
# Customize what Dasein optimizes for
agent = cognate(
    your_agent,
    weights={
        "w1": 2.0,  # Heavily favor successful rules
        "w2": 0.5,  # Less emphasis on turn count
        "w3": 1.0,  # Standard uncertainty penalty
        "w4": 3.0,  # Heavily optimize for token efficiency
        "w5": 0.1   # Minimal time emphasis
    }
)
```
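The inline comments above suggest each weight scales one cost or benefit term when Dasein ranks candidate rules. As an illustration only (the function, the statistic names, and the linear form below are assumptions, not Dasein's documented formula), a weighted score over per-rule statistics could look like:

```python
# Hypothetical sketch: how the w1-w5 weights *could* combine normalized
# per-rule statistics into a single selection score. Term names, signs, and
# the linear form are assumptions for illustration; Dasein's actual scoring
# is internal to its cloud services.
def score_rule(stats: dict, weights: dict) -> float:
    return (
        weights["w1"] * stats["success_rate"]   # reward rules that have worked
        - weights["w2"] * stats["avg_turns"]    # penalize extra agent turns
        - weights["w3"] * stats["uncertainty"]  # penalize unproven rules
        - weights["w4"] * stats["token_cost"]   # penalize token usage
        - weights["w5"] * stats["seconds"]      # penalize wall-clock time
    )

# Made-up, normalized statistics for one candidate rule
stats = {"success_rate": 0.9, "avg_turns": 0.3, "uncertainty": 0.2,
         "token_cost": 0.4, "seconds": 0.5}
weights = {"w1": 2.0, "w2": 0.5, "w3": 1.0, "w4": 3.0, "w5": 0.1}
score = score_rule(stats, weights)
```

Under this reading, raising `w4` (as in the example above) makes token-hungry rules score worse and fall out of the injected set.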
## Architecture
Dasein uses a **cloud-first architecture** for rule learning and synthesis:
```
┌─────────────────┐
│   Your Agent    │
│  (LangChain/    │
│   LangGraph)    │
└────────┬────────┘
         │
         ▼
┌─────────────────┐
│ Dasein Wrapper  │ ◄── cognate()
│ - Trace Capture │
│ - Rule Injection│
└────────┬────────┘
         │
    ┌────┴────┐
    ▼         ▼
┌────────┐ ┌────────┐
│Pre-Run │ │Post-Run│
│Service │ │Service │
│        │ │        │
│Recalls │ │Learns  │
│Rules   │ │Rules   │
└────────┘ └────────┘
```
### How It Works
1. **Pre-Run**: Dasein queries cloud services for relevant learned rules based on the task
2. **Execution**: Rules are injected into the agent's prompts/tools at optimal injection points
3. **Trace Capture**: Every LLM call, tool invocation, and decision is captured
4. **Post-Run**: Traces are sent to cloud services for rule synthesis and learning
5. **Next Run**: Improved rules are automatically available
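Conceptually, steps 2 and 3 follow a proxy pattern: the wrapper sits in front of the agent, prepends recalled rules to the input, and records each call for later learning. A minimal stand-alone sketch of that pattern (standard library only; the class and field names are illustrative, not Dasein's internals):

```python
import time

class TracingWrapper:
    """Illustrative proxy: injects rules into the input and records each call."""

    def __init__(self, agent, rules=None):
        self._agent = agent
        self._rules = rules or []  # rules a pre-run service might recall
        self.trace = []            # captured steps a post-run service might learn from

    def run(self, query):
        # Step 2: inject recalled rules ahead of the user's query
        prompt = "\n".join(self._rules + [query])
        start = time.monotonic()
        result = self._agent.run(prompt)
        # Step 3: capture the step for later rule synthesis
        self.trace.append({
            "query": query,
            "result": result,
            "seconds": time.monotonic() - start,
        })
        return result

class EchoAgent:
    """Stand-in for a real agent; just echoes its prompt."""
    def run(self, prompt):
        return f"answered: {prompt}"

wrapped = TracingWrapper(EchoAgent(), rules=["Prefer LIMIT 5 in SQL queries."])
wrapped.run("top customers")
```

The real wrapper keeps the agent's interface identical, which is why `cognate(agent)` can be a drop-in replacement.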
## API Reference
### Core Functions
#### `cognate(agent, weights=None, verbose=False, retry=0, performance_tracking=False, rule_trace=False)`
Wrap any LangChain/LangGraph agent with Dasein's learning capabilities.
**Parameters:**
- `agent` - LangChain or LangGraph agent instance
- `weights` (dict) - Custom optimization weights for rule selection (w1-w5)
- `verbose` (bool) - Enable detailed debug logging
- `retry` (int) - Number of retries with learning (0 = single run, 2 = run twice with improvement)
- `performance_tracking` (bool) - Show before/after performance metrics
- `rule_trace` (bool) - Show detailed rule application trace
**Returns:** Wrapped agent with identical interface to the original
#### `print_trace()`
Display the execution trace of the last agent run.
#### `get_trace()`
Retrieve the execution trace as a list of dictionaries.
**Returns:** `List[Dict]` - Trace steps with timestamps, tokens, and decisions
#### `clear_trace()`
Clear the current execution trace.
#### `inject_hint(hint: str)`
Manually inject a hint/rule for the next agent run.
**Parameters:**
- `hint` (str) - The hint text to inject
#### `reset_brain()`
Clear all local state and event storage.
## Supported Frameworks
- ✅ LangChain Agents (all agent types)
- ✅ LangGraph Agents (CompiledStateGraph)
- ✅ Custom agents implementing standard interfaces
## Examples
See the `examples/` directory for complete examples:
- **SQL Agent** - Learn query patterns for a Chinook database
- **Browser Agent** - Learn web scraping strategies
- **Research Agent** - Multi-agent research coordination
## Verbose Mode
For debugging, enable verbose logging:
```python
agent = cognate(your_agent, verbose=True)
```
This shows detailed information about:
- Rule retrieval from cloud services
- Rule injection points and content
- Trace capture steps
- Post-run learning triggers
## Requirements
- Python 3.8+
- LangChain 0.1.0+
- LangChain Community 0.1.0+
- LangChain Google GenAI 0.0.6+
See `pyproject.toml` for complete dependency list.
## Configuration
Dasein uses cloud services for rule synthesis and storage. Configure service endpoints via environment variables:
```bash
export DASEIN_PRE_RUN_URL="https://your-pre-run-service.com"
export DASEIN_POST_RUN_URL="https://your-post-run-service.com"
```
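The same endpoints can equally be set from Python, e.g. at the top of a notebook, before the wrapped agent first contacts the services (the URLs are placeholders, as above):

```python
import os

# Placeholder endpoints; substitute the URLs provided by the Dasein team.
os.environ["DASEIN_PRE_RUN_URL"] = "https://your-pre-run-service.com"
os.environ["DASEIN_POST_RUN_URL"] = "https://your-post-run-service.com"
```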
Contact the Dasein team for cloud service access.
## Performance
Dasein is designed for minimal overhead:
- **Pre-run**: ~100-200ms for rule retrieval
- **Runtime**: <1% overhead for trace capture
- **Post-run**: Async - doesn't block your code
The benefits far outweigh the costs:
- 🎯 30-50% token reduction on repeated tasks
- 🎯 Fewer failed runs through learned error handling
- 🎯 Faster execution with optimized tool usage
## Contributing
We welcome contributions! Please see `CONTRIBUTING.md` for guidelines.
## License
MIT License - see `LICENSE` file for details.
## Troubleshooting
### Common Issues in Colab/Jupyter
**Q: I see timeout warnings for `dasein-pre-run` and `dasein-post-run` services**
A: These warnings can appear on first connection while the cloud services wake up (cold start). The services are **fully public** and will work after a brief initialization period. Your agent will continue running and learning will activate automatically once the services respond.
**Q: I see dependency conflict warnings**
A: These are safe to ignore in Colab. The package will work correctly despite version mismatches with Colab's pre-installed packages.
---
## Support
- 🐛 Issues: [GitHub Issues](https://github.com/nickswami/dasein-core/issues)
## Citation
If you use Dasein in your research, please cite:
```bibtex
@software{dasein2025,
  title={Dasein: Universal Memory for Agentic AI},
  author={Dasein Team},
  year={2025},
  url={https://github.com/nickswami/dasein-core}
}
```
---
**Built with ❤️ for the agentic AI community**