# LangGraph API (In-Memory)
This package implements a local version of the LangGraph API for rapid development and testing. Build and iterate on LangGraph agents with a tight feedback loop. The server is backed by a predominantly in-memory data store that is persisted to local disk so state survives server restarts.
For production use, see the various [deployment options](https://langchain-ai.github.io/langgraph/concepts/deployment_options/) for the LangGraph API, which are backed by a production-grade database.
## Installation
Install the `langgraph-cli` package with the `inmem` extra. CLI version `0.1.55` or later is required.
```bash
pip install -U "langgraph-cli[inmem]"
```
## Quickstart
1. (Optional) Clone a starter template:

   ```bash
   langgraph new --template new-langgraph-project-python ./my-project
   cd my-project
   ```

   (Recommended) Use a virtual environment and install dependencies:

   ```bash
   python -m venv venv
   source venv/bin/activate
   python -m pip install -e .
   python -m pip install -U "langgraph-cli[inmem]"
   ```

2. Start the development server:

   ```shell
   langgraph dev --config ./langgraph.json
   ```

3. The server will launch and open a browser window with the graph UI. Interact with your graph or make code edits; the server automatically reloads on changes.
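The `--config` flag points at a `langgraph.json` file describing your app. A minimal sketch is shown below; the graph name `agent` and the module path `./src/agent.py:graph` are placeholders for your own project, and the `env` entry is optional:

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./src/agent.py:graph"
  },
  "env": ".env"
}
```

Each entry under `graphs` maps a graph name to a `path/to/module.py:variable` reference that the server imports on startup.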
## Usage
Start the development server:
```bash
langgraph dev
```
Your agent's state (threads, runs, assistants) persists in memory while the server is running, which is ideal for development and testing. Each run's state is tracked and can be inspected, making it easy to debug and improve your agent's behavior.
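Once the server is up, you can drive it over its HTTP API. The sketch below uses only the standard library and assumes the server is listening on the default `http://localhost:8000` with a graph registered as `agent` in your `langgraph.json`; the endpoint paths (`POST /threads`, `POST /threads/{thread_id}/runs/wait`) follow LangGraph API conventions, so verify them against your server before relying on them.

```python
# Minimal client sketch for the local LangGraph API (stdlib only).
# Assumptions: server on localhost:8000, a graph registered as "agent".
import json
import urllib.request

BASE = "http://localhost:8000"


def make_run_payload(assistant_id: str, message: str) -> dict:
    """Build the JSON body for a run that sends one user message."""
    return {
        "assistant_id": assistant_id,
        "input": {"messages": [{"role": "user", "content": message}]},
    }


def post(path: str, body: dict) -> dict:
    """POST a JSON body to the server and decode the JSON response."""
    req = urllib.request.Request(
        BASE + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Usage (requires the dev server to be running):
#   thread = post("/threads", {})
#   result = post(f"/threads/{thread['thread_id']}/runs/wait",
#                 make_run_payload("agent", "hello"))
```

You can also use the official `langgraph_sdk` client instead of raw HTTP; the raw form is shown here only to make the request shapes explicit.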
## How-To
### Attaching a debugger
Debug mode lets you attach your IDE's debugger to the LangGraph API server to set breakpoints and step through your code line-by-line.
1. Install `debugpy`:

   ```bash
   pip install debugpy
   ```

2. Start the server in debug mode:

   ```bash
   langgraph dev --debug-port 5678
   ```

3. Configure your IDE:

   - **VS Code**: Add this launch configuration:

     ```json
     {
       "name": "Attach to LangGraph",
       "type": "debugpy",
       "request": "attach",
       "connect": {
         "host": "0.0.0.0",
         "port": 5678
       }
     }
     ```

   - **PyCharm**: Use "Attach to Process" and select the `langgraph` process.

4. Set breakpoints in your graph code and start debugging.
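For reference, `--debug-port` presumably wires up `debugpy`'s standard listen/attach flow inside the server process. A standalone equivalent you can drop into any Python script looks roughly like this; the helper name and the `wait_for_attach` flag are illustrative, not part of the `langgraph` CLI:

```python
def enable_remote_debugging(port: int, wait_for_attach: bool = False) -> None:
    """Open a debugpy listener so an IDE can attach.

    Illustrative helper, not part of the langgraph CLI.
    Requires `pip install debugpy`.
    """
    import debugpy  # imported lazily so this module loads without debugpy

    debugpy.listen(("127.0.0.1", port))  # start the debug adapter on this port
    if wait_for_attach:
        debugpy.wait_for_client()  # block until the IDE connects


# Usage: call enable_remote_debugging(5678), then attach from VS Code/PyCharm.
```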
## CLI options
```bash
langgraph dev [OPTIONS]

Options:
  --debug-port INTEGER         Enable remote debugging on the specified port
  --no-browser                 Skip opening the browser on startup
  --n-jobs-per-worker INTEGER  Maximum concurrent jobs per worker process
  --config PATH                Custom configuration file path
  --no-reload                  Disable code hot reloading
  --port INTEGER               HTTP server port (default: 8000)
  --host TEXT                  HTTP server host (default: localhost)
```
## License
This project is licensed under the Elastic License 2.0 - see the [LICENSE](./LICENSE) file for details.