Name | attach-dev
Version | 0.2.2
home_page | None
Summary | Identity & Memory side-car for every LLM engine and multi-agent framework
upload_time | 2025-07-14 22:31:43
maintainer | None
docs_url | None
author | None
requires_python | >=3.9
license | MIT License
Copyright (c) 2024 Hammad Tariq
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
keywords | agents, ai, auth, fastapi, llm, memory, sso
VCS | https://github.com/attach-dev/attach-gateway
bugtrack_url | None
requirements | fastapi, uvicorn, httpx, python-jose, tiktoken, pytest, pytest-asyncio, weaviate-client, black, isort, temporalio, langchain, langchain-core, langgraph
Travis-CI | No Travis.
coveralls test coverage | No coveralls.
# Attach Gateway
> **Identity & Memory side‑car** for every LLM engine and multi‑agent framework. Add OIDC / DID SSO, A2A hand‑off, and a pluggable memory bus (Weaviate today) – all with one process.
[PyPI](https://pypi.org/project/attach-dev/)
---
## Why it exists
LLM engines such as **Ollama** or **vLLM** ship with **zero auth**. Agent‑to‑agent protocols (Google **A2A**, MCP, OpenHands) assume a *Bearer token* is already present but don't tell you how to issue or validate it. Teams end up wiring ad‑hoc reverse proxies, leaking ports, and copy‑pasting JWT code.
**Attach Gateway** is that missing resource‑server:
* ✅ Verifies **OIDC / JWT** or **DID‑JWT**
* ✅ Stamps `X‑Attach‑User` + `X‑Attach‑Session` headers so every downstream agent/tool sees the same identity
* ✅ Implements `/a2a/tasks/send` + `/tasks/status` for Google A2A & OpenHands hand‑off
* ✅ Mirrors prompts & responses to a memory backend (Weaviate Docker container by default)
* ✅ Workflow traces (Temporal)
Run it next to any model server and get secure, shareable context in under 1 minute.
---
## 60‑second Quick‑start (local laptop)
### Option 1: Install from PyPI (Recommended)
```bash
# 0) prerequisites: Python 3.12, Ollama installed, Auth0 account or DID token
# Install the package
pip install attach-dev
# 1) start memory in Docker (background tab)
# Mac M1/M2 users: use manual Docker command (see examples/README.md)
docker run --rm -d -p 6666:8080 \
  -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \
  semitechnologies/weaviate:1.30.5
# 2) export your short‑lived token
export JWT="<paste Auth0 or DID token>"
export OIDC_ISSUER=https://YOUR_DOMAIN.auth0.com
export OIDC_AUD=ollama-local
export MEM_BACKEND=weaviate
export WEAVIATE_URL=http://127.0.0.1:6666
# 3) run gateway
attach-gateway --port 8080 &
# 4) make a protected Ollama call via the gateway
curl -H "Authorization: Bearer $JWT" \
-d '{"model":"tinyllama","prompt":"hello"}' \
http://localhost:8080/api/chat | jq .
```
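If you are scripting against the gateway rather than using curl, the same protected call works from Python. A minimal sketch using `httpx` (which the gateway itself depends on), assuming the gateway is on `localhost:8080` and simply proxies Ollama's `/api/chat`:

```python
# Sketch: the protected call from step 4, in Python instead of curl.
# Assumes JWT is exported as in step 2 and the gateway runs on :8080.
import os

import httpx

resp = httpx.post(
    "http://localhost:8080/api/chat",
    headers={"Authorization": f"Bearer {os.environ['JWT']}"},
    json={
        "model": "tinyllama",
        "messages": [{"role": "user", "content": "hello"}],
        "stream": False,  # ask Ollama for one JSON body instead of a stream
    },
    timeout=60.0,
)
resp.raise_for_status()
print(resp.headers.get("x-attach-session-id"))  # session header stamped by the gateway
print(resp.json())
```

If this returns 401, the token's issuer or audience most likely does not match `OIDC_ISSUER` / `OIDC_AUD`.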
### Option 2: Install from Source
```bash
# 0) prerequisites: Python 3.12, Ollama installed, Auth0 account or DID token
git clone https://github.com/attach-dev/attach-gateway.git && cd attach-gateway
python -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt
# 1) start memory in Docker (background tab)
python script/start_weaviate.py &
# 2) export your short‑lived token
export JWT="<paste Auth0 or DID token>"
export OIDC_ISSUER=https://YOUR_DOMAIN.auth0.com
export OIDC_AUD=ollama-local
export MEM_BACKEND=weaviate
export WEAVIATE_URL=http://127.0.0.1:6666
# 3) run gateway
uvicorn main:app --port 8080 &
# The gateway exposes your Auth0 credentials for the demo UI at
# `/auth/config`. The values are read from `AUTH0_DOMAIN`,
# `AUTH0_CLIENT` and `OIDC_AUD`.
# 4) make a protected Ollama call via the gateway
curl -H "Authorization: Bearer $JWT" \
-d '{"model":"tinyllama","messages":[{"role":"user","content":"hello"}]}' \
http://localhost:8080/api/chat | jq .
```
In another terminal, try the Temporal demo:
```bash
pip install temporalio # optional workflow engine
python examples/temporal_adapter/worker.py &
python examples/temporal_adapter/client.py
```
You should see a JSON response plus `X‑ATTACH‑Session‑Id` header – proof the pipeline works.
---
## Use in your project
1. Copy `.env.example` → `.env` and fill in OIDC + backend URLs (the sketch below lists the variables this README uses)
2. `pip install attach-dev python-dotenv`
3. `attach-gateway` (reads .env automatically)
→ See **[docs/configuration.md](docs/configuration.md)** for framework integration and **[examples/](examples/)** for code samples.
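The variables referenced in this README can live in that `.env`. A minimal sketch (names taken from the quick-start above; values are placeholders, not gateway internals) of loading them with `python-dotenv`:

```python
# Sketch: load gateway settings from .env with python-dotenv.
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory

required = ["OIDC_ISSUER", "OIDC_AUD", "MEM_BACKEND", "WEAVIATE_URL"]
missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"missing settings in .env: {', '.join(missing)}")

print("issuer:", os.getenv("OIDC_ISSUER"))
print("quota:", os.getenv("MAX_TOKENS_PER_MIN", "disabled"))  # optional, see Token quotas
```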
---
## Architecture (planner → coder hand‑off)
```mermaid
flowchart TD
%%────────────────────────────────
%% COMPONENTS
%%────────────────────────────────
subgraph Front-end
UI["Browser<br/> demo.html"]
end
subgraph Gateway
GW["Attach Gateway<br/> (OIDC SSO + A2A)"]
end
subgraph Agents
PL["Planner Agent<br/>FastAPI :8100"]
CD["Coder Agent<br/>FastAPI :8101"]
end
subgraph Memory
WV["Weaviate (Docker)\nclass MemoryEvent"]
end
subgraph Engine
OL["Ollama / vLLM<br/>:11434"]
end
%%────────────────────────────────
%% USER FLOW
%%────────────────────────────────
UI -- ① POST /a2a/tasks/send<br/>Bearer JWT, prompt --> GW
%%─ Planner hop
GW -- ② Proxy → planner<br/>(X-Attach-User, Session) --> PL
PL -- ③ Write plan doc --> WV
PL -- ④ /a2a/tasks/send\nbody:{mem_id} --> GW
%%─ Coder hop
GW -- ⑤ Proxy → coder --> CD
CD -- ⑥ GET plan by mem_id --> WV
CD -- ⑦ POST /api/chat\nprompt(plan) --> GW
GW -- ⑧ Proxy → Ollama --> OL
OL -- ⑨ JSON response --> GW
GW -- ⑩ Write response to Weaviate --> WV
GW -- ⑪ /a2a/tasks/status = done --> UI
```
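In code, the user-facing part of this flow is two authenticated HTTP calls. The sketch below covers steps ① and ⑪ only; the two endpoints come from this README, while the `prompt` / `task_id` field names and the status query parameter are illustrative assumptions, not the gateway's documented schema.

```python
# Hypothetical client for steps ① and ⑪ of the diagram above.
import os
import time

import httpx

GATEWAY = "http://localhost:8080"
HEADERS = {"Authorization": f"Bearer {os.environ['JWT']}"}

# ① submit the task to the planner via the gateway
send = httpx.post(
    f"{GATEWAY}/a2a/tasks/send",
    headers=HEADERS,
    json={"prompt": "Write Python to sort a list."},
)
send.raise_for_status()
task_id = send.json()["task_id"]  # assumed response field

# ⑪ poll until the gateway marks the task done
while True:
    status = httpx.get(
        f"{GATEWAY}/a2a/tasks/status",
        headers=HEADERS,
        params={"task_id": task_id},  # assumed parameter name
    )
    if status.json().get("status") == "done":
        break
    time.sleep(1)
print("task finished")
```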
**Key headers**
| Header | Meaning |
|--------|---------|
| `Authorization: Bearer <JWT>` | OIDC or DID token verified by the gateway |
| `X‑Attach‑User` | stable user ID (`auth0\|123` or `did:pkh:…`) |
| `X‑Attach‑Session` | deterministic hash (user + UA) for request tracing |
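Downstream services should treat these headers as the source of identity and reject traffic that lacks them. A sketch of a hypothetical FastAPI agent doing exactly that (not code from this repository):

```python
# Illustrative downstream agent that trusts the gateway's identity headers.
from typing import Optional

from fastapi import FastAPI, Header, HTTPException

app = FastAPI()

@app.post("/agent/task")
async def handle_task(
    x_attach_user: Optional[str] = Header(None),      # maps to X-Attach-User
    x_attach_session: Optional[str] = Header(None),   # maps to X-Attach-Session
):
    # Requests that bypass the gateway arrive without these headers.
    if not x_attach_user or not x_attach_session:
        raise HTTPException(status_code=401, detail="call me through Attach Gateway")
    return {"user": x_attach_user, "session": x_attach_session}
```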
---
## Live two‑agent demo
```bash
# pane 1 – memory (Docker)
python script/start_weaviate.py
# pane 2 – gateway
uvicorn main:app --port 8080
# pane 3 – planner agent
uvicorn examples.agents.planner:app --port 8100
# pane 4 – coder agent
uvicorn examples.agents.coder:app --port 8101
# pane 5 – static chat UI
cd examples/static && python -m http.server 9000
open http://localhost:9000/demo.html
```
Type a request like *"Write Python to sort a list."* The browser shows:
1. Planner message → logged in gateway, plan row appears in memory.
2. Coder reply → code response, second memory row, status `done`.
---
## Directory map
| Path | Purpose |
|------|---------|
| `auth/` | OIDC & DID‑JWT verifiers |
| `middleware/` | JWT middleware, session header, mirror trigger |
| `a2a/` | `/tasks/send` & `/tasks/status` routes |
| `mem/` | pluggable memory writers (`weaviate.py`, `sakana.py`) |
| `proxy/` | Engine-agnostic HTTP proxy logic |
| `examples/agents/` | *examples* – Planner & Coder FastAPI services |
| `examples/static/` | `demo.html` chat page |
---
### Auth core
`auth.verify_jwt()` accepts three token formats and routes them automatically:
1. Standard OIDC JWTs
2. `did:key` tokens
3. `did:pkh` tokens
Example DID-JWT request:
```bash
curl -X POST http://localhost:8080/v1/resource \
  -H "Authorization: Bearer did:key:z6Mki...<sig>.<payload>.<sig>"
```
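The routing itself amounts to looking at the token's shape before picking a verifier. A rough illustration of that dispatch (not the actual `auth.verify_jwt()` implementation):

```python
# Rough illustration of the three-way routing described above.
def route_token(token: str) -> str:
    """Pick a verifier family based on the bearer token's shape."""
    if token.startswith("did:key:"):
        return "did-key"   # self-contained key-based DID
    if token.startswith("did:pkh:"):
        return "did-pkh"   # blockchain-account DID
    return "oidc"          # anything else: validate as a standard OIDC JWT

assert route_token("did:key:z6Mki...") == "did-key"
assert route_token("eyJhbGciOiJSUzI1NiJ9...") == "oidc"
```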
## 💾 Memory: logs
Send Sakana-formatted logs to the gateway and they will be stored as
`MemoryEvent` objects in Weaviate.
```bash
curl -X POST http://localhost:8080/v1/logs \
  -H "Authorization: Bearer $JWT" \
  -d '{"run_id":"abc","level":"info","message":"hi"}'
# => HTTP/1.1 202 Accepted
```
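The same write from Python, for agents that already use `httpx`; the payload shape matches the curl example above, and `json=` sets the JSON content type:

```python
# Sketch: mirror a Sakana-formatted log line through the gateway.
import os

import httpx

resp = httpx.post(
    "http://localhost:8080/v1/logs",
    headers={"Authorization": f"Bearer {os.environ['JWT']}"},
    json={"run_id": "abc", "level": "info", "message": "hi"},
)
print(resp.status_code)  # 202 means the event was accepted for storage
```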
## Token quotas
Attach Gateway can enforce per-user token limits. Install the optional
dependency with `pip install attach-gateway[quota]` and set
`MAX_TOKENS_PER_MIN` in your environment to enable the middleware. The
counter defaults to the `cl100k_base` encoding; override with
`QUOTA_ENCODING` if your model uses a different tokenizer. The default
in-memory store works in a single process and is not shared between
workers—requests retried across processes may be double-counted. Use Redis
for production deployments.
### Enable token quotas
```bash
# Optional: Enable token quotas
export MAX_TOKENS_PER_MIN=60000
pip install tiktoken # or pip install attach-gateway[quota]
```
To customize the tokenizer:
```bash
export QUOTA_ENCODING=cl100k_base # default
```
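To see roughly what a prompt costs against `MAX_TOKENS_PER_MIN`, you can reproduce the count with `tiktoken` directly. This is a sketch only; the middleware's own accounting may differ in detail:

```python
# Sketch: count tokens the way the quota middleware's default encoding would.
import os

import tiktoken

enc = tiktoken.get_encoding(os.getenv("QUOTA_ENCODING", "cl100k_base"))
prompt = "Write Python to sort a list."
print(f"{len(enc.encode(prompt))} tokens counted against MAX_TOKENS_PER_MIN")
```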
## Roadmap
* **v0.2** — Protected‑resource metadata endpoint (OAuth 2.1), enhanced DID resolvers.
* **v0.3** — Token‑exchange (RFC 8693) for on‑behalf‑of delegation.
* **v0.4** — Attach Store v1 (Git‑style, policy guards).
---
## License
MIT
Raw data
{
"_id": null,
"home_page": null,
"name": "attach-dev",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "agents, ai, auth, fastapi, llm, memory, sso",
"author": null,
"author_email": "Hammad Tariq <hammad@attach.dev>",
"download_url": "https://files.pythonhosted.org/packages/2c/5f/df3c2ded51ebbaa1e48fa1cc3304dfc57d698cada761f05c17700edda9b7/attach_dev-0.2.2.tar.gz",
"platform": null,
"description": "# Attach Gateway\n\n> **Identity & Memory side\u2011car** for every LLM engine and multi\u2011agent framework. Add OIDC / DID SSO, A2A hand\u2011off, and a pluggable memory bus (Weaviate today) \u2013 all with one process.\n\n[](https://pypi.org/project/attach-dev/)\n\n---\n\n## Why it exists\n\nLLM engines such as **Ollama** or **vLLM** ship with **zero auth**. Agent\u2011to\u2011agent protocols (Google **A2A**, MCP, OpenHands) assume a *Bearer token* is already present but don't tell you how to issue or validate it. Teams end up wiring ad\u2011hoc reverse proxies, leaking ports, and copy\u2011pasting JWT code.\n\n**Attach Gateway** is that missing resource\u2011server:\n\n* \u2705 Verifies **OIDC / JWT** or **DID\u2011JWT**\n* \u2705 Stamps `X\u2011Attach\u2011User` + `X\u2011Attach\u2011Session` headers so every downstream agent/tool sees the same identity\n* \u2705 Implements `/a2a/tasks/send` + `/tasks/status` for Google A2A & OpenHands hand\u2011off\n* \u2705 Mirrors prompts & responses to a memory backend (Weaviate Docker container by default)\n* \u2705 Workflow traces (Temporal)\n\nRun it next to any model server and get secure, shareable context in under 1 minute.\n\n---\n\n## 60\u2011second Quick\u2011start (local laptop)\n\n### Option 1: Install from PyPI (Recommended)\n\n```bash\n# 0) prerequisites: Python 3.12, Ollama installed, Auth0 account or DID token\n\n# Install the package\npip install attach-dev\n\n# 1) start memory in Docker (background tab)\n# Mac M1/M2 users: use manual Docker command (see examples/README.md)\ndocker run --rm -d -p 6666:8080 \\\n -e AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED=true \\\n semitechnologies/weaviate:1.30.5\n\n# 2) export your short\u2011lived token\nexport JWT=\"<paste Auth0 or DID token>\"\nexport OIDC_ISSUER=https://YOUR_DOMAIN.auth0.com\nexport OIDC_AUD=ollama-local\nexport MEM_BACKEND=weaviate\nexport WEAVIATE_URL=http://127.0.0.1:6666\n\n# 3) run gateway\nattach-gateway --port 8080 &\n\n# 4) make a protected Ollama call via the gateway\ncurl -H \"Authorization: Bearer $JWT\" \\\n -d '{\"model\":\"tinyllama\",\"prompt\":\"hello\"}' \\\n http://localhost:8080/api/chat | jq .\n```\n\n### Option 2: Install from Source\n\n```bash\n# 0) prerequisites: Python 3.12, Ollama installed, Auth0 account or DID token\n\ngit clone https://github.com/attach-dev/attach-gateway.git && cd attach-gateway\npython -m venv .venv && source .venv/bin/activate\npip install -r requirements.txt\n\n\n# 1) start memory in Docker (background tab)\n\npython script/start_weaviate.py &\n\n# 2) export your short\u2011lived token\nexport JWT=\"<paste Auth0 or DID token>\"\nexport OIDC_ISSUER=https://YOUR_DOMAIN.auth0.com\nexport OIDC_AUD=ollama-local\nexport MEM_BACKEND=weaviate\nexport WEAVIATE_URL=http://127.0.0.1:6666\n\n# 3) run gateway\nuvicorn main:app --port 8080 &\n\n# The gateway exposes your Auth0 credentials for the demo UI at\n# `/auth/config`. 
The values are read from `AUTH0_DOMAIN`,\n# `AUTH0_CLIENT` and `OIDC_AUD`.\n\n# 4) make a protected Ollama call via the gateway\ncurl -H \"Authorization: Bearer $JWT\" \\\n -d '{\"model\":\"tinyllama\",\"messages\":[{\"role\":\"user\",\"content\":\"hello\"}]}' \\\n http://localhost:8080/api/chat | jq .\n```\n\nIn another terminal, try the Temporal demo:\n\n```bash\npip install temporalio # optional workflow engine\npython examples/temporal_adapter/worker.py &\npython examples/temporal_adapter/client.py\n```\n\nYou should see a JSON response plus `X\u2011ATTACH\u2011Session\u2011Id` header \u2013 proof the pipeline works.\n\n---\n\n## Use in your project\n\n1. Copy `.env.example` \u2192 `.env` and fill in OIDC + backend URLs \n2. `pip install attach-dev python-dotenv` \n3. `attach-gateway` (reads .env automatically)\n\n\u2192 See **[docs/configuration.md](docs/configuration.md)** for framework integration and **[examples/](examples/)** for code samples.\n\n---\n\n## Architecture (planner \u2192 coder hand\u2011off)\n\n```mermaid\nflowchart TD\n %%\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n %% COMPONENTS \n %%\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n subgraph Front-end\n UI[\"Browser<br/> demo.html\"]\n end\n\n subgraph Gateway\n GW[\"Attach Gateway<br/> (OIDC SSO + A2A)\"]\n end\n\n subgraph Agents\n PL[\"Planner Agent<br/>FastAPI :8100\"]\n CD[\"Coder Agent<br/>FastAPI :8101\"]\n end\n\n subgraph Memory\n WV[\"Weaviate (Docker)\\nclass MemoryEvent\"]\n end\n\n subgraph Engine\n OL[\"Ollama / vLLM<br/>:11434\"]\n end\n\n %%\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n %% USER FLOW\n %%\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n UI -- \u2460 POST /a2a/tasks/send<br/>Bearer JWT, prompt --> GW\n\n %%\u2500 Planner hop\n GW -- \u2461 Proxy \u2192 planner<br/>(X-Attach-User, Session) --> PL\n PL -- \u2462 Write plan doc --> WV\n PL -- \u2463 /a2a/tasks/send\\nbody:{mem_id} --> GW\n\n %%\u2500 Coder hop\n GW -- \u2464 Proxy \u2192 coder --> CD\n CD -- \u2465 GET plan by mem_id --> WV\n CD -- \u2466 POST /api/chat\\nprompt(plan) --> GW\n GW -- \u2467 Proxy \u2192 Ollama --> OL\n OL -- \u2468 JSON response --> GW\n GW -- \u2469 Write response to Weaviate --> WV\n GW -- \u246a /a2a/tasks/status = done --> UI\n```\n\n**Key headers**\n\n| Header | Meaning |\n|--------|---------|\n| `Authorization: Bearer <JWT>` | OIDC or DID token proved by gateway |\n| `X\u2011Attach\u2011User` | stable user ID (`auth0|123` or `did:pkh:\u2026`) |\n| `X\u2011Attach\u2011Session` | deterministic hash (user + UA) for request trace |\n\n---\n\n## Live two\u2011agent demo\n\n```bash\n\n# pane 1 \u2013 memory (Docker)\npython script/start_weaviate.py\n\n# pane 2 \u2013 gateway\nuvicorn main:app --port 8080\n\n# pane 3 \u2013 planner agent\nuvicorn examples.agents.planner:app --port 8100\n\n# pane 4 \u2013 coder agent\nuvicorn examples.agents.coder:app --port 8101\n\n# pane 5 \u2013 static chat UI\ncd examples/static && python -m 
http.server 9000\nopen http://localhost:9000/demo.html\n```\nType a request like *\"Write Python to sort a list.\"* The browser shows:\n1. Planner message \u2192 logged in gateway, plan row appears in memory.\n2. Coder reply \u2192 code response, second memory row, status `done`.\n\n---\n\n## Directory map\n\n| Path | Purpose |\n|------|---------|\n| `auth/` | OIDC & DID\u2011JWT verifiers |\n| `middleware/` | JWT middleware, session header, mirror trigger |\n| `a2a/` | `/tasks/send` & `/tasks/status` routes |\n| `mem/` | pluggable memory writers (`weaviate.py`, `sakana.py`) |\n| `proxy/` | Engine-agnostic HTTP proxy logic |\n| `examples/agents/` | *examples* \u2013 Planner & Coder FastAPI services |\n| `examples/static/` | `demo.html` chat page |\n\n---\n\n### Auth core\n\n`auth.verify_jwt()` accepts three token formats and routes them automatically:\n\n1. Standard OIDC JWTs\n2. `did:key` tokens\n3. `did:pkh` tokens\n\nExample DID-JWT request:\n```bash\ncurl -X POST /v1/resource \\\n -H \"Authorization: Bearer did:key:z6Mki...<sig>.<payload>.<sig>\"\n```\n\n## \ud83d\udcbe Memory: logs\n\nSend Sakana-formatted logs to the gateway and they will be stored as\n`MemoryEvent` objects in Weaviate.\n\n```bash\ncurl -X POST /v1/logs \\\n -H \"Authorization: Bearer $JWT\" \\\n -d '{\"run_id\":\"abc\",\"level\":\"info\",\"message\":\"hi\"}'\n# => HTTP/1.1 202 Accepted\n```\n\n## Token quotas\n\nAttach Gateway can enforce per-user token limits. Install the optional\ndependency with `pip install attach-gateway[quota]` and set\n`MAX_TOKENS_PER_MIN` in your environment to enable the middleware. The\ncounter defaults to the `cl100k_base` encoding; override with\n`QUOTA_ENCODING` if your model uses a different tokenizer. The default\nin-memory store works in a single process and is not shared between\nworkers\u2014requests retried across processes may be double-counted. Use Redis\nfor production deployments.\n\n### Enable token quotas\n\n```bash\n# Optional: Enable token quotas\nexport MAX_TOKENS_PER_MIN=60000\npip install tiktoken # or pip install attach-gateway[quota]\n```\n\nTo customize the tokenizer:\n```bash\nexport QUOTA_ENCODING=cl100k_base # default\n```\n\n## Roadmap\n\n* **v0.2** \u2014 Protected\u2011resource metadata endpoint (OAuth 2.1), enhanced DID resolvers. \n* **v0.3** \u2014 Token\u2011exchange (RFC 8693) for on\u2011behalf\u2011of delegation. \n* **v0.4** \u2014 Attach Store v1 (Git\u2011style, policy guards).\n\n---\n\n## License\n\nMIT\n",
"bugtrack_url": null,
"license": "MIT License\n \n Copyright (c) 2024 Hammad Tariq\n \n Permission is hereby granted, free of charge, to any person obtaining a copy\n of this software and associated documentation files (the \"Software\"), to deal\n in the Software without restriction, including without limitation the rights\n to use, copy, modify, merge, publish, distribute, sublicense, and/or sell\n copies of the Software, and to permit persons to whom the Software is\n furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all\n copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE\n AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\n OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\n SOFTWARE.",
"summary": "Identity & Memory side-car for every LLM engine and multi-agent framework",
"version": "0.2.2",
"project_urls": {
"Homepage": "https://github.com/attach-dev/attach-gateway",
"Repository": "https://github.com/attach-dev/attach-gateway"
},
"split_keywords": [
"agents",
" ai",
" auth",
" fastapi",
" llm",
" memory",
" sso"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "b28debf2ac9e39463bcb60b94d3fd26841c86386914d4adec6fe3873edbab9eb",
"md5": "a83733ed7979156f2cb5e49f07d55705",
"sha256": "68ef629a99b580006a83f3470e43b98de48d6fa3b52abd37c1258c5d2dead61f"
},
"downloads": -1,
"filename": "attach_dev-0.2.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "a83733ed7979156f2cb5e49f07d55705",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 21547,
"upload_time": "2025-07-14T22:31:40",
"upload_time_iso_8601": "2025-07-14T22:31:40.374233Z",
"url": "https://files.pythonhosted.org/packages/b2/8d/ebf2ac9e39463bcb60b94d3fd26841c86386914d4adec6fe3873edbab9eb/attach_dev-0.2.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "2c5fdf3c2ded51ebbaa1e48fa1cc3304dfc57d698cada761f05c17700edda9b7",
"md5": "3cb157dc29e1998db3f6d2317f671818",
"sha256": "320ec0d0902c7560ceff1f36147df3de049ff46fd1d730983a448facfc6f18f6"
},
"downloads": -1,
"filename": "attach_dev-0.2.2.tar.gz",
"has_sig": false,
"md5_digest": "3cb157dc29e1998db3f6d2317f671818",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 60183,
"upload_time": "2025-07-14T22:31:43",
"upload_time_iso_8601": "2025-07-14T22:31:43.744264Z",
"url": "https://files.pythonhosted.org/packages/2c/5f/df3c2ded51ebbaa1e48fa1cc3304dfc57d698cada761f05c17700edda9b7/attach_dev-0.2.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-14 22:31:43",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "attach-dev",
"github_project": "attach-gateway",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [
{
"name": "fastapi",
"specs": [
[
">=",
"0.109.0"
]
]
},
{
"name": "uvicorn",
"specs": [
[
">=",
"0.27.0"
]
]
},
{
"name": "httpx",
"specs": [
[
">=",
"0.27.0"
]
]
},
{
"name": "python-jose",
"specs": [
[
">=",
"3.3.0"
]
]
},
{
"name": "tiktoken",
"specs": [
[
">=",
"0.5.0"
]
]
},
{
"name": "pytest",
"specs": [
[
">=",
"0.0"
]
]
},
{
"name": "pytest-asyncio",
"specs": [
[
">=",
"0.23.0"
]
]
},
{
"name": "weaviate-client",
"specs": [
[
">=",
"3.26.7"
],
[
"<",
"4.0.0"
]
]
},
{
"name": "black",
"specs": [
[
">=",
"24.1.0"
]
]
},
{
"name": "isort",
"specs": [
[
">=",
"5.13.0"
]
]
},
{
"name": "temporalio",
"specs": [
[
">=",
"1.5.0"
]
]
},
{
"name": "langchain",
"specs": [
[
">=",
"0.1.0"
]
]
},
{
"name": "langchain-core",
"specs": [
[
">=",
"0.1.0"
]
]
},
{
"name": "langgraph",
"specs": [
[
">=",
"0.0.10"
]
]
}
],
"lcname": "attach-dev"
}