# agentgraph-bdemsky

- Version: 0.1
- Author: PLRG Team
- Requires Python: >=3.9
- Uploaded: 2024-04-29 21:09:32
- Keywords: LLM, task parallelism, nested task parallelism, large language models, query generation language

Summary: A library for task-based parallel programming in Python. AgentGraph primarily targets AI software that integrates LLM queries (it includes language support for generating LLM queries), but it can also parallelize tasks that call into native code that releases the GIL. It supports LLM query memoization for fast, cheap debug cycles.
## About

AgentGraph is a library for the development of AI applications.

AgentGraph:

- Supports parallelism with a sequential programming model, using a dynamic dataflow-based execution model.
- Supports nested parallelism.
- Supports memoization for debugging.
- Supports prompt generation using templates and a prompting language.

## Installation

To install, run:
```
pip install .
```

### Getting Started

AgentGraph uses the OpenAI API interface and requires an OpenAI API key to make LLM calls. By convention,
the key is provided as an environment variable:

```
export OPENAI_API_KEY=YOUR_KEY
```

To instead use local models through services like vLLM, a fake key can be provided:

```
export OPENAI_API_KEY="fake"
```

## Examples

Try out the programs under examples/.

First, you will need to change the model endpoint in the example to
the one you are using.  Find the line of code that looks like:

```
model = agentgraph.LLMModel("https://demskygroupgpt4.openai.azure.com/", os.getenv("OPENAI_API_KEY"), "GPT4-8k", "GPT-32K", 34000)
```

Replace the endpoint https://demskygroupgpt4.openai.azure.com with
your endpoint.  Replace the model names GPT4-8k and GPT-32K with your
model names.  If you are using an OpenAI endpoint, set the parameter
useOpenAI to True (and False for an Azure endpoint).

For example, run

```
python examples/chat/example.py
```

to start two LLM agents that collaborate to write a linked list in C.

## Documentation

First, import the AgentGraph package:

```
import agentgraph
```

### Key Concepts

AgentGraph uses a nested task-based parallelism model.  Tasks are
invoked by either the parent thread or another task.  The invoking
thread or task does not wait for the child task to execute.  A task
executes when all of its inputs are available.

AgentGraph programs execute in parallel, but have sequential
semantics.  The parallel execution is guaranteed to produce the same
results as the sequential execution in which tasks are executed
immediately upon invocation (modulo ordering of screen I/O and other
effects that are not tracked by AgentGraph).

Variable objects represent the return values of tasks.  They
effectively function as futures, but they can be passed into later
tasks, and those later tasks will wait until the task that produced
the Variable object finishes execution.
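
As a rough illustration of these semantics, the sketch below chains two Python tasks through a Variable, using the scheduler API documented in the sections that follow.  It is a sketch only: the `agentgraph.Var` constructor is an assumption (this README does not show how Variables are allocated), the task functions are illustrative, and passing None for unused parameters is assumed to be allowed.

```
import os
import agentgraph

# Model and scheduler creation are documented in the following sections;
# the endpoint and model names here mirror the vLLM example later in this README.
model = agentgraph.LLMModel("http://127.0.0.1:8000/v1/", os.getenv("OPENAI_API_KEY"),
                            "meta-llama/Llama-2-7b-chat-hf", "meta-llama/Llama-2-7b-chat-hf",
                            34000, useOpenAI=True)
scheduler = agentgraph.get_root_scheduler(model)

a = agentgraph.Var("a")   # assumed Variable constructor (not shown in this README)
b = agentgraph.Var("b")

def produce(scheduler):
    # Python tasks receive the scheduler as their first argument (see Nested Parallelism).
    return 40

def add_two(scheduler, x):
    return x + 2

scheduler.run_python_agent(produce, None, None, a, None)   # returns immediately
scheduler.run_python_agent(add_two, (a,), None, b, None)   # starts once a's value is available
print(b.get_value())                                       # stalls until add_two finishes -> 42
scheduler.shutdown()
```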

### Model

AgentGraph uses a model object to access LLMs.
To create a model object, use the following command:

```
model = agentgraph.LLMModel(endpoint, apikey, smallModel, largeModel, threshold, api_version, useOpenAI)
```

- endpoint provides the URL to access the model.  It is needed for
  locally served vLLM models or Azure models.

- apikey provides the API key to access the model.

Some models charge different amounts per token for different context
windows.  To reduce costs, AgentGraph supports dynamically switching
models based on the estimated context window needed.

- smallModel gives the name of the small context version of the model
- largeModel gives the name of the large context version of the model
- threshold gives the size in bytes at which to switch to the large context version of the model.

- api_version allows the user to specify the API version to use.

- useOpenAI instructs the LLMModel to use the OpenAI or vLLM version
  of the model.  Set it to False if you want to use an Azure-served
  model.
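
For example, an Azure deployment might be configured as in the sketch below.  The endpoint, deployment names, threshold, and api_version value are placeholders, and passing api_version and useOpenAI as keyword arguments is an assumption.

```
import os
import agentgraph

# Placeholder Azure configuration -- substitute the values for your own deployment.
model = agentgraph.LLMModel(
    "https://YOUR-RESOURCE.openai.azure.com/",   # your Azure endpoint
    os.getenv("OPENAI_API_KEY"),
    "gpt-4-8k-deployment",                       # small context deployment name
    "gpt-4-32k-deployment",                      # large context deployment name
    32000,                                       # switch to the large model above this size
    api_version="2023-12-01-preview",            # placeholder API version
    useOpenAI=False,                             # Azure-served model
)
```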

### Query Generation

AgentGraph supports combining multiple messages into a chat
history. Messages can come from a string, a prompt, or a variable
reference.  Messages can be concatenated using the + operator.

A system message and an initial prompt message can be combined to
create a sequence of messages using the ** operator, e.g., systemmsg
** promptmsg.  A message can be appended to a sequence of messages
using the & operator, e.g., sequence & newmsg.  The appended message
is assigned the opposite role (user vs. assistant) of the previous
message in the sequence.

A sequence of messages can also come from a conversation object (or
variable reference to a conversation object).

For conversations, there is a special initializer syntax, seq >
seqincrement | seqbase, which evaluates to seqincrement if seq is not
empty and to seqbase if seq is empty.

Python slice operators ([start:end]) can be applied to sequences to
remove parts of the sequence.
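
Putting these operators together, a query might be built as in the sketch below.  The prompt file names are placeholders, and it is an assumption (not confirmed by this README) that plain strings can appear directly on the right-hand side of these operators.

```
import agentgraph

# Hypothetical message composition; see "Prompts" below for loading prompt objects.
prompts = agentgraph.Prompts("./prompts/")
systemmsg = prompts.load_prompt("system", {})      # placeholder prompt file
promptmsg = prompts.load_prompt("task", {})        # placeholder prompt file
conversation = agentgraph.Conversation()

msg = systemmsg ** promptmsg          # system message followed by the first user message
msg = msg & "Please also add tests."  # appended message takes the opposite role of the previous one
msg = msg[1:]                         # slices can trim parts of the sequence

# Conversation initializer: continue an existing conversation, or fall back to msg if it is empty.
query = conversation > conversation & "Continue." | msg
```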


#### Prompts

To create a prompts object that loads prompt files from the specified directory:

```
prompts = agentgraph.Prompts(directory)
```


To load a prompt from the specified filename (the file should use
Jinja templating syntax):

```
prompts.load_prompt(filename, dictionary)
```

- dictionary - a map of names to either a variable or an object that
  should be used for generating the prompt.  Variables will be
  resolved when the prompt is generated.
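
For example, a prompt could be loaded as in the sketch below; the template file, its contents, and the dictionary keys are illustrative.

```
# Assuming ./prompts/codetask contains the Jinja template:
#   Write a {{ language }} function that {{ task }}.
prompts = agentgraph.Prompts("./prompts/")
codemsg = prompts.load_prompt("codetask", {"language": "C", "task": "reverses a linked list"})
# Dictionary values may also be AgentGraph Variables; they are resolved when the prompt is generated.
```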

#### Conversation Objects

Conversation objects store a conversation.

To create a Conversation (a mutable object):

```
conversation = agentgraph.Conversation()
```

### Query Memoization

AgentGraph supports query memoization to allow rapid debug cycles,
reproducibility of problematic executions, and regression test suites.
This is controlled by the configuration option
agentgraph.config.DEBUG_PATH.  If this is set to None, memoization is
turned off.  Otherwise, memoized query results are stored in the
specified path.
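
For example (the directory name is illustrative):

```
import agentgraph

# Store memoized query results under ./agentgraph-memo/ (illustrative path).
agentgraph.config.DEBUG_PATH = "./agentgraph-memo/"

# Turn memoization off.
agentgraph.config.DEBUG_PATH = None
```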

### Tools

A Tool object represents a tool the LLM can call. It has two components:
a tool JSON object to be sent to the LLM (see the API guide
[here](https://platform.openai.com/docs/api-reference/chat/create)),
and a handler that is executed if the tool is called.

There are two ways of creating a tool.

(1)
To create a tool from a prompt:

```
agentgraph.ToolPrompt(prompt, handler=function)
```

This loads the JSON object from the prompt and optionally attaches a
Python function as its handler.

Users must make sure that Tools loaded this way adhere to the format specified in the API guide.

(2)
To create a tool from a Python function:

```
agentgraph.ToolReflect(function)
```

Creates a tool from a Python function. The function and argument descriptions
are extracted from the function docstring. The docstring format should be:

```
FUNC_DESCRIPTION
Arguments:
ARG1 --- ARG1_DESCRIPTION
ARG2 --- ARG2_DESCRIPTION
...
```

Only arguments with descriptions are included in the JSON object
visible to the LLM.
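
For example, a function could be exposed to the LLM as in the sketch below; the function itself is illustrative, and only ToolReflect and the docstring format come from this README.

```
import agentgraph

def get_weather(city, unit):
    """Look up the current weather for a city.
    Arguments:
    city --- name of the city to look up
    unit --- temperature unit, either "celsius" or "fahrenheit"
    """
    # Placeholder implementation; a real tool would call a weather service.
    return f"The weather in {city} is 20 degrees {unit}."

weather_tool = agentgraph.ToolReflect(get_weather)
```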

A ToolList is a wrapper for a list of Tools. An LLM agent takes a ToolList
as argument instead of a single Tool. A ToolList can be created using
wrapper methods that correspond to the ways of creating Tools mentioned above.

(1)

```
tools = agentgraph.tools_from_functions([func1, func2])
```

Creates a ToolList from a list of Python functions.

(2)

```
tools = agentgraph.tools_from_prompts(toolLoader, {filename1: handler1, filename2: handler2})
```

Creates a ToolList from a tool loader and a dictionary mapping the files 
containing the tool json objects to their handlers. Note that the handlers
can be None.

### Top-Level Scheduler Creation

To create a scheduler to run tasks.  The argument specifies the default
model to use.

```
scheduler = agentgraph.get_root_scheduler(model)
```

### Running Python Tasks

To run a Python task, we use:

```
scheduler.run_python_agent(function, pos, kw, out, vmap)
```

- function - function to run for task
- pos - positional arguments to task
- kw - keyword based arguments to task
- out - AgentGraph variable objects to store output of task
- vmap - VarMap object providing a set of variable object assignments to be performed before the task is started.
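
A minimal sketch of a Python task follows.  It assumes that positional arguments are passed as a tuple, keyword arguments as a dict, unused parameters may be None, and that an output Variable can be allocated with `agentgraph.Var` -- the README does not show the Variable constructor, so that call is an assumption.

```
# 'scheduler' is the root scheduler created via agentgraph.get_root_scheduler (see above).
result = agentgraph.Var("result")   # assumed Variable constructor

def shout(scheduler, text, punctuation="!"):
    # Python tasks receive the scheduler as their first argument (see Nested Parallelism below).
    return text.upper() + punctuation

scheduler.run_python_agent(shout, ("hello",), {"punctuation": "!!"}, result, None)
print(result.get_value())   # prints "HELLO!!"
```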

### Nested Parallelism 

Functions running as Python tasks need to take a scheduler object as their first argument. This scheduler can be used to create child tasks within the function:
```
def parent_func(scheduler, ...):
	scheduler.run_python_agent(child_func, pos, kw, out, vmap)

parent_scheduler.run_python_agent(parent_func, pos, kw, out, vmap)
```

The parent task will wait for all of its child tasks to finish before finishing itself. The nesting can be arbitrarily deep, constrained only by the stack size.

### Running LLM Tasks

To run an LLM task, we use:

```
scheduler.run_llm_agent(outVar, msg, conversation, model, callVar, tools, formatFunc, pos, kw, vmap)
```

- outVar - variable to hold the output string from the LLM model
- msg - MsgSeq AST to be used to construct the query
- conversation - conversation object that can be used to store the full conversation performed
- callVar - variable to hold the list of calls made by the LLM, if there are any. If a call has unparseable arguments or an unknown function name, it will have an exception object under the key "exception". If a call has a handler, it will have the handler's return value under the key "return".
- tools - list of Tool objects used to generate the tools parameter.
- formatFunc - python function that can alternatively be used to construct a query
- pos - positional arguments for formatFunc
- kw - keyword arguments for formatFunc
- model - model to use (overriding default model)
- vmap - VarMap object providing a set of variable object assignments to be performed before the task is started.
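
For example, a single LLM task might look like the sketch below.  It assumes that `agentgraph.Var` allocates a Variable and that the trailing parameters have defaults and can be omitted; neither is confirmed by this README.

```
# 'scheduler' is the root scheduler; systemmsg and promptmsg are message
# objects built as described under Query Generation.
answer = agentgraph.Var("answer")        # assumed Variable constructor
conversation = agentgraph.Conversation()

scheduler.run_llm_agent(answer, systemmsg ** promptmsg, conversation)
print(answer.get_value())                # stalls until the LLM task finishes
```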

### Shutting Down the Root Scheduler

The shutdown method shuts down the root scheduler and waits for the
execution of all invoked tasks to finish.

It is invoked by:

```
scheduler.shutdown()
```

### Sharing Mutable Objects Between Tasks

Immutable objects can be safely passed to and returned from tasks.
Mutable objects must explicitly inherit from a special Mutable class.
All accessor methods of Mutable objects must first call either
wait_for_access (for methods that perform read or write accesses) or
wait_for_read_access (for methods that only perform a read) from the
Mutable base class.

Mutable objects can potentially return references to other Mutable
objects.  If one Mutable object has a reference to another Mutable
object that it could potentially return, it must call the
set_owning_object method from the Mutable base class to report this
reference to AgentGraph.
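
A minimal sketch of a Mutable subclass follows.  Only the method names (wait_for_access, wait_for_read_access) come from this README; that the base class is exposed as `agentgraph.Mutable` and takes no constructor arguments is an assumption.

```
import agentgraph

class Counter(agentgraph.Mutable):   # assumed import path for the Mutable base class
    def __init__(self):
        super().__init__()
        self._count = 0

    def increment(self):
        # Read/write accessor: call wait_for_access before touching state.
        self.wait_for_access()
        self._count += 1

    def value(self):
        # Read-only accessor: wait_for_read_access suffices.
        self.wait_for_read_access()
        return self._count
```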


### Auxiliary Data Structures


To create a variable map, we use:

```
varmap = agentgraph.VarMap()
```


To get the value of a variable (stalling the parent task until the child task has finished):

```
var.get_value()
```

### Collections of Variables and Values

AgentGraph includes collection objects into which Variables can be
inserted.  These collection objects can be passed into tasks, and when
a task executes, the variables in the collection are replaced with the
corresponding values.

A VarSet is a set that can contain both values and variables.  You can
allocate a VarSet using the command:

```
varset = agentgraph.VarSet()
```

The add method of the VarSet class adds variables or values to it.

```
varset.add(variable)
```

AgentGraph also supports a VarDict, a dictionary in which the values
can be Variables or normal values.  To allocate a VarDict:

```
vardict = agentgraph.VarDict()
```

To add a key-value pair to a vardict:

```
vardict[key] = varvalue
```

Note that keys cannot be variables.
If a task takes a VarSet or VarDict as an argument, it will wait for
the latest write to all variables in the data structure before it executes.
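
For example, several task outputs can be collected in a VarSet and handed to a downstream task, as in the sketch below; the helper function, the previously produced Variables, and the argument conventions are illustrative.

```
# 'scheduler' is the root scheduler; summary_var and report_var are
# Variables assumed to have been allocated earlier.
results = agentgraph.VarSet()
results.add(summary_var)      # a Variable produced by an earlier task
results.add("a plain value")  # plain values may be added as well

def report(scheduler, items):
    # By the time this task runs, Variables in the set have been replaced by their values.
    return "\n".join(str(item) for item in items)

scheduler.run_python_agent(report, (results,), None, report_var, None)
```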

### Using vLLM

Start vLLM with the appropriate chat endpoint.  For example:
```
python -m vllm.entrypoints.openai.api_server --model meta-llama/Llama-2-7b-chat-hf
```

Set up agentgraph with the appropriate LLMModel object.  For example:
```
model = agentgraph.LLMModel("http://127.0.0.1:8000/v1/", os.getenv("OPENAI_API_KEY"), "meta-llama/Llama-2-7b-chat-hf", "meta-llama/Llama-2-7b-chat-hf", 34000, useOpenAI=True)
```

            
