django-openai-assistant


Name: django-openai-assistant
Version: 0.7.8 (PyPI)
Summary: Django OpenAI Assistant
Homepage: https://github.com/jlvanhulst/django-openai
Maintainer email: Jean-Luc Vanhulst <jl@valor.vc>
Upload time: 2025-02-08 18:11:09
Keywords: assistants, celery, django, openai
Requirements: openai >= 1.61.0
# A scalable Django / Celery OpenAI Assistant runner

Assumption: you have Django up and running with Celery (tested with Redis as the broker).
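If Celery is not wired up yet, a minimal configuration with Redis as the broker could look like the sketch below. The project name `yourproject`, the broker URL, and the worker command are assumptions; adapt them to your setup.

```py
# yourproject/celery.py -- minimal Celery app, assuming Redis runs on localhost
import os

from celery import Celery

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "yourproject.settings")

app = Celery("yourproject")
app.config_from_object("django.conf:settings", namespace="CELERY")
app.autodiscover_tasks()

# settings.py:
#   CELERY_BROKER_URL = "redis://localhost:6379/0"
# start a worker with:
#   celery -A yourproject worker -l info
```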

1. In your terminal:
```bash
pip install django_openai_assistant
```

2. In `settings.py`:
 - Add `'django_openai_assistant'` to your `INSTALLED_APPS` list:
```py
INSTALLED_APPS = [
    # other apps
    'django_openai_assistant',
]
```
 - Make sure `OPENAI_API_KEY` is defined in settings with your OpenAI API key:
```py
OPENAI_API_KEY = "<your-key>"
```
 - Create and apply migrations for django_openai_assistant:
```bash
python manage.py makemigrations django_openai_assistant
python manage.py migrate django_openai_assistant
```
3. Create a simple Assistant at https://platform.openai.com/assistants. To begin, you probably want one with no functions.

4. Say you named the assistant from step 3 'Test Assistant'; then use it in your code:

demo.py
```py
from celery import shared_task

from django_openai_assistant.assistant import assistantTask

# Prerequisites (see steps 1-3 above):
# - OPENAI_API_KEY defined in settings.py
# - 'django_openai_assistant' added to INSTALLED_APPS
# - migrations created and applied
# - at least one Assistant created in https://platform.openai.com/assistants


def testAssistant(request=None):
    # Replace <<your assistant name>> with the name of the Assistant you created in the
    # OpenAI platform, and <<your appname>> with the Django app this file lives in.
    # completionCall uses the format '<app>.<module>:<function>'.
    task = assistantTask(
        assistantName="<<your assistant name>>",
        tools=[],
        completionCall="<<your appname>>.demo:afterRunFunction",
    )
    task.prompt = "Who was Napoleon Bonaparte?"
    task.createRun()  # this gets everything going!


@shared_task(name="afterRunFunction")  # called once the run is complete
def afterRunFunction(taskID):
    # MUST be decorated with @shared_task! Start by retrieving the task.
    task = assistantTask(run_id=taskID)
    if task.status == 'completed':  # make sure the run completed, not failed or expired
        print(task.markdownresponse())
    else:
        print('run failed')
```
See https://medium.com/@jlvalorvc/building-a-scalable-openai-assistant-processor-in-django-with-celery-a61a1af722e0
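While experimenting, one way to trigger `testAssistant` from the browser is a plain Django view. This is a hypothetical sketch; the URL path and the `start_run` view are not part of the package.

```py
# <<your appname>>/urls.py -- hypothetical wiring to start a run from the browser
from django.http import HttpResponse
from django.urls import path

from .demo import testAssistant


def start_run(request):
    testAssistant(request)  # creates the run; the answer arrives later via afterRunFunction
    return HttpResponse("Run started -- check the Celery worker log for the response.")


urlpatterns = [
    path("assistant-test/", start_run, name="assistant-test"),
]
```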


## Version history:
0.7.8
- Fixed uploadFile regression: the parameter is fileContent, not file_content.
- Requires openai >= 1.61.0.

0.7.7
- More fixes for the double-entry issue.

0.7.6
- fixes

0.7.5
- fixes

0.7.4
- First version with some Devin 'help'. Working toward a testable, lintable version; stay tuned and follow along on GitHub.
- Temperature is now optional (instead of defaulting to 1 when not provided) because the new o1 / o3 models don't allow it.

0.7.3
- Removed the injection of comboid when using class parameters, because it fails if the class is set to strict.

0.7.2
- Bug fix for calling without tools (regression from 0.7.1).

0.7.1
- Better legacy support: when an individual call passes its own tools=[...] list and those tools are NOT in the set_default_tools() set, they are added automatically.
- Please note: the Celery WORKER also needs to call set_default_tools(); the best place is your Django AppConfig.ready() (a hedged sketch follows below).
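For example, worker-side registration could look like this minimal sketch. It assumes set_default_tools() lives in django_openai_assistant.assistant and accepts a list of '<app>.<module>:<function>' references like completionCall does; neither detail is confirmed by this README, so check the package source for the real signature.

```py
# <<your appname>>/apps.py -- hypothetical sketch: register default tools when the app
# (and therefore the Celery worker) starts.
from django.apps import AppConfig


class YourAppConfig(AppConfig):
    name = "<<your appname>>"

    def ready(self):
        # ASSUMPTION: the import path and argument format below are guesses based on
        # this README; verify them against the django_openai_assistant source.
        from django_openai_assistant.assistant import set_default_tools

        set_default_tools(["<<your appname>>.tools:create_event"])
```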
 
0.7.0
- Major update for tool calling.
Added set_default_tools() to set the default tools for all assistants.
This means you can now omit the tools parameter when creating an assistant call:
add a set_default_tools() call at the beginning of your code and it will be used for every agent call.
(Note: you still need to have the tools defined in the OpenAI platform for each assistant.)

Another big change: a Pydantic class is now automatically detected and supported. For this, the first parameter of your tool function should be called params. It is checked to be a subclass of BaseModel; if so, the OpenAI parameters are converted through Pydantic.

Example:


~~~
from datetime import datetime
from typing import Dict, List, Optional
from zoneinfo import ZoneInfo

from pydantic import BaseModel, Field

# create_calendar_event() is assumed to be your own helper that talks to the Google Calendar API.


class CreateEventParams(BaseModel):
    email: str = Field(..., description="Owner's email.")
    start: str = Field(..., description="Event start (ISO 8601).")
    end: str = Field(..., description="Event end (ISO 8601).")
    title: str = Field(..., description="Event title.")
    description: Optional[str] = Field('', description="Event description.")
    attendees: Optional[List[str]] = Field([], description="Attendee emails.")
    address: Optional[str] = Field(None, description="Event address.")
    add_google_meet_link: Optional[bool] = Field(False, description="Add Google Meet link.")
    calendar_id: Optional[str] = Field('primary', description="Calendar ID.")
    time_zone: Optional[str] = Field('America/New_York', description="Time zone identifier.")

# params is a Pydantic (sub)class 
def create_event(params: CreateEventParams) -> Dict:
    '''
    Create an event in the Google calendar 
    '''
    # Validate time_zone
    try:
        ZoneInfo(params.time_zone)
    except Exception:
        raise ValueError(f"Invalid time zone: {params.time_zone}")

    # Convert start/end to datetime
    try:
        start_datetime = datetime.fromisoformat(params.start)
    except ValueError:
        raise ValueError("Invalid ISO 8601 format for 'start'.")

    try:
        end_datetime = datetime.fromisoformat(params.end)
    except ValueError:
        raise ValueError("Invalid ISO 8601 format for 'end'.")

    # Call create_calendar_event function
    event = create_calendar_event(
        email=params.email,
        start=start_datetime,
        end=end_datetime,
        title=params.title,
        description=params.description,
        attendees=params.attendees,
        address=params.address,
        add_google_meet_link=params.add_google_meet_link,
        calendar_id=params.calendar_id,
        time_zone=params.time_zone
    )
    return event.model_dump()
~~~

This approach will also make it possible to automatically generate a schema for a function call (coming soon!).

0.6.0
- Since 1.33.0 the openai library properly supports the 'vision' file attachment; now supported here as well.
Based on file type, images are automatically marked as 'vision' and added to the thread as such (a hedged usage sketch follows below):
        image_extensions = ['jpg', 'jpeg', 'png', 'gif', 'bmp', 'tiff']
- Added getallmessages() and getfullresponse() to easily support threaded responses.
- Requires openai 1.33.0.
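A hedged sketch of attaching an image, assuming assistantTask exposes an uploadFile() method and that uploads happen before createRun(); only the fileContent parameter name is confirmed by this changelog (see 0.7.8), the rest is an assumption.

```py
from django_openai_assistant.assistant import assistantTask


def ask_about_image(image_path):
    task = assistantTask(
        assistantName="<<your assistant name>>",
        tools=[],
        completionCall="<<your appname>>.demo:afterRunFunction",
    )
    with open(image_path, "rb") as f:
        # Images (jpg, png, gif, ...) are detected by extension and attached to the
        # thread as 'vision' content. ASSUMPTION: the filename keyword is a guess;
        # only fileContent is named in this changelog.
        task.uploadFile(fileContent=f.read(), filename=image_path.rsplit("/", 1)[-1])
    task.prompt = "Describe what is in this image."
    task.createRun()
```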

0.5.4
- Added getallmessages(), which returns threads.messages.list(thread_id=self.thread_id).data.

0.5.3
- Added getfullresponse(), which compiles all Assistant responses into one message.
- Still waiting for the openai Python library to support vision.
0.5.2
- Updated the file upload mechanism to determine file types (retrieval yes/no, and preparing for vision support: image yes/no).
- When attachments are added to a thread they always have 'tools' enabled; retrieval is only enabled for the supported file types: https://platform.openai.com/docs/assistants/tools/file-search/supported-files

0.5.1
- Removed getopenaiclient(); use OpenAI() everywhere instead.
- Fixed getAssistant(), which could fail when retrieving an Assistant by name from an org with more than 20 Assistants.

0.5.0
- Added support for Assistants 2.0. For now, all files are added to a thread with support for both search and the code interpreter. There is no support yet for uploading files to a vector store attached to an Assistant.

0.4.3
- Added optional temperature parameter to createRun(); default is 1, like the OpenAI default.

0.4.2
- Added optional temperature parameter to createRun(); default is 1, like the OpenAI default.

0.4.1
- Another fix to ensure metadata always returns {}, never None.

0.4.0
- Redid readme.md.
- Standard version numbering.
- task.metadata is now initialized as {} to prevent task.metadata.get() from failing.

0.33
- Retrieve by thread_id or run_id: openaiTask(run_id='run_xxxx').
- asmarkdown(string) is available as a function outside the class.
- retrievefile() downloads a file by OpenAI file ID.
- Now processing multi-part responses with embedded images.

0.32
- small fixes.  

0.31
- Made sure that files uploaded to OpenAI receive a 'file name'.

0.30 
- Fix to properly differentiate between two functions whose names share a prefix, like 'company' and 'companyfind'.
- Throw an exception (instead of silently passing) when a tool function call fails while running in Debug mode.

            
