# logyca-ai

- Name: logyca-ai
- Version: 0.2.4
- Summary: An integration package created by the company LOGYCA to interact with ChatGPT and analyze documents, files and other functionality of the OpenAI library.
- Home page: https://github.com/logyca/python-libraries/tree/main/logyca-ai
- Author: Jaime Andres Cardona Carrillo
- License: MIT License
- Requires Python: >=3.8
- Upload time: 2024-11-01 23:42:03
- Keywords: artificial-intelligence, machine-learning, deep-learning, chatgpt, nlp, language-models, openai, transformers, neural-networks, ai-tools, mlops, data-science, python, data-analysis, automation

---
<p align="center">
  <a href="https://logyca.com/"><img src="https://logyca.com/sites/default/files/logyca.png" alt="Logyca"></a>
</p>
<p align="center">
    <em>LOGYCA public libraries</em>
</p>

<p align="center">
<a href="https://pypi.org/project/logyca-ai" target="_blank">
    <img src="https://img.shields.io/pypi/v/logyca-ai?color=orange&label=PyPI%20Package" alt="Package version">
</a>
<a href="(https://www.python.org" target="_blank">
    <img src="https://img.shields.io/badge/Python-%5B%3E%3D3.8%2C%3C%3D3.11%5D-orange" alt="Python">
</a>
</p>


---

# About us

* <a href="http://logyca.com" target="_blank">LOGYCA Company</a>
* <a href="https://www.youtube.com/channel/UCzcJtxfScoAtwFbxaLNnEtA" target="_blank">LOGYCA Youtube Channel</a>
* <a href="https://www.linkedin.com/company/logyca" target="_blank"><img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white" alt="Linkedin"></a>
* <a href="https://twitter.com/LOGYCA_Org" target="_blank"><img src="https://img.shields.io/badge/Twitter-1DA1F2?style=for-the-badge&logo=twitter&logoColor=white" alt="Twitter"></a>
* <a href="https://www.facebook.com/OrganizacionLOGYCA/" target="_blank"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" alt="Facebook"></a>

---

# LOGYCA public libraries: Interact with ChatGPT, analyze documents and files, and use other functionality of the OpenAI library.

[Source code](https://github.com/logyca/python-libraries/tree/main/logyca-ai)
| [Package (PyPI)](https://pypi.org/project/logyca-ai/)
| [Samples](https://github.com/logyca/python-libraries/tree/main/logyca-ai/samples)


## To interact with the examples, keep the following in mind

FastAPI example. Through Swagger, you can:
- https://github.com/logyca/python-libraries/tree/main/logyca-ai/samples/fastapi_async
- Use the example endpoints to obtain the input schemas for the POST method and interact with the available parameters.
- The endpoints use the asynchronous functionality of the OpenAI SDK (see the sketch after these notes).
- The model currently used is GPT-4o; no other models have been tested so far.
- The formats currently supported for receiving files and extracting their text are: txt, csv, pdf, images, and Microsoft Office files (docx, xlsx).

Script example. Through code, you can:
- https://github.com/logyca/python-libraries/tree/main/logyca-ai/samples/script_app_sync
- Run the same examples provided for the FastAPI app.
- The examples use the synchronous functionality of the OpenAI SDK.
- The model used for testing is GPT-4o.
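
The only real difference between the two sample apps is which OpenAI SDK client they drive. A minimal sketch of that sync/async split, using the plain `openai` package directly (these are standard OpenAI SDK calls, not the logyca-ai API; the model name and prompt are placeholders):

```python
# Minimal sketch: the synchronous vs. asynchronous OpenAI SDK clients used by the samples.
# This illustrates the sync/async split only; it does not call the logyca-ai API.
import asyncio

from openai import AsyncOpenAI, OpenAI


def ask_sync(prompt: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


async def ask_async(prompt: str) -> str:
    client = AsyncOpenAI()
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(ask_sync("ping"))
    print(asyncio.run(ask_async("ping")))
```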

## Environment variables documentation for example: fastapi_async

The examples are built against the Microsoft Azure OpenAI service; the variables to configure are the following:

.env.sample
```console
# Environment variables documentation:

# API_KEY:
# The general API key used for authentication with services. This key is typically used for accessing cloud-based or other API-driven platforms. Replace '***' with the actual key.

# AZURE_OPENAI_DEPLOYMENT:
# The name or identifier of the OpenAI deployment within Azure. This defines the specific model version and configuration you are using in Azure OpenAI Service. Set this to the name of the deployed model, such as 'chatgpt3.5-turbo-1106'.

# AZURE_OPENAI_ENDPOINT:
# The base URL of the Azure OpenAI Service endpoint. This is the URL where API requests are sent, typically formatted like 'https://<your-endpoint>.openai.azure.com/'.

# AZURE_OPENAI_MODEL_NAME:
# The name of the specific OpenAI model being used in Azure, for example, 'gpt-35-turbo'. This identifies which model variant will be used for processing requests.

# AZURE_OPENAI_MODEL_VERSION:
# The version of the OpenAI model deployed in Azure. This typically reflects updates or optimizations to the model, such as '1106' to indicate a version from November 6th.

# OPENAI_API_KEY:
# The API key provided by OpenAI directly (not through Azure). This is used to authenticate and access OpenAI services outside of Azure.

# OPENAI_API_VERSION:
# The version of the OpenAI API being used. This specifies the version of the API and its capabilities, for example, '2023-03-15-preview'. It dictates the available features and request format.

API_KEY=***
AZURE_OPENAI_DEPLOYMENT=***
AZURE_OPENAI_ENDPOINT=***
AZURE_OPENAI_MODEL_NAME=***
AZURE_OPENAI_MODEL_VERSION=***
OPENAI_API_KEY=***
OPENAI_API_VERSION=***

# Example
# API_KEY=CUSTOM_ABC
# AZURE_OPENAI_DEPLOYMENT=chat4omni
# AZURE_OPENAI_ENDPOINT=azurenameforendpoint
# AZURE_OPENAI_MODEL_NAME=gpt-4o
# AZURE_OPENAI_MODEL_VERSION=2024-05-13
# OPENAI_API_KEY=AZURE_ABC
# OPENAI_API_VERSION=2024-07-01-preview
```
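
As a reference, a minimal sketch of wiring these variables into the OpenAI SDK's Azure client (the `AzureOpenAI` constructor is the standard `openai` API; loading the file with `python-dotenv` is an assumption, any environment loader works):

```python
# Minimal sketch: build an Azure OpenAI client from the .env variables documented above.
import os

from dotenv import load_dotenv  # assumption: python-dotenv is installed
from openai import AzureOpenAI

load_dotenv()  # read .env into the process environment

client = AzureOpenAI(
    api_key=os.environ["OPENAI_API_KEY"],           # Azure OpenAI key
    api_version=os.environ["OPENAI_API_VERSION"],   # e.g. 2024-07-01-preview
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

response = client.chat.completions.create(
    model=os.environ["AZURE_OPENAI_DEPLOYMENT"],    # deployment name, e.g. chat4omni
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```
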
---

# OCR engine to extract text from images.

- Tesseract is an optical character recognition engine for various operating systems.
  It is free software, released under the Apache License. Originally developed by Hewlett-Packard as proprietary software in the 1980s,
  it was released as open source in 2005, and its development has been sponsored by Google since 2006.

## Install

- (Source Code) https://tesseract-ocr.github.io/tessdoc/Downloads.html
- (Windows Binaries) https://github.com/UB-Mannheim/tesseract/wiki
- (Linux/Docker) apt-get -y install tesseract-ocr
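
With the Tesseract binary installed as above, a minimal sketch of driving it from Python through the `pytesseract` wrapper (an illustrative assumption; the package's internal OCR path may differ):

```python
# Minimal sketch: extract text from an image with Tesseract via pytesseract.
# Requires the tesseract binary installed as described above.
from PIL import Image
import pytesseract


def ocr_image(path: str, lang: str = "eng") -> str:
    """Return the text Tesseract recognizes in the image at `path`."""
    return pytesseract.image_to_string(Image.open(path), lang=lang)


if __name__ == "__main__":
    print(ocr_image("image.png"))
```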

# Example for simple conversation.

```json
{
  "system": "Voy a definirte tu personalidad, contexto y proposito.\nActua como un experto en venta de frutas.\nSe muy positivo.\nTrata a las personas de usted, nunca tutees sin importar como te escriban.",
  "messages": [
    {
      "additional_content": "",
      "type": "text",
      "user": "Dime 5 frutas amarillas"
    },
    {
      "assistant": "\n¡Claro! Aquí te van 5 frutas amarillas:\n\n1. Plátano\n2. Piña\n3. Mango\n4. Melón\n5. Papaya\n"
    },
    {
      "additional_content": "",
      "type": "text",
      "user": "Dame los nombres en ingles."
    }
  ]
}
```
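
The payload above maps directly onto the OpenAI chat format: one `system` prompt plus alternating `user`/`assistant` turns. A minimal sketch of how such a payload could be flattened into standard chat-completions messages (the field names come from the example above; the helper itself is hypothetical, not part of the package):

```python
# Minimal sketch: flatten the conversation payload above into OpenAI chat messages.
from typing import Any, Dict, List


def to_chat_messages(payload: Dict[str, Any]) -> List[Dict[str, str]]:
    """Turn the {'system': ..., 'messages': [...]} payload into role/content dicts."""
    messages = [{"role": "system", "content": payload["system"]}]
    for turn in payload["messages"]:
        if "user" in turn:
            messages.append({"role": "user", "content": turn["user"]})
        if "assistant" in turn:
            messages.append({"role": "assistant", "content": turn["assistant"]})
    return messages

# Usage, with `payload` being the parsed JSON above and `client` built as in the
# environment-variables sketch:
# client.chat.completions.create(model="gpt-4o", messages=to_chat_messages(payload))
```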

---

# Example for image conversation.

## Using public published URL for image
```json
{
  "system": "Actua como una maquina lectora de imagenes.\nDevuelve la información sin lenguaje natural, sólo responde lo que se está solicitando.\nEl dispositivo que va a interactuar contigo es una api, y necesita la información sin markdown u otros caracteres especiales.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "https://raw.githubusercontent.com/logyca/python-libraries/main/logyca-ai/logyca_ai/assets_for_examples/file_or_documents/image.png",
        "image_format": "image_url",
        "image_resolution": "auto"
      },
      "type": "image_url",
      "user": "Extrae el texto que recibas en la imagen y devuelvelo en formato json."
    }
  ]
}
```

## Using image content in base64
```json
{
  "system": "Actua como una maquina lectora de imagenes.\nDevuelve la información sin lenguaje natural, sólo responde lo que se está solicitando.\nEl dispositivo que va a interactuar contigo es una api, y necesita la información sin markdown u otros caracteres especiales.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "<base64 image png content>",
        "image_format": "png",
        "image_resolution": "auto"
      },
      "type": "image_base64",
      "user": "Extrae el texto que recibas en la imagen y devuelvelo en formato json."
    }
  ]
}
```
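
The `<base64 image png content>` placeholder stands for the raw file bytes encoded as base64. A minimal sketch of producing that value and the matching `additional_content` block (field names taken from the example above; the file path is a placeholder):

```python
# Minimal sketch: base64-encode a local PNG for the image_base64 payload above.
import base64


def encode_file_b64(path: str) -> str:
    with open(path, "rb") as handle:
        return base64.b64encode(handle.read()).decode("utf-8")


additional_content = {
    "base64_content_or_url": encode_file_b64("image.png"),
    "image_format": "png",
    "image_resolution": "auto",
}
```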

---

# Example for pdf conversation.

## Using public published URL for pdf
```json
{
  "system": "No uses lenguaje natural para la respuesta.\nDame la información que puedas extraer de la imagen en formato JSON.\nSolo devuelve la información, no formatees con caracteres adicionales la respuesta.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "https://raw.githubusercontent.com/logyca/python-libraries/main/logyca-ai/logyca_ai/assets_for_examples/file_or_documents/pdf.pdf",
        "pdf_format": "pdf_url"
      },
      "type": "pdf_url",
      "user": "Dame los siguientes datos: Expediente, radicación, Fecha, Numero de registro, Vigencia."
    }
  ]
}
```

## Using pdf content in base64
```json
{
  "system": "No uses lenguaje natural para la respuesta.\nDame la información que puedas extraer de la imagen en formato JSON.\nSolo devuelve la información, no formatees con caracteres adicionales la respuesta.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "<base64 pdf content>",
        "pdf_format": "pdf"
      },
      "type": "pdf_base64",
      "user": "Dame los siguientes datos: Expediente, radicación, Fecha, Numero de registro, Vigencia."
    }
  ]
}
```
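
The PDF variants work the same way: the file is referenced by URL or base64-encoded into `base64_content_or_url`, and its text is extracted before being sent to the model. For local experimentation, a minimal sketch of plain PDF text extraction with `pypdf` (an illustrative assumption, not necessarily the library's internal implementation):

```python
# Minimal sketch: extract the plain text of a PDF, page by page, with pypdf.
from pypdf import PdfReader


def pdf_text(path: str) -> str:
    reader = PdfReader(path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)


if __name__ == "__main__":
    print(pdf_text("pdf.pdf"))
```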

# Example for plain_text conversation.

## Using public published URL for plain_text
```json
{
  "system": "No uses lenguaje natural para la respuesta.\n                Dame la información que puedas extraer en formato JSON.\n                Solo devuelve la información, no formatees con caracteres adicionales la respuesta.\n                Te voy a enviar un texto que representa información en formato csv.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "https://raw.githubusercontent.com/logyca/python-libraries/main/logyca-ai/logyca_ai/assets_for_examples/file_or_documents/plain_text.csv",
        "file_format": "plain_text_url"
      },
      "type": "plain_text_url",
      "user": "Dame los siguientes datos de la primera fila del documento: Expediente, radicación, Fecha, Numero de registro, Vigencia.\n                A partir de la fila 2 del documento, suma los valores de la columna Valores_A.\n                A partir de la fila 2 del documento, Suma los valores de la columna Valores_B."
    }
  ]
}
```

## Using plain_text content in base64
```json
{
  "system": "No uses lenguaje natural para la respuesta.\n                Dame la información que puedas extraer en formato JSON.\n                Solo devuelve la información, no formatees con caracteres adicionales la respuesta.\n                Te voy a enviar un texto que representa información en formato csv.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "<base64 csv content>",
        "file_format": "csv"
      },
      "type": "plain_text_base64",
      "user": "Dame los siguientes datos de la primera fila del documento: Expediente, radicación, Fecha, Numero de registro, Vigencia.\n                A partir de la fila 2 del documento, suma los valores de la columna Valores_A.\n                A partir de la fila 2 del documento, Suma los valores de la columna Valores_B."
    }
  ]
}
```
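
As with the other `*_base64` types, the CSV is sent as the base64 encoding of its raw bytes. A minimal sketch (field names from the example above; the CSV rows are hypothetical):

```python
# Minimal sketch: base64-encode CSV text for the plain_text_base64 payload above.
import base64

csv_text = "Valores_A,Valores_B\n10,20\n5,7\n"  # hypothetical rows for illustration
additional_content = {
    "base64_content_or_url": base64.b64encode(csv_text.encode("utf-8")).decode("utf-8"),
    "file_format": "csv",
}
```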

# Example for Microsoft files conversation (Word, Excel).

## Using public published URL for Excel file
```json
{
  "system": "No uses lenguaje natural para la respuesta.\n                Dame la información que puedas extraer de la imagen en formato JSON.\n                Solo devuelve la información, no formatees con caracteres adicionales la respuesta.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "https://raw.githubusercontent.com/logyca/python-libraries/main/logyca-ai/logyca_ai/assets_for_examples/file_or_documents/ms_excel.xlsx",
        "file_format": "ms_url"
      },
      "type": "ms_url",
      "user": "Dame los siguientes datos: Expediente, radicación, Fecha, Numero de registro, Vigencia."
    }
  ]
}
```

## Using Excel file content in base64
```json
{
  "system": "No uses lenguaje natural para la respuesta.\n                Dame la información que puedas extraer de la imagen en formato JSON.\n                Solo devuelve la información, no formatees con caracteres adicionales la respuesta.",
  "messages": [
    {
      "additional_content": {
        "base64_content_or_url": "<base64 xlsx content>",
        "file_format": "xlsx"
      },
      "type": "ms_base64",
      "user": "Dame los siguientes datos: Expediente, radicación, Fecha, Numero de registro, Vigencia."
    }
  ]
}
```
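
Since 0.2.3 the Excel text extraction can be limited to visible sheets (see the changelog below). A minimal sketch of that idea with `openpyxl` (illustrative only; the option name in logyca-ai's own API is not shown here):

```python
# Minimal sketch: read only the visible sheets of a workbook with openpyxl.
from typing import Dict, List

from openpyxl import load_workbook


def sheet_rows(path: str, only_visible: bool = True) -> Dict[str, List[tuple]]:
    workbook = load_workbook(path, data_only=True)  # data_only: cell values, not formulas
    rows_by_sheet = {}
    for sheet in workbook.worksheets:
        if only_visible and sheet.sheet_state != "visible":
            continue  # skip hidden and "very hidden" sheets
        rows_by_sheet[sheet.title] = list(sheet.iter_rows(values_only=True))
    return rows_by_sheet
```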


---

# Semantic Versioning

logyca_ai < MAJOR >.< MINOR >.< PATCH >

* **MAJOR**: version when you make incompatible API changes
* **MINOR**: version when you add functionality in a backwards compatible manner
* **PATCH**: version when you make backwards compatible bug fixes

## Definitions for releasing versions
* https://peps.python.org/pep-0440/

    - X.YaN (Alpha release): Identify and fix early-stage bugs. Not suitable for production use.
    - X.YbN (Beta release): Stabilize and refine features. Address reported bugs. Prepare for official release.
    - X.YrcN (Release candidate): Final version before official release. Assumes all major features are complete and stable. Recommended for testing in non-critical environments.
    - X.Y (Final release/Stable/Production): Completed, stable version ready for use in production. Full release for public use.
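
A quick way to check that ordering is the `packaging` library, which implements PEP 440 (shown only to illustrate how the pre-release labels sort):

```python
# Minimal sketch: PEP 440 pre-release labels sort before the final release.
from packaging.version import Version

assert Version("0.2.0a1") < Version("0.2.0b1") < Version("0.2.0rc1") < Version("0.2.0")
assert Version("0.2.0a1").is_prerelease and not Version("0.2.0").is_prerelease
```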

---

# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## Types of changes

- Added for new features.
- Changed for changes in existing functionality.
- Deprecated for soon-to-be removed features.
- Removed for now removed features.
- Fixed for any bug fixes.
- Security in case of vulnerabilities.

## [0.0.1aX] - 2024-08-02
### Added
- First tests using pypi.org in develop environment.

## [0.1.0] - 2024-08-02
### Added
- Completion of testing and launch into production.

## [0.1.1] - 2024-08-16
### Added
- The PDF text-extraction functions are refactored to work from disk, reducing RAM usage, and methods are added to extract text from images within the pages of PDF files.

## [0.2.0] - 2024-08-30
### Added
- New feature for attaching documents with the txt, csv, docx, or xlsx extensions.

## [0.2.1] - 2024-09-16
### Added
- New tiktoken-based function to count tokens and check model capacity, returning whether the request meets the maximum_request_tokens requirements for both input and output.
### Fixed
- Extraction of Excel files to the json, csv, and list output formats.

## [0.2.2] - 2024-10-22
### Added
- New functions to extract images from documents as lists of base64 strings: extract_images_from_pdf_file, extract_images_from_docx_file.
- The Swagger documentation of the FastAPI example is improved, adding the just_extract_images parameter to the POST method to use the new document image extraction features.

## [0.2.3] - 2024-10-31
### Added
- New functionality when extracting text from Excel: you can choose to extract only visible sheets or all sheets.

## [0.2.4] - 2024-11-01
### Fixed
- Minor adjustment when extracting images from an Excel file: the file extension is now lowercase in the result.



            
