askquinta

Name: askquinta
Version: 2.1.0
Home page: https://github.com/paper-indonesia/composer
Summary: Package for daily usage of data team at Paper.id
Author: Paper Data Team
Upload time: 2024-04-29 11:22:41

# 🗣️ askquinta: Data Handling and Messaging Library

`askquinta` is a versatile Python library designed to simplify data handling tasks, including connecting to data sources such as BigQuery, Google Sheets, MySQL, and ArangoDB. It also provides the ability to send messages through email, Slack, and Telegram. The library aims to streamline your data processing and communication workflows.

## Installation 📩

You can install `askquinta` from the private GitHub repository with:
```bash
pip install git+https://token:<your-access-token>@github.com/paper-indonesia/askquinta.git
```
Because the repository on GitHub is private, each user needs a GitHub personal access token.
For more information, see [managing your personal access tokens](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens).

Or install the public version from PyPI:
```bash
pip install askquinta
```

## Features

### Connect to BigQuery 🗼
The library allows you to connect to Google BigQuery and execute SQL queries.

```python
from askquinta import About_BQ

"""
    credentials_loc (str): The path to the directory containing credentials.
        By default, the credentials_loc will be obtained from the environment variable 'bq_creds_file'|'bq_config_testing_google_cred_location'|bq_config_prod_google_cred_location.
    project_id (str, optional): The Google Cloud project ID.
        By default, the project_id will be obtained from the environment variable 'bq_projectid'|bq_config_testing_projectid|bq_config_prod_projectid.
    location (str, optional): The Google Cloud project ID.
        By default, the location will be obtained from the environment variable 'bq_location'|bq_config_location or will be 'asia-southeast1'
"""

# If environment variables are not set, you can pass the connection details manually
BQ = About_BQ(project_id='your_projectid',
              credentials_loc='/path/to/credential_file.json',
              location='database_location')

# Or set the environment variables first and let About_BQ pick them up
import os
os.environ['bq_creds_file'] = '/path/to/credential_file.json'
os.environ['bq_projectid'] = 'your_projectid'
os.environ['bq_location'] = 'database_location'
BQ = About_BQ()

# Pull data
query = '''SELECT * FROM datascience_public.predicted_item_category LIMIT 10'''
df = BQ.to_pull_data(query=query)

# Push data
BQ.to_push_data(data=df, dataset_name='datascience', table_name='testing_aril', if_exists='replace')

# Update data
update_values = {'keys': "'value'"}              # column name -> new value
condition = "key_condition = 'value_condition'"  # SQL condition (WHERE clause)
BQ.to_update_data(dataset_name='dataset_name', table_name='table_name', update_values=update_values, condition=condition, show=True)

```
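The update helper takes a dictionary mapping column names to new values plus a SQL condition string. For illustration, here is a hedged sketch of a slightly more realistic call; the dataset and table names reuse the placeholders above, while the column names `status` and `load_date` are assumptions for this example, not a real schema.

```python
from askquinta import About_BQ

BQ = About_BQ()  # assumes the environment variables above are already set

# Mark matching rows as processed (placeholder column names; the string value
# keeps its quotes so it is passed as a SQL literal, as in the example above)
update_values = {'status': "'processed'"}
condition = "load_date = '2024-04-28'"
BQ.to_update_data(dataset_name='datascience', table_name='testing_aril',
                  update_values=update_values, condition=condition, show=True)
```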

### Connect to Gsheet 📋
The library allows you to connect to Google Sheets to pull, push, and update data.

```python
from askquinta import About_Gsheet

"""
    credentials_path (str): Path to the JSON credentials file.
        By default, the credentials_path will be obtained from the environment variable 'gsheet_config_cred_location'
"""

# If the environment variable is not set, you can pass the credentials path manually
gsheet = About_Gsheet(credentials_path='/path/to/credentials.json')

# Or set the environment variable first and let About_Gsheet pick it up
import os
os.environ['gsheet_config_cred_location'] = '/path/to/credentials.json'
gsheet = About_Gsheet()

# Push data
import pandas as pd

data_to_push = pd.DataFrame({'Column1': [1, 2, 3], 'Column2': ['A', 'B', 'C']})
spreadsheet_name = 'Example Spreadsheet'
worksheet_name = 'Example Worksheet'
gsheet.to_push_data(data_to_push, spreadsheet_name, worksheet_name, append=True)

# Pull data
data_pulled = gsheet.to_pull_data(spreadsheet_name, worksheet_name)
print(data_pulled)

# Update data
data_to_update = [['New Value 1', 'New Value 2']]
cell_range = 'A1:B1'
gsheet.to_update_data(data_to_update, spreadsheet_name, cell_range, worksheet_name)
```
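The pull and push helpers compose naturally. Below is a minimal sketch of a pull, modify, push round trip; it assumes `to_pull_data` returns a pandas DataFrame (as the examples above suggest), and the spreadsheet, worksheet, and column names are the placeholders used earlier.

```python
import pandas as pd
from askquinta import About_Gsheet

gsheet = About_Gsheet(credentials_path='/path/to/credentials.json')

# Pull the current contents (assumed to come back as a pandas DataFrame)
df = gsheet.to_pull_data('Example Spreadsheet', 'Example Worksheet')

# Modify locally, e.g. add a derived numeric column
df['Total'] = pd.to_numeric(df['Column1'], errors='coerce').fillna(0)

# Push the updated frame back, appending below the existing rows
gsheet.to_push_data(df, 'Example Spreadsheet', 'Example Worksheet', append=True)
```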

### Connect to MySQL 🎡
The library allows you to connect to MySQL and execute SQL queries.

```python
from askquinta import About_MySQL

"""
    host (str): The host IP or domain name for the MySQL server.
        By default, the host will be obtained from the environment variable 'mysql_config_ip_host'.
    port (int): The port number for the MySQL server.
        By default, the port will be obtained from the environment variable 'mysql_config_ip_port'.
    username (str): The username to connect to the MySQL server.
        By default, the username will be obtained from the environment variable 'mysql_config_user_name'.
    password (str): The password associated with the username.
        By default, the password will be obtained from the environment variable 'mysql_config_user_password'.
"""


# Set up the About_MySQL object with environment variables if available
MySQL = About_MySQL(database_name='database_name')

# If environment variables are not set, you can set connection details manually
MySQL = About_MySQL(
    host='host',
    port=123,  # Replace with an integer port number
    username='your_username',
    password='password',
    database_name='database_name'
)

query = """
    SELECT *
    FROM <your_table>
    LIMIT 10
"""

result = MySQL.to_pull_data(query)
result
```
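Because the connectors exchange pandas DataFrames in the examples above, they can be chained. The sketch below copies a small MySQL query result into BigQuery; it assumes the environment variables for both connectors are already set, and the table, dataset, and database names are placeholders.

```python
from askquinta import About_MySQL, About_BQ

mysql = About_MySQL(database_name='database_name')
bq = About_BQ()

# Pull a small sample from MySQL (placeholder table name)
df = mysql.to_pull_data("SELECT * FROM your_table LIMIT 1000")

# Push the result into a BigQuery table, replacing any previous copy
bq.to_push_data(data=df, dataset_name='datascience',
                table_name='mysql_sample', if_exists='replace')
```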

### Connect to ArangoDB 🎠
The library allows you to connect to ArangoDB and execute queries.

```python
from askquinta import About_ArangoDB

"""
    arango_url (str): The URL of the ArangoDB server.
        By default, the arango_url will be obtained from the environment variable 'arango_replicate_config_url'.
    username (str): The username to connect to the ArangoDB server.
        By default, the username will be obtained from the environment variable 'arango_replicate_config_username'.
    password (str): The password associated with the username.
        By default, the password will be obtained from the environment variable 'arango_replicate_config_password'.
"""
# Set up the About_ArangoDB object with environment variables if available

ArangoDB = About_ArangoDB()

# If environment variables are not set, you can set connection details manually
ArangoDB = About_ArangoDB(arango_url = 'https://link_arango',
                          username = 'username',
                          password = 'password')

# Pull data
result = ArangoDB.to_pull_data(collection_name='collection_name',
                               query="""FOR i IN <collection on arango>
                                            RETURN i""",
                               batch_size=10, max_level=None)

# Push data (data is a pandas DataFrame)
ArangoDB.to_push_data(data=dataframe,
                      database_name='database_name',
                      collection_name='collection_name')
```

### Blast Message ✉️
The library allows you to send messages via email, Telegram, and Slack.

```python
from askquinta import About_Blast_Message
"""
    creds_email (str): The path to the file containing credentials in pickle format.
        By default, the creds_email will be obtained from the environment variable 'blast_message_creds_email_file'.
    token_telegram_bot (str, optional): Token Telegram Bot
        By default, the token_telegram_bot will be obtained from the environment variable 'blast_message_token_telegram_bot'.

"""
# Telegram
telegram = About_Blast_Message(token_telegram_bot='your_token_telegram_bot')
telegram.send_message_to_telegram(to='chat_id', message='message')

# Email
email = About_Blast_Message(creds_email='/path/to/creds_file.pickle')
email.send_message_to_email(to='recipient@example.com',
                            subject='Hello',
                            message='This is a test email.',
                            cc='cc@example.com')

# Slack
slack = About_Blast_Message(token_slack_bot='token_slack_bot')
slack.send_message_to_slack(to='channel_name or member id', message='message')

```
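A common pattern is combining a connector with a messenger, for example posting a quick data summary to Slack. The sketch below is illustrative only: it assumes BigQuery credentials and the Slack bot token are configured as shown above, that `to_pull_data` returns a pandas DataFrame, and the query, channel name, and column alias `n` are placeholders.

```python
from askquinta import About_BQ, About_Blast_Message

bq = About_BQ()
slack = About_Blast_Message(token_slack_bot='token_slack_bot')

# Count rows in the example table (placeholder query)
df = bq.to_pull_data(query="SELECT COUNT(*) AS n FROM datascience_public.predicted_item_category")
row_count = int(df['n'].iloc[0])

# Post the summary to a channel
slack.send_message_to_slack(to='channel_name',
                            message=f"Daily check: {row_count} predicted items in the table.")
```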

### API 🔥
The `askquinta` library provides helpers for interacting with APIs developed by the data team.
Currently, the available model focuses on item classification.

```python
from askquinta import About_API

"""
    url (str): The URL of the prediction API.
      By default, the url will be obtained from the environment variable 'api_url_item_classification'.
"""

# Set up the About_API object with environment variables if available
api = About_API()

# If environment variables are not set, you can set the API URL manually
api = About_API(url="http://url_to_api/predict")

# Example input data for item classification
item_names = ['mobil honda', 'baju anak', 'pembersih lantai', 'okky jely drink']

# Call the predict_item method to get predictions
predictions = api.predict_item(item_names)
```
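If `predict_item` returns one prediction per input item (an assumption; the exact response shape is not documented here), the results can be paired back with the inputs, for example:

```python
import pandas as pd
from askquinta import About_API

api = About_API(url="http://url_to_api/predict")
item_names = ['mobil honda', 'baju anak', 'pembersih lantai', 'okky jely drink']

# Assumes predict_item returns one prediction per input item, in order
predictions = api.predict_item(item_names)
results = pd.DataFrame({'item_name': item_names, 'prediction': predictions})
print(results)
```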

### NLP 🤟
The library also bundles text utilities: UUID generation, text cleaning (with stemming and lemmatization), hashing, Base64 encoding and decoding, similarity scoring, translation, sentiment, and summarization.
```python
from askquinta import About_NLP

nlp_instance = About_NLP()

#----------Generate UUID----------
uuid_result = nlp_instance.generate_uuid("example.com")
print("Generated UUID:", uuid_result)

#----------Clean Text---------
dirty_text = "   This is Some Example Text with good 21 , the, Punctuation! And some stopwords.   "
cleaned_text = nlp_instance.clean_text(dirty_text, remove_punctuation=True,remove_number=True, remove_stopwords=True)
print("Cleaned Text:", cleaned_text)
text = "Running wolves are better than running"

##Use stemming
cleaned_text_stemmed = nlp_instance.clean_text(text, apply_stemming=True, language = 'english')
print("Steammed Text English:", cleaned_text_stemmed)

##Use lemmatization
cleaned_text_lemmatized = nlp_instance.clean_text(text, apply_lemmatization=True, language = 'english' )
print("Lemmatized Text English:", cleaned_text_lemmatized)

text = "kami berlari lari kesana kemari bermain dan berenang bersama budi"
cleaned_text_stemmed = nlp_instance.clean_text(text, apply_stemming=True,language='indonesia')
cleaned_text_lemmatized = nlp_instance.clean_text(text, apply_lemmatization=True ,language='indonesia')
print("Steammed Text indo:", cleaned_text_stemmed)
print("Lemmatized Text indo:", cleaned_text_lemmatized)

#-------Hash Value---------
original_value = "my_secret_password"
hashed_value = nlp_instance.hash_value(original_value)
print("Hashed value:", hashed_value)

verified = nlp_instance.verify_hash('my_secret_password', hashed_value)
print("verified value:", verified)


#------Encode and Decode Base64-------
encoded_value = nlp_instance.encode_base64(original_value)
print("Encoded value:", encoded_value)
decoded_value = nlp_instance.decode_base64(encoded_value)
print("Decoded value:", decoded_value)

#------Calculate Similarity Text Score-------
text1 = "what do you mean?"
text2 = "what do you need?"
for metric in ['jaccard', 'edit', 'cosine', 'levenshtein', 'jarowinkler', 'tfidf_cosine']:
    score = nlp_instance.similarity_text_scoring(text1, text2, similarity_metric=metric)
    print(metric, score)

#------Translate Text--------
text_to_translate = "Hello, how are you?"
target_language = 'es'  # Spanish
translated_text = nlp_instance.translate(text_to_translate, target_language = target_language)
print("Translated text:", translated_text)

#------Predict Sentiment--------
sentiment_text = nlp_instance.sentiment(text_to_translate)
print("Sentiment text:", sentiment_text)

#--------Summarize Text---------

summarized_text = nlp_instance.summarize_text(text = "your long text", ratio = 0.1)
print(summarized_text)
```
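These utilities combine naturally. For instance, `clean_text` and `similarity_text_scoring` can be used together to flag near-duplicate item names. A minimal sketch follows, assuming the methods behave as shown above; the item names and the 0.8 threshold are arbitrary illustrations.

```python
from itertools import combinations
from askquinta import About_NLP

nlp = About_NLP()

item_names = ['Pembersih Lantai 1L', 'pembersih lantai 1 liter', 'Baju Anak', 'Mobil Honda']

# Normalize the names first, then compare every pair
cleaned = [nlp.clean_text(name, remove_punctuation=True, remove_number=True) for name in item_names]

for (i, a), (j, b) in combinations(enumerate(cleaned), 2):
    score = nlp.similarity_text_scoring(a, b, similarity_metric='jarowinkler')
    if score >= 0.8:  # arbitrary threshold for this illustration
        print(f"Possible duplicates: {item_names[i]!r} ~ {item_names[j]!r} (score={score:.2f})")
```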

## Contributing 👩🏻‍👨🏻‍👦🏻‍👧🏻

Contributions to askquinta are welcome!

If you find a bug or want to add new features, please submit a pull request on [GitHub](https://github.com/paper-indonesia/askquinta/tree/main).

            
