# moral-keeper-ai
moral-keeper-ai is an open-source Python program that uses AI to evaluate input text from the following perspectives and output suggestions for text revision:
- Preventing the user's posted text from being offensive to the reader
- Avoiding potential public backlash against the poster
- Reducing the customer-support workload caused by ambiguous or unclear opinion posts
This helps maintain a positive and respectful online presence.
## Technology Used
- OpenAI API
## Supported API Servers
- Azure OpenAI Service
## Recommended Models
- GPT-4o mini
- GPT-4o
- GPT-3.5 Turbo
## Main Features
- Determine if a given sentence is appropriate for posting
- Suggest more appropriate expressions for problematic posts
- Can be called from Python methods
- Usable as an API server via HTTP
## Quick Start
1. Installation
```sh
pip install moral-keeper-ai
```
2. Configuration
Add various settings in .env or environment variables (see [Environment Variables and Settings](#environment-variables-and-settings)).
3. Example Usage
```python
import moral_keeper_ai
judgment, details = moral_keeper_ai.check('The sentence you want to check')
suggested_message = moral_keeper_ai.suggest('The sentence you want to make appropriate for posting')
```
### moral_keeper_ai.check()
Parameters
- content: string: Text to be censored
Return value: Tuple
- judgment: bool: True (No problem), False (Problematic)
- details: list: A list of items that were flagged as problematic if any issues were found
Overview:
This prompt censors the received text as if reviewed by a company's PR manager. The text is evaluated against internally defined criteria, and if any criterion fails, it is judged undesirable.
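A minimal usage sketch of `check()`, following the Quick Start import above (the input text is only illustrative):
```python
import moral_keeper_ai

# Ask whether the text is suitable for posting.
judgment, details = moral_keeper_ai.check('This update is useless and whoever wrote it should be fired.')

if judgment:
    print('OK to post.')
else:
    # `details` lists the criteria the text failed.
    print('Not suitable for posting:')
    for reason in details:
        print(f'- {reason}')
```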
### moral_keeper_ai.suggest()
Parameters
- content: string: Text whose expression will be softened
Return value: String
Overview:
This prompt softens the expression of the received text and returns the softened string.
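A short sketch of `suggest()` under the same assumptions (the input text is only illustrative):
```python
import moral_keeper_ai

# Soften a blunt comment before posting it publicly.
suggested = moral_keeper_ai.suggest('Fix this bug immediately. This is unacceptable.')
print(suggested)  # A more polite rewording of the original text
```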
4. As an API server via HTTP
```bash
moral-keeper-ai-server --port 3000 &
curl -X POST -H "Content-Type: application/json" -d '{"content": "The sentence you want to check"}' http://localhost:3000/check
curl -X POST -H "Content-Type: application/json" -d '{"content": "The sentence you want to make appropriate for posting"}' http://localhost:3000/suggest
```
### `POST /check`
Submit a text string to be judged for appropriateness.
Request:
```json
{
  "content": "The sentence you want to check."
}
```
Response:
```json
{
  "judgement": false,
  "ng_reasons": ["Compliance with company policies", "Use appropriate expressions for public communication"],
  "status": "success"
}
```
- `judgement`: A boolean value indicating whether the submitted text is judged acceptable (true) or unacceptable (false).
- `ng_reasons`: An array of strings that explains why the text was deemed unacceptable. Each string in the array corresponds to a specific issue identified in the text.
- `status`: A string that indicates the result of the API execution. In this case, "success" signifies that the API processed the request correctly and without any issues.
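For clients that prefer Python over curl, here is a minimal sketch using the third-party `requests` library (not a dependency of this package), assuming the server was started on port 3000 as shown above:
```python
import requests

resp = requests.post(
    'http://localhost:3000/check',
    json={'content': 'The sentence you want to check'},
)
body = resp.json()

if body['status'] == 'success' and not body['judgement']:
    # Each entry explains one criterion the text failed.
    for reason in body['ng_reasons']:
        print(reason)
```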
### `POST /suggest`
Submit a text string to have its expression made softer or more polite. The response includes the softened version of the submitted text.
Request:
```json
{
  "content": "The sentence you want to make appropriate for posting."
}
```
Response:
```json
{
  "softened": "The softened sentence the API produced.",
  "status": "success"
}
```
- `softened`: A string that contains the softened version of the text submitted in the request. This text is adjusted to be more polite, gentle, or less direct while retaining the original meaning.
- `status`: A string that indicates the result of the API execution. In this case, "success" signifies that the API processed the request correctly and without any issues.
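The `/suggest` endpoint can be called the same way; a minimal sketch under the same assumptions:
```python
import requests

resp = requests.post(
    'http://localhost:3000/suggest',
    json={'content': 'The sentence you want to make appropriate for posting'},
)
body = resp.json()

if body['status'] == 'success':
    print(body['softened'])  # The softened version of the submitted text
```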
## Environment Variables and Settings
### API Key
```bash
export AZURE_OPENAI_API_KEY='API Key'
```
### Endpoint
```bash
export AZURE_OPENAI_ENDPOINT='Endpoint URL'
```
### Model to Use
```bash
export AZURE_OPENAI_DEPLOY_NAME='Model name/Deployment name'
```
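As noted in the Quick Start, the same settings can also be placed in a `.env` file instead of exported environment variables; a sketch with placeholder values:
```sh
# .env — all values below are placeholders
AZURE_OPENAI_API_KEY='your-api-key'
AZURE_OPENAI_ENDPOINT='https://your-resource.openai.azure.com/'
AZURE_OPENAI_DEPLOY_NAME='your-deployment-name'
```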
## For Developers
### Setup Environment
1. Clone the `moral-keeper-ai` repository from GitHub to your local environment and navigate to the cloned directory.
```sh
git clone https://github.com/c-3lab/moral-keeper-ai.git
cd moral-keeper-ai
```
2. Install poetry if it's not installed yet.
```sh
pip install poetry
```
3. Set up the linters and formatters.
```sh
poetry install
poetry run pre-commit install
```
* From now on, every time you run `git commit`, isort, black, and pflake8 are applied automatically to the staged files. If these tools make any changes, the commit is aborted.
* To run isort, black, and pflake8 manually, use: `poetry run pre-commit`
### Running Tests
1. Run the following command to execute the tests:
```sh
poetry run pytest --cov-report=xml:/tmp/coverage.xml --cov=moral_keeper_ai --cov-branch --disable-warnings --cov-report=term-missing
```
## Directory Structure
<pre>
.
├── moral_keeper_ai: Main module
├── tests: pytest resources
├── docs: Documentation
└── benchmark: Program for benchmark verification
    ├── evaluate: check function
    │   └── data: Test comment files
    └── mitigation: suggest function
        └── data: Test comment files
</pre>
## LICENSE
[MIT license](https://github.com/c-3lab/moral-keeper-ai#MIT-1-ov-file)
## Copyright
Copyright (c) 2024 C3Lab