| Field | Value |
| --- | --- |
| Name | flexible-scorer |
| Version | 0.1.14 |
| home_page | None |
| Summary | A flexible scoring library using OpenAI models. |
| upload_time | 2024-09-24 09:23:24 |
| maintainer | None |
| docs_url | None |
| author | Michael |
| requires_python | >=3.6 |
| license | None |
| keywords | None |
| VCS | None |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# Flexible Scorer Library
Flexible Scorer is a Python library that allows you to evaluate and score text content based on custom criteria using OpenAI's GPT models. It provides a systematic way to assess texts, taking advantage of AI's capabilities to interpret and analyze content according to user-defined parameters.
## Features
- **Customizable Criteria**: Score texts based on any criteria you define (e.g., humor, clarity, relevance)
- **Scalable Scoring System**: Utilizes a 1 to 10 scale for nuanced evaluations
- **OpenAI GPT Integration**: Leverages powerful language models for deep text analysis
- **Probability Analysis**: Computes weighted scores using token log probabilities (see the sketch after this list)
- **Visualization Tools**: Includes functions to plot and visualize scoring results using Plotly
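
The weighted-scoring internals are not documented here, but the underlying idea is standard: ask the model for a score token on the 1 to 10 scale, read the log probabilities of the candidate tokens, and take the probability-weighted average. A minimal sketch, assuming you already have per-token logprobs from the API (`weighted_score` and its input format are illustrative, not the library's actual internals):

```python
import math

def weighted_score(top_logprobs):
    """Collapse logprobs over the score tokens "1".."10" into one
    expected score, normalized to the 0-1 range."""
    # Keep only tokens that parse as valid scores on the 1-10 scale.
    candidates = {int(tok): lp for tok, lp in top_logprobs.items()
                  if tok.strip().isdigit() and 1 <= int(tok) <= 10}
    # Convert logprobs back to probabilities, renormalize over the kept tokens,
    # and take the probability-weighted average score.
    probs = {s: math.exp(lp) for s, lp in candidates.items()}
    total = sum(probs.values())
    expected = sum(s * p for s, p in probs.items()) / total
    return (expected - 1) / 9  # map the 1-10 scale onto 0-1

# Example: the model is 70% sure the score is 8 and 30% sure it is 6,
# so the expected score is 7.4, which normalizes to about 0.71.
print(weighted_score({"8": math.log(0.7), "6": math.log(0.3)}))
```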
## Installation
Install via pip:
```bash
pip install flexible-scorer
```
### Dependencies
- numpy
- openai
- plotly
- scipy
## Getting Started
### Set the OpenAI API key as an environment variable
The library reads your OpenAI API key from the environment.
#### a. For Windows Users
```commandline
set OPENAI_API_KEY=your-api-key-here
```
#### b. For macOS and Linux Users
```bash
export OPENAI_API_KEY=your-api-key-here
```
Replace 'your-api-key-here' with your actual OpenAI API key, which you can obtain from your OpenAI account.
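
If you launch your script from an environment where the variable might be missing, a quick sanity check before the first API call can save a confusing error (this snippet is an optional convenience, not part of the library):

```python
import os

# Fail fast with a clear message rather than a cryptic error on the first API call.
if not os.environ.get("OPENAI_API_KEY"):
    raise RuntimeError("OPENAI_API_KEY is not set")
```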
### Basic Usage Example
```python
from flexible_scorer import FlexibleScorer
# Define your evaluation criteria
criteria = "humor"
# Initialize the scorer with the criteria
scorer = FlexibleScorer(criteria)
# Texts to evaluate
texts = [
    "Why don't scientists trust atoms? Because they make up everything!",
    "This is a serious statement without any humor.",
    "Why did the math book look sad? Because it had too many problems."
]
# Additional instructions (optional)
additional_instructions = "Consider clever wordplay and puns in your evaluation."
# Score the texts
scores = []
for text in texts:
    score = scorer.score(text, additional_instructions)
    scores.append(score)
    print(f"Text: {text}\nScore: {score}\n")
# Plot the results
scorer.plot_results(texts, scores)
```
- __FlexibleScorer__: The main class used to score texts based on your criteria
- __criteria__: A string defining what aspect you want to evaluate (e.g., "humor", "clarity")
- __score()__: Method to evaluate a single text. Optionally, you can provide additional instructions to guide the evaluation
- __plot_results()__: Method to visualize the scores of multiple texts
## API Reference
### FlexibleScorer
Initialization:
```python
scorer = FlexibleScorer(criteria, model='gpt-4')
```
- __criteria (str)__: The criteria upon which to evaluate the text
- __model (str, optional)__: The OpenAI GPT model to use (default is 'gpt-4')
### Methods
- __score(text, additional_instructions='')__
  - __text (str)__: The text content to evaluate
  - __additional_instructions (str, optional)__: Extra guidelines for the evaluation
  - __Returns (float)__: A normalized score between 0 and 1
- __plot_results(texts, scores)__
  - __texts (list of str)__: A list of the text contents evaluated
  - __scores (list of float)__: Corresponding scores for the texts
  - __Displays__: An interactive bar chart of the results
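
For example, combining the `model` argument with `score()` (the criterion, text, and instructions below are illustrative):

```python
from flexible_scorer import FlexibleScorer

# Pass the model explicitly; 'gpt-4' is also the documented default.
scorer = FlexibleScorer("clarity", model="gpt-4")

score = scorer.score(
    "The function returns a normalized value between 0 and 1.",
    "Penalize ambiguous pronouns.",
)
print(f"Clarity score: {score:.2f}")  # a float between 0 and 1
```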
## OpenAI API Usage
This library uses the OpenAI API under the hood. Ensure you comply with OpenAI's Usage Policies when using this package.
## License
This project is licensed under the MIT License.
## Contributing
Contributions are welcome! Please open an issue or submit a pull request on GitHub.
## Acknowledgments
- Thanks to OpenAI for providing access to their powerful language models
- Inspired by the need for flexible and customizable text evaluation tools
## Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "flexible-scorer",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.6",
    "maintainer_email": null,
    "keywords": null,
    "author": "Michael",
    "author_email": "miryaboy@gmail.com",
    "download_url": "https://files.pythonhosted.org/packages/67/c2/a224f72f7e9d678f3dd9b9d49d46dd89ab98262aff0d29eaf6301eabf07f/flexible_scorer-0.1.14.tar.gz",
    "platform": null,
"description": "# Flexible Scorer Library\n\nFlexible Scorer is a Python library that allows you to evaluate and score text content based on custom criteria using OpenAI's GPT models. It provides a systematic way to assess texts, taking advantage of AI's capabilities to interpret and analyze content according to user-defined parameters.\n\n## Features \n- **Customizable Criteria**:Score texts based on any criteria you define (e.g., humor, clarity, relevance)\n- **Scalable Scoring System**: Utilizes a 1 to 10 scale for nuanced evaluations\n- **OpenAI GPT Integration**: Leverages powerful language models for deep text analysis\n- **Probability Analysis**: Computes weighted scores using token log probabilities\n- **Visualization Tools**: Includes functions to plot and visualize scoring results using Plotly\n\n## Installation\n\nInstallation through PIP manager\n```bash\npip install flexiblescorer\n```\n### Dependencies\n- numpy\n- openai\n- plotly\n- scipy\n\n# Getting Started\n\n## Set OpenAI API key as environment\nEnter OpenAI API key to use model\n\n### a. For Windows Users\n```commandline\n set OPENAI_API_KEY=your-api-key-here\n```\n\n\n### b. For macOS and Linux Users\n```\n export OPENAI_API_KEY=your-api-key-here\n```\n\nReplace 'your-api-key-here' with your actual OpenAI API key, which you can obtain from your OpenAI account\n\n## Basic Usage Example\n```\nfrom flexible_scorer import FlexibleScorer\n\n# Define your evaluation criteria\ncriteria = \"humor\"\n\n# Initialize the scorer with the criteria\nscorer = FlexibleScorer(criteria)\n\n# Texts to evaluate\ntexts = [\n \"Why don't scientists trust atoms? Because they make up everything!\",\n \"This is a serious statement without any humor.\",\n \"Why did the math book look sad? Because it had too many problems.\"\n]\n\n# Additional instructions (optional)\nadditional_instructions = \"Consider clever wordplay and puns in your evaluation.\"\n\n# Score the texts\nscores = []\nfor text in texts:\n score = scorer.score(text, additional_instructions)\n scores.append(score)\n print(f\"Text: {text}\\nScore: {score}\\n\")\n\n# Plot the results\nscorer.plot_results(texts, scores)\n```\n- __FlexibleScorer__: The main class used to score texts based on your criteria\n- __criteria__: A String defining what aspect you want to evaluate (e.g., \"humor\", \"clarity\")\n- __score()__: Method to evaluate a single text. Optionally, you can provide additional instructions to guide the evaluation\n- __plot_results()__: Method to visualize the scores of multiple texts\n## API Reference\n### FlexibleScorer\nInitialization\n```\n scorer = FlexibleScorer(criteria, model='gpt-4')\n```\n- __criteria (str)__: The criteria upon which to evaluate the text\n- __model (str, optional)__: The OpenAI GPT model to use (default is 'gpt-4')\n\n### Methods\n- __score(text, additional_instructions='')__\n - __text(str)__: The text content to evaluate\n - __additional_instructions (str, optional)__: Extra guidelines for the evaluations\n - __Returns (float)__: A normalized score between 0 and 1\n- __plot_results(texts, scores)__\n - __texts (list of str)__: A list of text contents evaluated\n - __scores (list of float)__: Corresponding scores for the texts\n - __Displays__: An interactive bar chart of the results\n \n## OpenAI API Usage\nThis library uses the OpenAI API under the hood. Ensure you comply with OpenAI's Usage Policies when using this package\n\n## License\nThis project is licensed under the MIT License\n\n## Contributing \nContributions are welcome! 
Please open an issue or submit a pull request on Github\n\n## Acknowledgments \n- Thanks to OpenAI for providing access to their powerful language models\n- Inspired by the need for flexible and customizable text evaluation tools\n\n",
"bugtrack_url": null,
"license": null,
"summary": "A flexible scoring library using OpenAI models.",
"version": "0.1.14",
"project_urls": null,
"split_keywords": [],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "3cc37868ba6563cfd79d211bdb5f9ffce5401e8e811ff63ee32bb5d9a06c44c0",
"md5": "945ec703f65fa2bf11b29d17eabef3d8",
"sha256": "a31d9ecd56c6c9089ff3b27b620d0284fee5010ce05e47aa63151f8cddb5a871"
},
"downloads": -1,
"filename": "flexible_scorer-0.1.14-py3-none-any.whl",
"has_sig": false,
"md5_digest": "945ec703f65fa2bf11b29d17eabef3d8",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.6",
"size": 5816,
"upload_time": "2024-09-24T09:23:22",
"upload_time_iso_8601": "2024-09-24T09:23:22.728939Z",
"url": "https://files.pythonhosted.org/packages/3c/c3/7868ba6563cfd79d211bdb5f9ffce5401e8e811ff63ee32bb5d9a06c44c0/flexible_scorer-0.1.14-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "67c2a224f72f7e9d678f3dd9b9d49d46dd89ab98262aff0d29eaf6301eabf07f",
"md5": "58f16ef7e1fea8cf8162994f9b6a1477",
"sha256": "4380bb9baa93ac37c0bbeff857e4e56e91635a0e87b6f885ddf4072b98c53cea"
},
"downloads": -1,
"filename": "flexible_scorer-0.1.14.tar.gz",
"has_sig": false,
"md5_digest": "58f16ef7e1fea8cf8162994f9b6a1477",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.6",
"size": 5472,
"upload_time": "2024-09-24T09:23:24",
"upload_time_iso_8601": "2024-09-24T09:23:24.049784Z",
"url": "https://files.pythonhosted.org/packages/67/c2/a224f72f7e9d678f3dd9b9d49d46dd89ab98262aff0d29eaf6301eabf07f/flexible_scorer-0.1.14.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-24 09:23:24",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "flexible-scorer"
}