process-supervision-torch

Name: process-supervision-torch
Version: 0.0.3
Home page: https://github.com/kyegomez/Lets-Verify-Step-by-Step
Summary: Process SuperVision - Pytorch
Upload time: 2023-11-26 05:14:57
Author: Kye Gomez
Requires Python: >=3.8.1,<4.0.0
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: No requirements were recorded.
# "Let’s Verify Step by Step"
Implementation of "Improving Mathematical Reasoning with Process Supervision" by OpenAI.

## Install
`pip3 install --upgrade process-supervision-torch`


## Usage

### GPT4 without a tokenizer
```python
import torch 
from process_supervision.main import GPT4

# Random token IDs standing in for tokenized text
text = torch.randint(0, 20000, (1, 1024))

# Initialize the model
model = GPT4()
output = model(text)
print(output)
```
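The model above consumes a `(batch, seq_len)` tensor of integer token IDs. If you start from raw text, you need a tokenizer to produce those IDs first. The sketch below uses a toy whitespace tokenizer purely for illustration — the vocabulary and helper functions are hypothetical, not part of `process-supervision-torch`; in practice you would use a real tokenizer such as one from `tiktoken` or `transformers`.

```python
# Toy whitespace tokenizer, for illustration only: maps each word
# seen in a corpus to an integer ID. NOT part of the package.
def build_vocab(corpus):
    vocab = {}
    for word in corpus.split():
        vocab.setdefault(word, len(vocab))
    return vocab

def encode(text, vocab):
    # Convert raw text into a list of integer token IDs.
    return [vocab[w] for w in text.split()]

vocab = build_vocab("verify each step of the solution")
ids = encode("verify the solution", vocab)
print(ids)  # → [0, 4, 5]
```

Wrapping the result as `torch.tensor([ids])` yields the `(1, seq_len)` integer tensor the model expects, analogous to the `torch.randint` placeholder above.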


### `PRM`
```python
import torch
from process_supervision.prm import PRM
from swarms.models import OpenAIChat
from process_supervision.generator import MathDataGenerator
import os
from dotenv import load_dotenv

load_dotenv()

api_key = os.getenv("OPENAI_API_KEY")

# LLM initialization
llm = OpenAIChat(openai_api_key=api_key)

# Math data generator initialization
math_datagenerator = MathDataGenerator(llm, num_iters=10)

# Device initialization
device = 0 if torch.cuda.is_available() else "cpu"

# Model initialization
prm_model = PRM(
    model_name="lvwerra/gpt2-imdb-pos-v2",
    ref_model_name="lvwerra/gpt2-imdb",
    reward_model_name="lvwerra/distilbert-imdb",
    device=device,
)

# Generation arguments
gen_kwargs = {
    "min_length": -1,
    "top_k": 0.0,
    "top_p": 1.0,
    "do_sample": True,
    "pad_token_id": prm_model.tokenizer.eos_token_id,
}
sent_kwargs = {"top_k": None, "function_to_apply": "none", "batch_size": 16}

# Sample queries
queries = ["Sample query 1", "Sample query 2"]
queries = [math_datagenerator.generate_samples(query) for query in queries]

# Generate responses
responses = prm_model.generate_responses(
    queries, gen_len=10, gen_kwargs=gen_kwargs
)

# Score responses
scores = prm_model.score_responses(responses, sent_kwargs)

# Display results
for query, response, score in zip(queries, responses, scores):
    print(f"Query: {query}\nResponse: {response}\nScore: {score}\n")

```
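What makes a process reward model different from an outcome reward model is that it scores each reasoning step, not just the final answer. Following the paper, a solution-level score can then be taken as the product of the per-step correctness probabilities — the probability that every step is correct. A minimal self-contained sketch of that aggregation (the scores here are made-up numbers, not output of the `PRM` class above):

```python
def solution_score(step_probs):
    """Aggregate per-step correctness probabilities into a single
    solution-level score: the product of the per-step probabilities,
    i.e. the probability that every step is correct."""
    score = 1.0
    for p in step_probs:
        score *= p
    return score

# Two candidate solutions with illustrative per-step PRM scores.
good = [0.95, 0.90, 0.92]  # all steps look correct
bad = [0.95, 0.20, 0.92]   # one suspicious middle step
print(solution_score(good))  # ≈ 0.787
print(solution_score(bad))   # ≈ 0.175
```

Note how a single low-confidence step drags the whole solution down — this is the property that lets process supervision penalize plausible-looking answers reached through flawed reasoning.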


### GPT4 + PRM
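In the paper, the generator and the PRM are combined via best-of-N search: sample N candidate solutions from the generator, score each with the PRM, and keep the highest-ranked one. The sketch below shows only that selection step — the candidates and the `scorer` stub are placeholders, not the `GPT4` or `PRM` APIs from this package:

```python
def best_of_n(candidates, scorer):
    """Best-of-N reranking: return the candidate that a reward
    model prefers. `scorer` stands in for a PRM scoring function."""
    return max(candidates, key=scorer)

# Stub scorer for illustration only: prefer the longer answer.
candidates = [
    "2 + 2 = 5",
    "2 + 2 = 4 because adding two pairs gives four",
]
picked = best_of_n(candidates, scorer=len)
print(picked)
```

In a real pipeline, `candidates` would come from `prm_model.generate_responses(...)` and `scorer` from `prm_model.score_responses(...)` as shown in the `PRM` example.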


# Method


# Citation
```bibtex
@misc{lightman2023lets,
   title={Let's Verify Step by Step}, 
   author={Hunter Lightman and Vineet Kosaraju and Yura Burda and Harri Edwards and Bowen Baker and Teddy Lee and Jan Leike and John Schulman and Ilya Sutskever and Karl Cobbe},
   year={2023},
   eprint={2305.20050},
   archivePrefix={arXiv},
   primaryClass={cs.LG}
}

```

# Todo
- [ ] Create the PRM reward model




# License
MIT





            
