| Field | Value |
| --- | --- |
| Name | debugging-benchmark |
| Version | 0.0.1 |
| Summary | |
| home_page | |
| upload_time | 2023-12-14 20:36:29 |
| maintainer | |
| docs_url | None |
| author | |
| requires_python | >=3.10 |
| license | |
| keywords | debugging, benchmark |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
[Tests](https://github.com/martineberlein/debugging-benchmark/actions/workflows/tests.yml)
[Coverage](https://github.com/martineberlein/debugging-benchmark/actions/workflows/coverage.yml)
[Coverage Status](https://coveralls.io/github/martineberlein/debugging-benchmark?branch=actions)
# debugging-benchmark
## Quickstart
Generating Passing and Failing Inputs:
```python
from debugging_benchmark.calculator.calculator import CalculatorBenchmarkRepository
from debugging_framework.tools import GrammarBasedEvaluationFuzzer
calc = CalculatorBenchmarkRepository().build()[0]  # build() returns a list of BenchmarkPrograms
param = calc.to_dict()
fuzzer = GrammarBasedEvaluationFuzzer(**param)
fuzzer.run()
gen_inps = fuzzer.get_generated_inputs()
```
Evaluation:
```python
from debugging_benchmark.student_assignments import SieveOfEratosthenesStudentAssignmentBenchmarkRepository
from debugging_framework.evaluator import Evaluation
from debugging_framework.tools import InputsFromHellEvaluationFuzzer
tools = [InputsFromHellEvaluationFuzzer]
subjects = SieveOfEratosthenesStudentAssignmentBenchmarkRepository().build()
result = Evaluation(
    tools=tools,
    subjects=subjects[0:1],
    repetitions=1,
    timeout=3600,
).run()
```
## Deeper Look into the Class Structure
Check out the class diagram for a first overview. Further down in this section, we take a look at some key functions of interest.
#### Class Diagram

`BenchmarkRepository` and `BenchmarkProgram` can be found in `debugging_framework/benchmark.py`.
`StudentAssignmentBenchmarkProgram`, `StudentAssignmentRepository`, and `GCDStudentAssignmentBenchmarkRepository` can be found in `debugging_benchmark/student_assignments.py`.
The faulty programs can be found at `debugging_benchmark/student_assignments/problem_1_GCD`, and the correct implementation at `debugging_benchmark/student_assignments/reference1.py`.
#### build()
Returns a list of `BenchmarkProgram`s and internally calls `_construct_test_program()`. This function is our interface.
#### _construct_test_program()
Returns a `BenchmarkProgram`. Internally calls `construct_oracle()` to construct an oracle for our program.
#### construct_oracle()
This is where the magic happens.
Returns a function that loads the faulty and the correct implementation, executes both on a given input, and compares the results. If they agree, the oracle returns `OracleResult.PASSING`; otherwise, it returns `OracleResult.FAILING`.
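The differential-oracle idea described above can be sketched as follows. This is a minimal, self-contained sketch, not the actual `construct_oracle()` from `debugging_framework` (which loads the faulty and reference implementations from files); the function names and the in-memory setup here are illustrative assumptions:

```python
import math
from enum import Enum


class OracleResult(Enum):
    PASSING = "PASSING"
    FAILING = "FAILING"


def construct_oracle(faulty, reference):
    """Build a differential oracle from a faulty and a reference implementation."""
    def oracle(test_input):
        try:
            faulty_output = faulty(test_input)
        except Exception:
            # The faulty implementation crashing also counts as a failure.
            return OracleResult.FAILING
        # Compare against the reference implementation on the same input.
        if faulty_output == reference(test_input):
            return OracleResult.PASSING
        return OracleResult.FAILING
    return oracle


# Illustrative subject: a gcd with an injected off-by-one bug.
def buggy_gcd(pair):
    a, b = pair
    while b:
        a, b = b, a % b
    return a + 1  # bug: result is always one too large


oracle = construct_oracle(buggy_gcd, lambda pair: math.gcd(*pair))
```

For input `(12, 8)`, the reference yields 4 while `buggy_gcd` yields 5, so the oracle reports `OracleResult.FAILING`.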
#### to_dict()
Returns the benchmark program's parameters as a dictionary; in the Quickstart, this dictionary is unpacked into the fuzzer constructor via `GrammarBasedEvaluationFuzzer(**param)`.
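The call shape the Quickstart relies on can be sketched as follows. All names here (`DemoFuzzer`, the keys in `param`) are hypothetical stand-ins; the real constructor parameters of the `debugging_framework` fuzzers may differ:

```python
# Hypothetical sketch of the to_dict() / **-unpacking pattern from the Quickstart.

class DemoFuzzer:
    """Stand-in for a fuzzer such as GrammarBasedEvaluationFuzzer."""
    def __init__(self, grammar, oracle, initial_inputs):
        self.grammar = grammar
        self.oracle = oracle
        self.initial_inputs = initial_inputs


# What a BenchmarkProgram.to_dict() plausibly bundles together:
param = {
    "grammar": {"<start>": ["<digit>"], "<digit>": ["0", "1"]},
    "oracle": lambda inp: "PASSING",
    "initial_inputs": ["0", "1"],
}

# Same call shape as GrammarBasedEvaluationFuzzer(**param) in the Quickstart.
fuzzer = DemoFuzzer(**param)
```

The `**param` unpacking is why `to_dict()`'s keys must line up with the fuzzer constructor's parameter names.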
## Install, Development, Testing
### Install
If all external dependencies are available, a simple `pip install debugging-benchmark` suffices.
We recommend installing debugging-benchmark inside a virtual environment (virtualenv):
```
python3.10 -m venv venv
source venv/bin/activate
pip install --upgrade pip
pip install debugging-benchmark
```
### Development and Testing
For development and testing, we recommend using debugging-benchmark inside a virtual environment (virtualenv).
By executing the following steps in a standard shell (bash), one can run the debugging-benchmark tests:
```
git clone https://github.com/martineberlein/debugging-benchmark
cd debugging-benchmark
python3.10 -m venv venv
source venv/bin/activate
pip install --upgrade pip
# Run tests
pip install -e .[dev]
python3 -m pytest
```
## Raw data
{
"_id": null,
"home_page": "",
"name": "debugging-benchmark",
"maintainer": "",
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": "",
"keywords": "debugging,benchmark",
"author": "",
"author_email": "Martin Eberlein <ebermart@informatik.hu-berlin.de>, Kai Werk <werkkai@hu-berlin.de>",
"download_url": "https://files.pythonhosted.org/packages/b6/ae/e0c7b100c62d5d6e40bc7a0f10188ffcbd1d2aed61f81f14219e1e521be9/debugging-benchmark-0.0.1.tar.gz",
"platform": null,
"bugtrack_url": null,
"license": "",
"summary": "",
"version": "0.0.1",
"project_urls": {
"Bug Tracker": "https://github.com/martineberlein/debugging-benchmark/issues",
"Homepage": "https://github.com/martineberlein/debugging-benchmark"
},
"split_keywords": [
"debugging",
"benchmark"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "3a502d8f37973242407cecf6c507a7d0cf65606605889f0b2468f32687fa9ef5",
"md5": "d9801a6e01eeb569f99a93bfd7a66492",
"sha256": "681fdd1ecbf62b9c13d7c0a5fbd2c5b61cd4d30dc2c9a3b96020eab55ccee6a4"
},
"downloads": -1,
"filename": "debugging_benchmark-0.0.1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d9801a6e01eeb569f99a93bfd7a66492",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 1836082,
"upload_time": "2023-12-14T20:36:23",
"upload_time_iso_8601": "2023-12-14T20:36:23.959087Z",
"url": "https://files.pythonhosted.org/packages/3a/50/2d8f37973242407cecf6c507a7d0cf65606605889f0b2468f32687fa9ef5/debugging_benchmark-0.0.1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "b6aee0c7b100c62d5d6e40bc7a0f10188ffcbd1d2aed61f81f14219e1e521be9",
"md5": "326f712c505c585fdda2412b1661eccd",
"sha256": "c2e12b22c89cf549291374fe0a1e195ffeba31b9dada117315a5928f7a1e0f9c"
},
"downloads": -1,
"filename": "debugging-benchmark-0.0.1.tar.gz",
"has_sig": false,
"md5_digest": "326f712c505c585fdda2412b1661eccd",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 302580,
"upload_time": "2023-12-14T20:36:29",
"upload_time_iso_8601": "2023-12-14T20:36:29.100944Z",
"url": "https://files.pythonhosted.org/packages/b6/ae/e0c7b100c62d5d6e40bc7a0f10188ffcbd1d2aed61f81f14219e1e521be9/debugging-benchmark-0.0.1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2023-12-14 20:36:29",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "martineberlein",
"github_project": "debugging-benchmark",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "debugging-benchmark"
}