pygradethis

Name: pygradethis
Version: 0.4.0
Home page: https://github.com/nischalshrestha/pygradethis
Summary: Python autograder to facilitate code output and static code checking.
Upload time: 2023-08-21 14:50:59
Author: Nischal Shrestha
Requires Python: >=3.10
License: MIT
Keywords: autograder, education
# pygradethis

[![PyPI version](https://badge.fury.io/py/pygradethis.svg)](https://badge.fury.io/py/pygradethis)
[![PyPI - License](https://img.shields.io/pypi/l/pygradethis)](LICENSE)
[![Downloads](https://pepy.tech/badge/pygradethis)](https://pepy.tech/project/pygradethis)

A Python package to facilitate checking code output or static code checking
using AST analysis. It can either be used with R using the [`learnr`](https://rstudio.github.io/learnr/) package, as 
a mirror of [`gradethis`](https://rstudio-education.github.io/gradethis/index.html) package, or as a standalone package for general Python 
use in educational settings.

**Note**: This package is in early development and will undergo rapid changes.

## Install pygradethis

```
pip install pygradethis
```

## Install Dev Dependencies

```
pip install -e .[dev]
```

## Features

- Simple output checking based on pass / fail conditions with feedback
- Simple static code checking (AST), with feedback on how a student's code differs from the solution

## Output checks

`pygradethis` mimics the cadence of `gradethis::grade_result`. For example, we can
check that the student supplies the `mpg` dataset like so:

```python
grade_result(
  pass_if_equals(mpg, "You also got the mpg dataframe!"),
  fail_if_equals(None, "")
)
```

Internally, these `pass_if_equals(output, message)` or `fail_if_equals(output, message)` conditions are checked sequentially,
in the order of the arguments, and we return on the first condition that matches. The `None` can be used
if you simply want a condition to trigger when none of the other conditions matched.
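The sequential matching described above can be sketched like so. This is a hypothetical re-implementation for illustration only: the `user_result` keyword and the return shape are assumptions, not the package's actual internals.

```python
# Sketch of the pass/fail matching logic -- not pygradethis's real internals.
def pass_if_equals(x, message):
    return ("pass", x, message)

def fail_if_equals(x, message):
    return ("fail", x, message)

def grade_result(*conditions, user_result):
    """Check conditions in argument order; return feedback for the first match.

    A condition with x=None matches any result, acting as a fallback.
    """
    for kind, expected, message in conditions:
        if expected is None or user_result == expected:
            return {"correct": kind == "pass", "message": message}
    return {"correct": False, "message": "No condition matched."}

result = grade_result(
    pass_if_equals(42, "Correct!"),
    fail_if_equals(None, "Not quite."),
    user_result=42,
)
# result -> {"correct": True, "message": "Correct!"}
```

Because the `fail_if_equals(None, ...)` fallback comes last, any unmatched result falls through to it, which mirrors the ordering behavior described above.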

If we match a `pass_if_equals` or `fail_if_equals`, we will present a feedback message wrapped in a convenient `dict`:

```python
dict(
    message = str,
    correct = True|False,
    type = "auto|success|info|warning|error|custom",
    location = "append|prepend|replace"
)
```

The `message` is the feedback text, `correct` is whether or not the student's solution is correct, and `type` is the type of feedback. When 
used with `learnr`, the `location` field determines where the message is situated in the tutorial. For those using 
this package standalone, `location` is not an important field and can be ignored. More on these flags [here](https://rstudio.github.io/learnr/exercises.html#Exercise_Checking).

Internally, a random praise/encouragement message is prepended to any custom message supplied. 

```python
pass_if_equals(x = mpg, message = "You also got the mpg dataframe!")
```
Feedback:
> Bravo! You also got the mpg dataframe!

```python
fail_if_equals(x = None, message = "")
```
Feedback:
> Try it again. You get better each time.
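The praise/encouragement behavior can be sketched as below. The praise and encouragement strings here are placeholders, not the package's actual message pool.

```python
import random

# Placeholder message pools -- the package's real strings are not listed here.
PRAISE = ["Bravo!", "Well done!", "Nice work!"]
ENCOURAGEMENT = ["Try it again. You get better each time.", "Keep at it!"]

def feedback(correct: bool, message: str = "") -> str:
    """Prepend a random praise (or encouragement) message to the custom one."""
    prefix = random.choice(PRAISE if correct else ENCOURAGEMENT)
    return f"{prefix} {message}".strip()

feedback(True, "You also got the mpg dataframe!")
# e.g. "Bravo! You also got the mpg dataframe!"
```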

## Code checks

For static code checking, we follow a similar cadence to `gradethis::grade_code`. 

When solution code is supplied, `grade_code(user_code, solution_code)` can be used to check the ASTs of
the user and solution code, standardizing function calls and producing a helpful message for the student
to diagnose the issue.

Example:

```python
grade_code(
  student_code="2 + sqrt(log(2))",
  solution_code="2 + sqrt(log(1))"
)
```
Feedback:
> I expected `log(1)`, but what you wrote was interpreted as `log(2)` in `sqrt(log(2))` at line 1.

Note how the feedback narrows in on the expression where the problem occurs (`sqrt(log(2))`)
so that the student can focus on the most relevant outer expression. In this case, the 
`log(2)` is the problem, and the `2` on the left side of 
the addition is not as relevant.

The feedback points out that the `2` within the `log` function is incorrect, mirroring the 
`gradethis` [example](https://rstudio-education.github.io/gradethis/reference/grade_code.html).
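One way to sketch this style of AST comparison is with the stdlib `ast` module. This is illustrative only: `first_mismatch` is a hypothetical helper, not pygradethis's actual algorithm, and it ignores source locations and unequal-length bodies.

```python
import ast

def first_mismatch(student_code: str, solution_code: str):
    """Return the first differing leaf values (student, solution),
    or None if the two ASTs are structurally identical."""
    def compare(a, b):
        if type(a) is not type(b):
            return (a, b)
        if isinstance(a, ast.AST):
            # Recurse over the node's fields in parallel.
            for (_, va), (_, vb) in zip(ast.iter_fields(a), ast.iter_fields(b)):
                found = compare(va, vb)
                if found:
                    return found
        elif isinstance(a, list):
            # Simplification: assumes equal-length child lists.
            for va, vb in zip(a, b):
                found = compare(va, vb)
                if found:
                    return found
        elif a != b:
            return (a, b)
        return None
    return compare(ast.parse(student_code), ast.parse(solution_code))

first_mismatch("2 + sqrt(log(2))", "2 + sqrt(log(1))")  # (2, 1)
```

A real grader would also track the enclosing expression (here `sqrt(log(2))`) and line numbers to produce the narrowed-down feedback shown above.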

### Call Standardization
`pygradethis` also knows how to take a user's function call and map positional arguments 
to their proper parameter names, filling in defaults that were not supplied. This way, a student's
code isn't penalized just because they did not explicitly spell out positional argument names or
write out the default arguments.

For example, suppose a student is calling the following silly function `foo`:

```python
def foo(a, b=1): 
  pass
```

Grading the code with

```python
grade_code(
  student_code="foo(1)", 
  solution_code="foo(1)"
)
```

In the example above, `grade_code` doesn't give us a feedback message since the two expressions are equivalent.

However, if the student supplies `foo(2)`

```python
grade_code(
  student_code="foo(2)", 
  solution_code="foo(1)"
)
```

we get back this feedback:
> I expected `1`, but what you wrote was interpreted as `2` in `foo(2)` at line 1.
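This kind of normalization can be sketched with the stdlib `inspect` module, whose `Signature.bind` plus `apply_defaults` perform exactly the positional-to-name mapping and default filling described above. `standardize_call` is a hypothetical helper, not pygradethis's actual implementation.

```python
import inspect

def foo(a, b=1):
    pass

def standardize_call(func, *args, **kwargs):
    """Map positional args to parameter names and fill in unsupplied defaults."""
    bound = inspect.signature(func).bind(*args, **kwargs)
    bound.apply_defaults()
    return dict(bound.arguments)

standardize_call(foo, 2)       # {'a': 2, 'b': 1}
standardize_call(foo, 2, b=3)  # {'a': 2, 'b': 3}
```

With both `foo(2)` and `foo(1)` standardized this way, comparing the resulting mappings pinpoints `a=2` versus `a=1` while ignoring the untouched default `b`.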

**Note:** Although under the hood we do standardize the arguments of both the student and the solution code
before checking, we don't surface this standardized form in the feedback message. This is certainly possible to
achieve, but in certain cases it can hinder learning by revealing too much information. For example, builtin functions
like `sum` are normally called without spelling out the default `start` argument (e.g. `sum([1])` versus `sum([1], 0)`). In the future, a `verbose` mode could be made available that points out the formal parameters.

For call standardizing to work, the function definitions corresponding to the function 
calls must be defined and 'live' in the environment, whether that is `globals()`/`locals()`,
`builtins`, or a custom module import like `pandas`. This works if the student/solution source code 
includes the definition itself (like `foo` above) or if it is included by the instructor. 

Currently, common modules like `math` are imported for grading within `check_functions.py`; more modules 
will be included in the future to serve data science grading as well, such as `pandas` and `numpy`. 
We plan to make the code more extensible so the instructor can add them as dependencies.



            
