nbautoeval


Name: nbautoeval
Version: 1.8.0
Summary: A mini framework to implement auto-evaluated exercises in Jupyter notebooks
Upload time: 2024-12-11 15:51:33
Requires Python: >=3.10
License: CC-BY-SA-4.0
Keywords: jupyter, auto-evaluation, exercises
# `nbautoeval`

`nbautoeval` is a very lightweight Python framework for creating **auto-evaluated**
exercises inside a Jupyter (Python) notebook.

Two flavours of exercises are supported at this point:

* code-oriented: given a text that describes the expectations, students are invited to
  write their own code; they can then see its outcome on teacher-defined data samples,
  compared with the results obtained from a teacher-provided solution, with visual
  (green/red) feedback
* quizzes: a separate module allows quizzes to be created

At this point, due to a lack of knowledge/documentation about Open edX (read: the
version running at FUN), there is no code available for exporting the results as
grades or anything similar (hence the `autoeval` name).

The code does, however, have provisions for accumulating statistics on all
attempted corrections, as a way to provide feedback to teachers.

# Try it on `mybinder`

Click the badge below to see a few sample demos on `mybinder.org` - it's all
in the `demo-notebooks` subdir.

**NOTE**: the demo notebooks ship in `.py` format and require `jupytext` to be
installed before you can open them in Jupyter.

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/parmentelat/nbautoeval/master?filepath=demo-notebooks)


# History

This tool was initially embedded in a [MOOC on
python2](https://github.com/parmentelat/flotpython) that ran for the first time on [the
French FUN platform](https://www.france-universite-numerique-mooc.fr/) in Fall 2014. It
was then duplicated into a [MOOC on
bioinformatics](https://github.com/parmentelat/flotbioinfo) in Spring 2016, where it was
named `nbautoeval` for the first time, but still embedded in a larger git repository.

A separate git repo was created in June 2016 from that basis, with the
intention of using it as a git subtree from these two repos (because at
the time, adding Python libraries to customize the notebook
runtime on the remote Jupyter platform was a pain).

The tool now ships as a standalone Python library hosted on pypi.org,
so it can easily be added to any Docker image.

# Installation

```
pip install nbautoeval
```

# Overview

## code-oriented

The following types of exercises are currently supported:

* `ExerciseFunction`: the student is asked to write a function
* `ExerciseRegexp`: the student is asked to write a regular expression
* `ExerciseGenerator`: the student is asked to write a generator function
* `ExerciseClass`: tests are run on a class implementation

A teacher who wishes to implement an exercise needs to write two parts:

* one Python file that defines an instance of an exercise class; in a nutshell, this
  typically involves
  * providing one solution (say, a function) written in Python
  * providing a set of input data - as instances of the dedicated `Args` class
  * plus, optionally, various tweaks for rendering results

* one notebook that imports this exercise object and takes advantage of it to
  write Jupyter cells that typically
  * invoke `example()` on the exercise object to show examples of the expected output
  * invite the student to write their own code
  * invoke `correction()` on the exercise object to display the outcome (a sketch of both parts follows)
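
Here is a minimal sketch of the two parts, built only from the class and method names
mentioned above; the exercise name `exo_square`, the sample inputs, and the exact
constructor arguments are illustrative assumptions rather than the definitive API - see
the demo notebooks for the real thing.

```python
# exo_square.py - teacher side (hypothetical file and names)
from nbautoeval import Args, ExerciseFunction

# the teacher-provided solution
def square(x):
    return x * x

# teacher-defined data samples, one Args instance per call
inputs = [Args(0), Args(2), Args(-3)]

# assuming ExerciseFunction takes the solution and the list of inputs
exo_square = ExerciseFunction(square, inputs)
```

```python
# in the notebook - student side
from exo_square import exo_square

exo_square.example()            # show examples of the expected output

def square(x):                  # the student's own attempt
    return x ** 2

exo_square.correction(square)   # green/red feedback against the solution
```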

## quizzes

Here again, there are two parts at work:

* The recommended way is to define quizzes in YAML format:
  * one YAML file can contain several quizzes - see examples in the `yaml/` subdir
  * each quiz contains a set of questions
  * grouping questions into quizzes essentially makes sense with respect to the maximal
    number of attempts
  * almost all the pieces can be written in Markdown (currently we use `myst_parser`)

* One then invokes `run_yaml_quiz()` from a notebook to display the quiz:
  * this function takes two arguments: one to help locate the YAML file,
    and one to spot the quiz inside the YAML file
  * run with `debug=True` to pinpoint errors in the source (see the sketch below)
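
A minimal sketch of the notebook side; the file name `quiz-sample` and the quiz name
`quiz-welcome` are made up for illustration, and the exact lookup rules for the first
argument are not spelled out here.

```python
# in the notebook - display a quiz defined in a YAML file
from nbautoeval import run_yaml_quiz

# first argument helps locate the YAML file, second names the quiz inside it
run_yaml_quiz("quiz-sample", "quiz-welcome")

# while authoring the quiz, debug=True helps pinpoint errors in the YAML source
run_yaml_quiz("quiz-sample", "quiz-welcome", debug=True)
```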
  
## results and storage

Regardless of their type, all tests have an `exoname` that is used to store information
about that specific test; for quizzes it is recommended to use a name different from
the quiz name passed to `run_yaml_quiz()`, so that students can't guess it too easily.

Data is stored in two separate locations:

* `~/.nbautoeval.trace` contains one JSON line per attempt (correction or submit)
* `~/.nbautoeval.storage`, for quizzes only, preserves previous choices and the number of attempts
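
Since the trace file holds one JSON record per line, it can be inspected with a few
lines of standard Python; the record fields are not documented here, so this sketch
(an assumption, not part of the library) only parses and counts the records.

```python
# read ~/.nbautoeval.trace, which holds one JSON object per attempt
import json
from pathlib import Path

trace_file = Path.home() / ".nbautoeval.trace"

records = []
with trace_file.open() as feed:
    for line in feed:
        line = line.strip()
        if line:                      # skip blank lines, if any
            records.append(json.loads(line))

print(f"{len(records)} attempts recorded")
```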

# Known issues

See https://github.com/parmentelat/nbautoeval/issues

            
