quizzable

Name: quizzable
Version: 1.0.2
Home page: https://github.com/balusulapalemsaikoushik/quizzable/
Summary: Education with Python.
Upload time: 2024-08-27 00:13:21
Maintainer: None
Docs URL: None
Author: Sai Koushik Balusulapalem
Requires Python: None
License: MIT License
Keywords: education, app, quiz, learn, backend
            # quizzable
[![PyPI - Version](https://img.shields.io/pypi/v/quizzable)](https://pypi.org/project/quizzable/)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/quizzable)](https://pypi.org/project/quizzable/)
[![Test](https://github.com/balusulapalemsaikoushik/quizzable/actions/workflows/test.yml/badge.svg)](https://github.com/balusulapalemsaikoushik/quizzable/actions/workflows/test.yml)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)

`quizzable` provides an easy-to-implement interface for building educational quiz apps on top of Python. The library lets you create quizzes consisting of multiple-choice (MCQ), free-response (FRQ), True-or-false, or matching questions, so you can build educational apps that leverage the power of Python with ease. The full documentation is below.

## Table of Contents
* [Quickstart](#quickstart)
* [Classes](#classes)
    * [`Terms`](#terms)
        * [`Terms.get_terms()`](#termsget_terms)
        * [`Terms.get_frq_question()`](#termsget_frq_question)
        * [`Terms.get_mcq_question()`](#termsget_mcq_question)
        * [`Terms.get_true_false_question()`](#termsget_true_false_question)
        * [`Terms.get_match_question()`](#termsget_match_question)
        * [`Terms.get_random_question()`](#termsget_random_question)
        * [`Terms.get_quiz()`](#termsget_quiz)
    * [`Quiz`](#quiz)
        * [`Quiz.questions`](#quizquestions)
        * [`Quiz.from_data()`](#quizfrom_data)
        * [`Quiz.to_data()`](#quizto_data)
    * [`Question`](#question)
        * [`Question.from_dict()`](#questionfrom_dict)
        * [`Question.check_answer()`](#questioncheck_answer)
        * [`Question.to_dict()`](#questionto_dict)
    * [`MCQQuestion`](#mcqquestion)
        * [`MCQQuestion.options`](#mcqquestionoptions)
        * [`MCQQuestion.to_dict()`](#mcqquestionto_dict)
    * [`FRQQuestion`](#frqquestion)
        * [`FRQQuestion.to_dict()`](#frqquestionto_dict)
    * [`TrueFalseQuestion`](#truefalsequestion)
        * [`TrueFalseQuestion.definition`](#truefalsequestiondefinition)
        * [`TrueFalseQuestion.to_dict()`](#truefalsequestionto_dict)
    * [`MatchQuestion`](#matchquestion)
        * [`MatchQuestion.definitions`](#matchquestiondefinitions)
        * [`MatchQuestion.to_dict()`](#matchquestionto_dict)
* [Exceptions](#exceptions)
    * [`BaseQuizzableException`](#basequizzableexception)
    * [`InvalidLengthError`](#invalidlengtherror)
    * [`InvalidOptionsError`](#invalidoptionserror)
    * [`InvalidTermsError`](#invalidtermserror)
    * [`InvalidQuestionError`](#invalidquestionerror)
    * [`DataIncompleteError`](#dataincompleteerror)
* [Authors](#authors)

## Quickstart

To get started, install the `quizzable` package through `pip` on a supported version of Python (`quizzable` currently supports Python 3.9+):
```console
$ python -m pip install quizzable
```
Next, import the `Terms` class from the `quizzable` module:
```py
from quizzable import Terms
```
Then, create a list of terms:
```py
data = {
    "painter": "la pintura",
    "brush": "el pincel",
    "sculpture": "la escultura",
    "palette:": "la paleta",
    "self-portrait": "el autorretrato",
    "abstract": "abstracto/a",
    # more terms...
}
terms = Terms(data)
```
or create one from JSON data:
```py
import json

with open("vocabulary.json") as terms_file:
    terms = Terms(json.loads(terms_file.read()))
```
Afterwards, you can generate a random question using the `get_random_question` method:
```py
question = terms.get_random_question()
```
generate an entire quiz using the `get_quiz` method:
```py
quiz = terms.get_quiz(types=["mcq", "match", "tf"])
```
or create specific types of questions manually like so:
```py
frq = terms.get_frq_question()  # free-response
mcq = terms.get_mcq_question()  # multiple-choice
tf = terms.get_true_false_question()  # true-or-false
matching = terms.get_match_question()  # matching
```
A question has different properties, depending on its type:
```py
print(f"What is the translation of {mcq.term}?")
for option in mcq.options:
    print(option)
print()
answer = input("Answer: ")
```
To score a question, simply use its `check_answer` method:
```py
correct, actual = mcq.check_answer(answer)
if correct:
    print("Correct!")
else:
    print(f"Incorrect...the answer was {actual}")
```
If you'd like, you can convert a question or quiz back to its raw data at any time:
```py
print(question.to_dict())
print(quiz.to_data())
```
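Putting it together, a complete command-line quiz loop might look like the following sketch. It sticks to MCQ and FRQ questions, since both accept a typed answer directly, and the vocabulary used here is just an example:
```py
from quizzable import Terms

# Example vocabulary; any terms-to-definitions mapping works.
terms = Terms({
    "beach": "la playa",
    "park": "el parque",
    "church": "la iglesia",
})

# Only MCQ and FRQ questions, since both accept a typed answer.
quiz = terms.get_quiz(types=["mcq", "frq"], length=3)

score = 0
for question in quiz.questions:
    print(f"\n{question.term}")
    data = question.to_dict()
    if data["_type"] == "mcq":
        for option in data["options"]:
            print(f"- {option}")
    correct, actual = question.check_answer(input("Answer: "))
    if correct:
        score += 1
        print("Correct!")
    else:
        print(f"Incorrect...the answer was {actual}")

print(f"\nScore: {score}/{len(quiz.questions)}")
```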

## Classes

### `Terms`
A list of terms.

Should be a dictionary mapping _terms_ to _definitions_, where a _term_ represents a question or vocabulary term, and a _definition_ refers to the answer or vocabulary definition. For example, here is a list of terms in which each term is an English word, and its definition is its Spanish translation:

```py
{
    "painter": "la pintura",
    "brush": "el pincel",
    "sculpture": "la escultura",
    "palette:": "la paleta",
    "self-portrait": "el autorretrato",
    "abstract": "abstracto/a"
}
```

#### `Terms.get_terms()`
Parameters:
* `answer_with = "def"`: can be `"term"`, `"def"`, or `"both"`; how the question should be answered (see [Functions](#functions))

Returns the dictionary `terms` modified based on the value for `answer_with`. May be useful for making flashcards for which terms and definitions may need to be swapped on-demand.
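
For example, a simple flashcard view could re-key the terms on demand. This is a minimal sketch assuming `get_terms` returns a plain dictionary as described above:
```py
# Answer by giving the term instead of the definition.
flashcards = terms.get_terms(answer_with="term")
for prompt, expected in flashcards.items():
    print(f"{prompt} -> {expected}")
```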

#### `Terms.get_frq_question()`
Returns an [`FRQQuestion`](#frqquestion) object with a random FRQ-format question generated from `terms`.

#### `Terms.get_mcq_question()`
Parameters:
* `n_options = 4`: number of options per question.

Returns an [`MCQQuestion`](#mcqquestion) object with a random MCQ-format question generated from `terms`.

#### `Terms.get_true_false_question()`
Returns a [`TrueFalseQuestion`](#truefalsequestion) object with a random True-or-false format question generated from `terms`.

#### `Terms.get_match_question()`
Parameters:
* `n_terms = 5`: how many terms have to be matched

Returns a [`MatchQuestion`](#matchquestion) object with a random matching-format question generated from `terms`.

#### `Terms.get_random_question()`
Parameters:
* `types = ["mcq", "frq", "tf"]`: list that can contain `"mcq"`, `"frq"`, `"tf"`, or `"match"`; types of questions that appear on the quiz
* `n_options = 4`: (if MCQs are involved) number of options per MCQ question
* `n_terms = 5`: (if matching questions are involved) number of terms to match per matching question

Returns a `Question` object of a random-format question generated from `terms`.

#### `Terms.get_quiz()`
Returns a [`Quiz`](#quiz) object with random questions based on the below parameters.

Parameters:
* `terms`: map of terms and definitions for quiz (see [`Terms`](#terms))
* `types = ["mcq", "frq", "tf"]`: list that can contain `"mcq"`, `"frq"`, `"tf"`, or `"match"`; types of questions that appear on the quiz
* `length = 10`: number of questions on quiz
* `answer_with = "def"`: can be `"term"`, `"def"`, or `"both"`; how the question should be answered (see below)
* `n_options = 4`: (if MCQs are involved) number of options per MCQ question
* `n_terms = 5`: (if matching questions are involved) number of terms to match per matching question

`answer_with` describes how the user should answer the question, where `"term"` means a question should be answered by giving the term, `"def"` implies that the question should be answered by providing the definition, and `"both"` means that there is a 50/50 chance of the question needing a term or a definition as input.
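
For instance, a call that mixes all four question types might look like the following sketch, using the parameters documented above:
```py
quiz = terms.get_quiz(
    types=["mcq", "frq", "tf", "match"],
    length=15,
    answer_with="both",  # prompt with either a term or a definition
    n_options=4,         # options per MCQ question
    n_terms=5,           # terms per matching question
)
```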

### `Quiz`
Arbitrary quiz object.
#### `Quiz.questions`
List of questions within the quiz, represented as a list of `Question` objects.

#### `Quiz.from_data()`
Reconstructs a `Quiz` object from a listlike representation. See [`Quiz.to_data()`](#quizto_data) for more information on formatting.

#### `Quiz.to_data()`
Returns a listlike representation of the quiz, with each `Question` object being represented as its dictionary representation. For example, it could look like this:
```py
[
    {
        "_type": "tf",
        "term": "la iglesia",
        "definition": "shop",
        "answer": "church"
    },
    {
        "_type": "mcq",
        "term": "la playa",
        "options": {
            "beach": True,
            "park": False,
            "downtown": False,
            "museum": False,
        }
    },
    {
        "_type": "frq",
        "term": "park",
        "answer": "el parque"
    }
]
```

Please see documentation for [`MCQQuestion`](#mcqquestion), [`FRQQuestion`](#frqquestion), [`TrueFalseQuestion`](#truefalsequestion), and [`MatchQuestion`](#matchquestion) for more information on the format of the above questions.
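
Because `to_data()` returns plain lists and dictionaries, a quiz can be serialized and rebuilt later. A minimal round trip might look like the following sketch, assuming `Quiz` is importable from the top-level package and `Quiz.from_data()` can be called as a class method:
```py
import json

from quizzable import Quiz

# Save the quiz, e.g. to resume a session later.
with open("quiz.json", "w") as quiz_file:
    json.dump(quiz.to_data(), quiz_file)

# Reconstruct the quiz from the saved data.
with open("quiz.json") as quiz_file:
    restored = Quiz.from_data(json.load(quiz_file))
```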

### `Question`
Generic question object used for reconstruction of a question from JSON data.

Parameters:
* `_type`: question type
* `term`: question term (prompt)
* `answer`: question answer
* `**kwargs`: other question data (e.g. `options`, `definition`, etc.)

#### `Question.term`
Question that the user is prompted with.

#### `Question.answer`
Correct answer to the prompt `term`.

#### `Question.from_dict()`
Returns a reconstructed `Question` object made from `data`. Please see [`MCQQuestion.to_dict()`](#mcqquestionto_dict), [`FRQQuestion.to_dict()`](#frqquestionto_dict), [`TrueFalseQuestion.to_dict()`](#truefalsequestionto_dict), and [`MatchQuestion.to_dict()`](#matchquestionto_dict) for more information on formatting.

Parameters:
* `data`: dictionary containing question data.
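
As a sketch (assuming `Question` is importable from the top-level package and `from_dict` is callable as a class method), a question could be rebuilt from previously stored data:
```py
from quizzable import Question

data = {"_type": "frq", "term": "park", "answer": "el parque"}
question = Question.from_dict(data)
print(question.term)    # park
print(question.answer)  # el parque
```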

#### `Question.check_answer()`
Returns a tuple: the first item is a boolean whose value is `True` if `answer` matches the question's `answer` attribute or `False` otherwise, and the second item is the value for the question's `answer` attribute.

Parameters:
* `answer`: answer provided by the user


#### `Question.to_dict()`
Returns a dictionary representation of the question. Each question has a `_type` key that can be used to determine how to render a question on the frontend (i.e. display multiple options for MCQ, textbox for FRQ, etc.), and a `term` key which represents the term the user is prompted with. Please see [`MCQQuestion.to_dict()`](#mcqquestionto_dict), [`FRQQuestion.to_dict()`](#frqquestionto_dict), [`TrueFalseQuestion.to_dict()`](#truefalsequestionto_dict), and [`MatchQuestion.to_dict()`](#matchquestionto_dict) for more information on formatting.
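
For example, a frontend could branch on `_type` to decide how to render each question. The helpers below are hypothetical placeholders; only the dictionary keys come from the formats documented in the following sections:
```py
def render(question):
    data = question.to_dict()
    if data["_type"] == "mcq":
        show_options(data["term"], data["options"])        # hypothetical helper
    elif data["_type"] == "frq":
        show_textbox(data["term"])                         # hypothetical helper
    elif data["_type"] == "tf":
        show_true_false(data["term"], data["definition"])  # hypothetical helper
    elif data["_type"] == "match":
        show_matching(data["term"], data["definitions"])   # hypothetical helper
```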

### `MCQQuestion`
Representation of an MCQ-format question. Has the same attributes as [`Question`](#question) objects, with some additional properties.

Parameters:
* `term`: question term (prompt)
* `options`: question options
* `answer`: question answer

#### `MCQQuestion.options`
List of potential answer choices.

#### `MCQQuestion.to_dict()`
The dictionary representation returned by the `to_dict` method of an `MCQQuestion` object looks like this:
```py
{
    "_type": "mcq",
    "term": "term",
    "options": {
        "option1": False,
        "option2": False,
        "option3": True,
        "option4": False,
    },
    "answer": "answer"
}
```

Here's a brief overview:
* `term` is what the user will be prompted with, whether that be to choose a term's definition or vice versa.
* `options` is the list of potential answer choices.
* `answer` is the correct choice out of `options`.

### `FRQQuestion`
Representation of an FRQ-format question. Has the same attributes as [`Question`](#question) objects, with some additional properties.

Parameters:
* `term`: question term (prompt)
* `answer`: question answer

#### `FRQQuestion.to_dict()`
The dictionary representation returned by the `to_dict` method of an `FRQQuestion` object looks like this:
```py
{
    "_type": "frq",
    "term": "term",
    "answer": "answer"
}
```

Here's a brief overview:
* `term` is what the user will be prompted with, whether that be to provide a term's definition or vice versa.
* `answer` is the response that will be accepted as correct given the prompt.

### `TrueFalseQuestion`
Representation of a True-or-false format question. Has the same attributes as [`Question`](#question) objects, with some additional properties.

Parameters:
* `term`: question term (prompt)
* `definition`: question definition (what the user has to determine is True or False)
* `answer`: question answer

#### `TrueFalseQuestion.definition`
The definition that the user must judge to be True or False.

#### `TrueFalseQuestion.to_dict()`
The dictionary representation returned by the `to_dict` method of a `TrueFalseQuestion` object looks like this:
```py
{
    "_type": "tf",
    "term": "term",
    "definition": "definition",
    "answer": "answer"
}
```

Here's a brief overview:
* `term` is what the user will be prompted with: they must select True or False depending on whether the given definition matches this term (or vice versa).
* `definition` is the definition the user must judge to be True or False.
* `answer` is the actual definition that matches the given `term`.
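
A console prompt for a True-or-false question could therefore look like the following sketch; it compares `definition` to `answer` directly, since the input format `check_answer` expects for this question type is not spelled out here:
```py
tf = terms.get_true_false_question()

print(f"True or False: '{tf.term}' means '{tf.definition}'")
response = input("(T/F): ").strip().lower().startswith("t")

is_true = tf.definition == tf.answer  # the shown definition matches the real one
if response == is_true:
    print("Correct!")
else:
    print(f"Incorrect...'{tf.term}' actually means '{tf.answer}'")
```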

### `MatchQuestion`
Representation of a matching-format question. Has the same attributes as [`Question`](#question) objects, with some additional properties.

Parameters:
* `term`: question term (prompt)
* `definitions`: question definitions (what the user has to match with the terms)
* `answer`: question answer

#### `MatchQuestion.definitions`
What the user has to match with the corresponding terms.

#### `MatchQuestion.to_dict()`
The dictionary representation returned by the `to_dict` method of a `MatchQuestion` object looks like this:
```py
{
    "_type": "match",
    "term": [
        "term1",
        "term2",
        "term3",
        "term4"
    ],
    "definitions": [
        "definition4",
        "definition2",
        "definition1",
        "definition3",
    ],
    "answer": {
        "term1": "definition1",
        "term2": "definition2",
        "term3": "definition3",
        "term4": "definition4"
    }
}
```

Here's a brief overview:
* `term` is the list of terms the user will be prompted to match with their definitions (or vice versa).
* `definitions` is what the user has to match with the corresponding terms.
* `answer` maps each term in `term` to its actual definition from `definitions`.
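
A simple way to collect and check a user's matches is sketched below; it compares the user's mapping against `answer` directly, since the input format `check_answer` expects for matching questions is not spelled out here:
```py
matching = terms.get_match_question()

user_matches = {}
print("Definitions:", ", ".join(matching.definitions))
for term in matching.term:  # the list of terms to be matched
    user_matches[term] = input(f"Definition for '{term}': ")

if user_matches == matching.answer:
    print("All matched!")
else:
    print(f"The correct matches were: {matching.answer}")
```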

## Exceptions

### `BaseQuizzableException`
The base exception for all `quizzable` errors.

### `InvalidLengthError`
The length specified is not valid (e.g. too short or too long).

Parameters:
* `length`: invalid length of the quiz

### `InvalidOptionsError`
The number of options (for MCQs) specified is not valid (e.g. too small or too large).

Parameters:
* `n_options`: invalid number of options per MCQ question

### `InvalidTermsError`
The number of terms (for matching questions) specified is not valid (e.g. too small or too large).

Parameters:
* `n_terms`: invalid number of terms per matching question

### `InvalidQuestionError`
The type of question specified is not valid (should only be `"mcq"`, `"frq"`, `"tf"`, or `"match"`).

Parameters:
* `question`: invalid type of question

### `DataIncompleteError`
The data passed into the constructor for `Question` is incomplete. See [`MCQQuestion.to_dict()`](#mcqquestionto_dict), [`FRQQuestion.to_dict()`](#frqquestionto_dict), [`TrueFalseQuestion.to_dict()`](#truefalsequestionto_dict), and [`MatchQuestion.to_dict()`](#matchquestionto_dict) for how the data for different types of questions should be formatted.

Parameters:
* `data`: incomplete data
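
All of these derive from [`BaseQuizzableException`](#basequizzableexception), so a caller can handle them with a single `except` clause. A minimal sketch, assuming the exceptions are importable from the top-level `quizzable` package:
```py
from quizzable import BaseQuizzableException

try:
    quiz = terms.get_quiz(types=["mcq", "essay"])  # "essay" is not a valid type
except BaseQuizzableException as exc:
    print(f"Could not build quiz: {exc}")
```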

## Authors
### Sai Koushik Balusulapalem
[GitHub](https://github.com/balusulapalemsaikoushik)

            
