| Field | Value |
|-----------------|------------------------------------------------------|
| Name            | pyAutoMark |
| Version         | 0.8.2 |
| Summary         | Automated marking of student electronic submissions |
| Author email    | John Williams <j.a.r.williams@jarw.org.uk> |
| Homepage        | <https://willijar.github.io/pyAutoMark/> |
| Repository      | <https://github.com/willijar/pyAutoMark> |
| Upload time     | 2023-12-13 13:24:15 |
| Requires Python | >=3.10 |
| License         | GPL-3.0-openpyxl |
| Keywords        | pytest, assessment |
| Requirements    | No requirements were recorded. |
# Automated marking and feedback
## Applicability
This toolset is applicable to all assessments (coursework, practical work, exams and class tests) where students
are required to submit their work electronically and where the submission can be analysed electronically, e.g.:

* Software which can be tested using unit testing (in any language); a sketch of such a test is given after this list.
* Embedded systems which can be tested using mock libraries and headers instead of the hardware.
* Digital designs using a hardware description language (e.g. VHDL) which can be tested in simulation or synthesis.
* Data files which can be read and analysed - these might be created from simulations, experimental measurements
  or data recorded from test instruments.
* Spreadsheet files which might be used to capture student readings and answers to questions.
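
Since the toolset is built on pytest (see the keywords above), an individual check is just a pytest test run against a student's files. The sketch below is purely illustrative - the folder layout and all names in it (`student_work/`, `main.py`, `add`) are hypothetical assumptions, not part of pyAutoMark's API.

    # test_submission.py -- a hypothetical per-student pytest check.
    # All names here (student_work/, main.py, add) are illustrative assumptions.
    import importlib.util
    from pathlib import Path

    SUBMISSION = Path("student_work")          # assumed per-student folder

    def test_expected_file_present():
        # The student must have submitted main.py.
        assert (SUBMISSION / "main.py").exists(), "main.py is missing"

    def test_add_function():
        # Load the student's module and exercise one function from it.
        spec = importlib.util.spec_from_file_location("main", SUBMISSION / "main.py")
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        assert module.add(2, 3) == 5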
The toolset is written in Python and can run on almost any platform: Windows, Linux or Mac.
Users will need to install any other software needed for the actual tests, such as C compilers
for host and embedded C applications, VHDL simulators and synthesis tools, Python libraries etc.
## Process and Coverage
The toolset automates the following steps:

1. Collation of student work into folders for a cohort/assessment.
   This may be from a Blackboard dump of files and archives from the Grade Centre or (preferred)
   the cloning and pulling of student work from GitHub Classroom repositories.
2. The collation and use of student information from a provided CSV file (e.g. downloaded from Blackboard).
3. Checking of student submissions against a provided manifest of expected files (steps 2 and 3 are sketched after this list).
4. The collation of a set of tests to be run against the students' work - these can be almost anything (see applicability above).
5. The automated running of the tests across a cohort or set of students and the collection of the results into a feedback report.
6. The generation of a marking template based on the set of tests - this may then be added
   to as necessary to integrate different marking schemes and non-automated marking results.
7. The generation of completed marking sheets for the students (using the report data and a template marking sheet).
8. The sending out by email of the marking sheets and reports to students (work in progress).
9. The automated filling in of a CSV file from the marking sheet for sending marks to the office.
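
To make steps 2 and 3 concrete, here is a minimal sketch of reading a cohort CSV and checking each student's folder against a manifest. The file names, CSV column and folder layout are assumptions for illustration; pyAutoMark's actual configuration and file formats differ.

    # manifest_check.py -- illustrative only; names and layout are assumed.
    import csv
    from pathlib import Path

    MANIFEST = ["report.pdf", "main.py", "results.xlsx"]   # expected files (assumed)

    def load_cohort(csv_path: str) -> list[dict]:
        # Step 2: read student information from a CSV export (e.g. from Blackboard).
        with open(csv_path, newline="") as f:
            return list(csv.DictReader(f))

    def check_submission(folder: Path) -> list[str]:
        # Step 3: list the manifest files missing from a student's folder.
        return [name for name in MANIFEST if not (folder / name).exists()]

    for student in load_cohort("cohort.csv"):
        missing = check_submission(Path("submissions") / student["Username"])
        if missing:
            print(f"{student['Username']}: missing {', '.join(missing)}")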
## The tools
    usage: pyAutoMark [-h] {run,retrieve,extract,mark,generate-template,check-submission,find-duplicates,config} ...

    Automatically retrieve, mark and provide feedback for digital student submissions

    optional arguments:
      -h, --help            show this help message and exit

    subcommands:
      {run,retrieve,extract,mark,generate-template,check-submission,find-duplicates,config}
        run                 Run automated tests and generate reports
        retrieve            Retrieve student files from github
        extract             Extract student files from downloads
        mark                Generate mark spreadsheets from reports and template spreadsheet
        generate-template   Generate a template spreadsheet
        check-submission    Check students have submitted files listed in manifest
        find-duplicates     Find duplicate students' files
        config              Set or read configuration
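
Putting the subcommands together, a typical marking pass might look like the sequence below. Only the subcommand names are taken from the help text above; options that a real run would need are omitted rather than guessed.

    pyAutoMark retrieve           # pull student work from GitHub Classroom
    pyAutoMark check-submission   # check submissions against the manifest
    pyAutoMark run                # run the automated tests and build reports
    pyAutoMark generate-template  # create a marking template from the tests
    pyAutoMark mark               # produce completed marking sheets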
## Requirements
### Software
| Software | Min Version | Link |
|--------------------------|-------------|------------------------------------------------------------|
| Visual Studio Code | 1.74.2 | <https://code.visualstudio.com/> |
| Python | 3.10.9 | <https://www.python.org/downloads/windows/> |
| pytest | 7.2.1 | pip install pytest |
| git | 2.39.0.windows.2 | <https://gitforwindows.org/> |
| clang-tidy (Optional) | 10.0 | <https://learn.microsoft.com/en-us/cpp/code-quality/clang-tidy?view=msvc-170> |
### Common Components and Libraries
| Python Libraries | Version |
|--------------------------|-------------|
| openpyxl | 3.1.2 |
| pylint (Optional) | 2.4.4 |
| pytest-timeout (Optional)| |
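
openpyxl is the one required library listed above (the license field also references it), and steps 6 and 7 of the process are spreadsheet generation. As a minimal sketch of that kind of work, the snippet below writes marks into a copy of a template workbook; the file names and cell positions are assumptions for illustration, not pyAutoMark's actual template layout.

    # fill_marksheet.py -- illustrative openpyxl use; template layout, file
    # names and cell positions are assumed, not pyAutoMark's own.
    from openpyxl import load_workbook

    marks = {"Part 1": 8, "Part 2": 14}          # results from test reports

    wb = load_workbook("template.xlsx")          # assumed template marking sheet
    ws = wb.active
    ws["B1"] = "Student: ab123"                  # assumed header cell
    for row, (part, score) in enumerate(marks.items(), start=3):
        ws.cell(row=row, column=1, value=part)   # criterion name
        ws.cell(row=row, column=2, value=score)  # awarded mark
    wb.save("ab123_marksheet.xlsx")              # one completed sheet per student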
| VSCode Extensions | Author |
|--------------------------|-------------|
| C/C++ | Microsoft |
| C/C++ Extension Pack | Microsoft |
| Python Test Explorer | Little Fox Team |
| GitHub Classroom          | GitHub    |
## Documentation
See <https://willijar.github.io/pyAutoMark/>