# Socrates Assess
This is a tool that assists assessment creators in creating test-based
assessments that provide high-quality formative feedback.
![Generate automated feedback](doc/automated.png)
However, if an unexpected test failure happens, no pre-coded feedback may be
available. In that case, SocAssess can seek AI feedback or notify a human
expert.
![No automated feedback available](doc/diagram.png)
## Prepare a Python environment
Since `socassess` is a Python package, we use `poetry` here for package
management and for handling the virtual environment.
```
poetry new hw1
```
Install SocAssess and then activate the virtual environment.
```
cd hw1
poetry add socassess
poetry shell
```
You should be able to access the `socassess` command now.
```
socassess -h
```
```
usage: socassess [-h] {create,feedback} ...
positional arguments:
{create,feedback}
create Create SocAssess starter code in a new folder
feedback Generate feedback
options:
-h, --help show this help message and exit
```
## Create the starter code
The original `hw1/hw1` package directory can be deleted. We will use SocAssess
to re-create it with starter code inside.
```
rm -r hw1
socassess create --name hw1
```
```
Assessment folder is created! Let's give it a try.
    cd hw1
To see the automated feedback:
    socassess feedback --config=socassess.toml --artifacts=artifacts --ansdir=stu --probing=probing_tests --feedback=maps
To inspect pytest outcomes:
    socassess feedback --artifacts=artifacts --ansdir=stu --probing=probing_tests
Or you can just use `pytest`
    pytest -vv --artifacts=artifacts --ansdir=stu probing_tests
For more info
    socassess --help
Take a look at the code in ./hw1 and have fun!
```
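
Judging from the flags in the commands above, the generated folder contains
roughly the following (exact contents may vary by version):

```
hw1/
├── socassess.toml    # configuration (--config)
├── stu/              # a sample student answer (--ansdir)
├── probing_tests/    # pytest-based probing tests (--probing)
├── maps/             # outcome-to-feedback mappings (--feedback)
└── artifacts/        # pytest report output (--artifacts)
```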
Run just the probing tests:
```
socassess feedback --artifacts=artifacts --ansdir=stu --probing=probing_tests
```
```
...
collected 2 items
probing_tests/test_it.py::test_exist PASSED [ 50%]
probing_tests/test_it.py::test_content PASSED [100%]
- generated xml file: .../hw1/hw1/artifacts/report.xml -
```
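
Probing tests are ordinary pytest tests. As a rough sketch of what the starter
`probing_tests/test_it.py` might contain (hypothetical; the generated code may
differ, e.g. the answer location is likely resolved from `--ansdir` rather
than hard-coded):

```python
# Hypothetical sketch of probing_tests/test_it.py -- the generated starter
# code may differ. Probing tests are plain pytest tests; their pass/fail
# outcomes land in artifacts/report.xml and are later mapped to feedback.
from pathlib import Path

ANSWER = Path("stu") / "answer.txt"  # hypothetical answer location


def test_exist():
    """Fail early if the student's answer file is missing."""
    assert ANSWER.exists()


def test_content():
    """The answer file must contain some non-whitespace content."""
    assert ANSWER.read_text().strip()
```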
Those pass and fail outcomes will then be mapped to feedback messages:
```
socassess feedback --config=socassess.toml --artifacts=artifacts --ansdir=stu --probing=probing_tests --feedback=maps
```
```
# Feedback
## general
Nice! Your answer looks good!
```
Take a look at the code inside the folder and have fun!
## A more complicated example
You can find it in `examples/a1`.
```
cd examples/a1
poetry install
poetry shell
cd a1 # examples/a1/a1
```
Probing test outcomes
```
socassess feedback --artifacts=artifacts --ansdir=stu --probing=probing_tests
```
```
...
probing_tests/test_it.py::test_exist PASSED [ 11%]
probing_tests/test_it.py::test_single PASSED [ 22%]
probing_tests/test_it.py::test_combined_1 PASSED [ 33%]
probing_tests/test_it.py::test_combined_2 PASSED [ 44%]
probing_tests/test_it.py::test_level_lowest PASSED [ 55%]
probing_tests/test_it.py::test_level_medium_1 PASSED [ 66%]
probing_tests/test_it.py::test_level_medium_2 PASSED [ 77%]
probing_tests/test_it.py::test_ai FAILED [ 88%]
probing_tests/test_it.py::test_email FAILED [100%]
========================================================== FAILURES ===========================================================
auto-feedback/socassess/examples/a1/a1/probing_tests/test_it.py:46: AssertionError: failed due to unknown reason
auto-feedback/socassess/examples/a1/a1/probing_tests/test_it.py:58: AssertionError: failed due to unknown reason
----------------------- generated xml file: auto-feedback/socassess/examples/a1/a1/artifacts/report.xml -----------------------
=================================================== short test summary info ===================================================
FAILED probing_tests/test_it.py::test_ai - AssertionError: failed due to unknown reason
FAILED probing_tests/test_it.py::test_email - AssertionError: failed due to unknown reason
```
Generated feedback
```
socassess feedback --config=socassess.toml --artifacts=artifacts --ansdir=stu --probing=probing_tests --feedback=maps
```
```
# Feedback
## single
Congrats! test_single passed
## combined
Congrats! test_combined_1 and test_combined_2 passed
## level
Congrats! test_level_medium_1 passed. This feedback should be shown.
Congrats! test_level_medium_2 passed. This feedback should be shown.
## non_auto
non_auto: automated feedback is not available
```
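
Each section above is produced by the `maps` package, which ties combinations
of probing-test outcomes to feedback messages. The actual format is defined by
the starter code in `maps`; purely to illustrate the idea (this is not the
real SocAssess API), such a table could look like:

```python
# Conceptual illustration only -- NOT the actual SocAssess `maps` format.
# The idea: a message fires when the named probing tests reach the stated
# outcomes; a fallback entry covers unexpected failures.
single = {
    ("test_single", "passed"):
        "Congrats! test_single passed",
}
combined = {
    # both tests must pass for the combined message to appear
    (("test_combined_1", "passed"), ("test_combined_2", "passed")):
        "Congrats! test_combined_1 and test_combined_2 passed",
}
non_auto = {
    ("test_ai", "failed"):
        "non_auto: automated feedback is not available",
}
```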
### Use AI and/or Email
If you set `ai` and/or `email` to `true` in `socassess.toml` and fill in the
corresponding fields, SocAssess will seek AI feedback and/or send an email to
ask for human help.
```toml
[feature]
ai = true
email = true
```
```toml
[email]
[email.account]
account = 'account' # the sender account of the mail server
password = "pswd" # the password to login to the mail server
from = 'from@address.com' # the email address to use under the account
to = "to@address.com" # to which address the email is sent, i.e., the expert email
smtp_server = "smtp.server.com" # the SMTP server to use
[email.content]
subject = "[SocAssess][Assignment 1] Human feedback needed"
email_body = '''
SocAssess needs human feedback.
The attached files contain relevant context of the submission.
See attachments
'''
initial_reply = '''
An instructor has been notified for questions where pre-coded feedback is not available.
''' # the initial feedback shown to let the student know that automated feedback is not available
[openai]
openai_key = "<key>, if empty, use OPENAI_KEY environment variable"
model = "gpt-3.5-turbo"
temperature = 1
max_tokens = 2048
top_p = 1
frequency_penalty = 0
presence_penalty = 0
system_prompt = """\
You are an expert in assessing students' answers. Your message will be sent \
directly to students. When the instructor provides you with a student's answer, \
you will give a short feedback message to correct student's misunderstanding, \
but without leaking any information about the canonical or correct answer directly. \
""" # per the TOML spec, a trailing "\" in a multi-line basic string trims the following whitespace and newlines
template = '''
AI generated feedback:
{feedback}''' # support one key: `feedback`; AI response will replace {feedback}
```
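
The `[email.account]` and `[email.content]` fields drive a standard SMTP
delivery. A minimal sketch of what that step amounts to, using only Python's
standard library (illustrative; this is not SocAssess's actual
implementation):

```python
# Illustrative sketch: deliver the expert-notification email with stdlib
# smtplib, driven by the [email.account] and [email.content] fields above.
# NOT SocAssess's actual code.
import smtplib
from email.message import EmailMessage


def notify_expert(account: dict, content: dict,
                  attachments: list[tuple[str, bytes]]) -> None:
    msg = EmailMessage()
    msg["Subject"] = content["subject"]
    msg["From"] = account["from"]
    msg["To"] = account["to"]
    msg.set_content(content["email_body"])
    for filename, data in attachments:
        msg.add_attachment(data, maintype="application",
                           subtype="octet-stream", filename=filename)
    # port 587 is the common STARTTLS port; adjust for your server
    with smtplib.SMTP(account["smtp_server"], 587) as server:
        server.starttls()
        server.login(account["account"], account["password"])
        server.send_message(msg)
```

With both features enabled, the generated feedback then contains sections like
the following: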
```
## non_auto
AI generated feedback:
Good effort on adding content to your answer file! Make sure to review the
question prompt and ensure that your response directly addresses all aspects of
the question for a comprehensive answer. Keep up the good work!
## _email
An instructor has been notified for questions where pre-coded feedback is not available.
```
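
The AI section above comes from a chat-completion request configured by the
`[openai]` table. A sketch of such a request with the official `openai` Python
client (illustrative; not SocAssess's actual code):

```python
# Illustrative sketch of the chat-completion call the [openai] table
# configures. NOT SocAssess's actual code.
import os

from openai import OpenAI


def ai_feedback(cfg: dict, student_answer: str) -> str:
    client = OpenAI(api_key=cfg.get("openai_key") or os.environ["OPENAI_KEY"])
    resp = client.chat.completions.create(
        model=cfg["model"],
        temperature=cfg["temperature"],
        max_tokens=cfg["max_tokens"],
        top_p=cfg["top_p"],
        frequency_penalty=cfg["frequency_penalty"],
        presence_penalty=cfg["presence_penalty"],
        messages=[
            {"role": "system", "content": cfg["system_prompt"]},
            {"role": "user", "content": student_answer},
        ],
    )
    # substitute the AI response into the `template` string
    return cfg["template"].format(feedback=resp.choices[0].message.content)
```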
### Unit tests for the assessment code in `examples/a1`
```
cd examples/a1
pytest -v tests
```
```
tests/test_auto.py::test_match[1.txt] PASSED [100%]
```
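
Such a test regression-checks the assessment code itself: it runs the full
feedback pipeline on a stored submission and compares the output against a
recorded expectation. A hypothetical shape for `tests/test_auto.py` (the real
test may differ):

```python
# Hypothetical sketch of tests/test_auto.py -- the real test may differ.
# It runs the feedback pipeline on the stored submission and compares the
# result to a recorded expected-feedback file per case (e.g. 1.txt).
import subprocess
from pathlib import Path

import pytest

EXPECTED = Path(__file__).parent / "expected"  # hypothetical folder


@pytest.mark.parametrize("case", sorted(p.name for p in EXPECTED.glob("*.txt")))
def test_match(case):
    result = subprocess.run(
        ["socassess", "feedback", "--config=socassess.toml",
         "--artifacts=artifacts", "--ansdir=stu",
         "--probing=probing_tests", "--feedback=maps"],
        cwd="a1", capture_output=True, text=True, check=True,
    )
    assert result.stdout == (EXPECTED / case).read_text()
```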