| Field | Value |
|---|---|
| Name | lograder |
| Version | 0.0.14 |
| Summary | An API for easy Gradescope Autograder assignment creation. |
| upload_time | 2025-09-05 05:47:49 |
| author | None |
| maintainer | None |
| home_page | None |
| docs_url | None |
| requires_python | >=3.10 |
| keywords | gradescope, autograder |
| license | Copyright 2025 Logan Dapp (full text below) |
| VCS | GitHub ([lognd/lograder](https://github.com/lognd/lograder)) |
| bugtrack_url | None |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

Copyright 2025 Logan Dapp

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
# `lograder`: A Gradescope Autograder API
----
This project serves to standardize the different kinds of tests
that can be run on student code through the Gradescope autograder.
It was developed for the **University of Florida's Fall 2025
COP3504C** (*Advanced Programming Fundamentals*), taught by
Michael Link. However, you are completely free to use, remix,
refactor, and abuse this code as much as you like.
----
# Project Builders
----
### C++ Complete Project with [I/O Comparison](#output-comparison)
#### Build from C++ Source (*WIP*)
To build from source, you will need to import the C++
`CxxSourceBuilder`. The executable will be given a random
name and placed in the student's build directory (`./build`),
if one exists, or otherwise in the project root directory (`./`).
```py
from lograder.dispatch import CxxSourceDispatcher
from lograder.output import AssignmentSummary
# Note that when you make a test, it's automatically
# registered with the `lograder.tests.registry.TestRegistry`
assignment = CxxSourceDispatcher(project_root="/autograder/submission")
preprocessor_results = assignment.preprocess()
build_results = assignment.build()
runtime_results = assignment.run_tests()
summary = AssignmentSummary(
    preprocessor_output=preprocessor_results.get_output(),
    build_output=build_results.get_output(),
    runtime_summary=runtime_results.get_summary(),
    test_cases=runtime_results.get_test_cases()
)
```
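To see how this connects to the test helpers described under [Test Generation](#test-generation), here is a rough end-to-end sketch. It only recombines pieces shown elsewhere in this README (`make_tests_from_strs` is documented below); registering the tests before calling `run_tests()` is an assumption, not something spelled out here.
```py
from lograder.dispatch import CxxSourceDispatcher
from lograder.output import AssignmentSummary
from lograder.tests import make_tests_from_strs

# Creating tests registers them with the TestRegistry automatically;
# we assume this needs to happen before run_tests() is called.
make_tests_from_strs(
    names=["Hello"],
    inputs=[""],
    expected_outputs=["Hello, World!\n"],
)

assignment = CxxSourceDispatcher(project_root="/autograder/submission")
preprocessor_results = assignment.preprocess()
build_results = assignment.build()
runtime_results = assignment.run_tests()

summary = AssignmentSummary(
    preprocessor_output=preprocessor_results.get_output(),
    build_output=build_results.get_output(),
    runtime_summary=runtime_results.get_summary(),
    test_cases=runtime_results.get_test_cases()
)
```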
#### Build using CMake (*WIP*)
To build from a `CMakeLists.txt`, you will need to import the C++
`CMakeBuilder`. This method automatically runs a breadth-first
search starting in the project root directory (`./`) and locks onto
the first `CMakeLists.txt` it finds (i.e. the one closest to the
project root). If it can't find a `CMakeLists.txt`, it raises an error.

Additionally, the program looks for the following targets first:
`main`, `build`, and `demo`. If none of those exist, it falls back to
the first target that does not match any of `all`, `install`, `test`,
`package`, `package_source`, `edit_cache`, `rebuild_cache`, `clean`,
`help`, `ALL_BUILD`, `ZERO_CHECK`, `INSTALL`, `RUN_TESTS`, and
`PACKAGE`, and runs that one. If it can't find a valid target, it
raises an error.
```py
from lograder.dispatch import CMakeDispatcher
from lograder.output import AssignmentSummary
# Note that when you make a test, it's automatically
# registered with the `lograder.tests.registry.TestRegistry`
assignment = CMakeDispatcher(project_root="/autograder/submission")
preprocessor_results = assignment.preprocess()
build_results = assignment.build()
runtime_results = assignment.run_tests()
summary = AssignmentSummary(
    preprocessor_output=preprocessor_results.get_output(),
    build_output=build_results.get_output(),
    runtime_summary=runtime_results.get_summary(),
    test_cases=runtime_results.get_test_cases()
)
```
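To make the target-selection rules above concrete, here is a small standalone sketch of the same priority logic. It is for illustration only and is not `lograder`'s actual implementation; the `pick_target` helper and its `available_targets` argument are hypothetical.
```py
# Illustrative only: mirrors the target-selection rules described above,
# not lograder's real code. `available_targets` is a hypothetical input,
# e.g. the list of targets parsed out of the generated build system.
PREFERRED = ["main", "build", "demo"]
RESERVED = {
    "all", "install", "test", "package", "package_source",
    "edit_cache", "rebuild_cache", "clean", "help",
    "ALL_BUILD", "ZERO_CHECK", "INSTALL", "RUN_TESTS", "PACKAGE",
}

def pick_target(available_targets: list[str]) -> str:
    # 1. Prefer the well-known entry points, in order.
    for name in PREFERRED:
        if name in available_targets:
            return name
    # 2. Otherwise, take the first target that isn't a reserved/utility target.
    for name in available_targets:
        if name not in RESERVED:
            return name
    raise RuntimeError("No valid CMake target found.")

print(pick_target(["ZERO_CHECK", "ALL_BUILD", "my_project"]))  # -> "my_project"
```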
----
### C++ Catch2 Unit Testing (*WIP*)
----
### Python Complete Project with [I/O Comparison](#output-comparison)
#### Run project from `main.py` (*WIP*)
#### Run project from `pyproject.toml` (*WIP*)
----
### Python pytest Unit Testing (*WIP*)
----
### Makefile Complete Project with [I/O Comparison](#output-comparison) (*WIP*)
To build from a `Makefile`, you will need a `MakefileBuilder`. It follows
the same general idea as the `CMakeBuilder`, except that it searches for
a `Makefile` instead of a `CMakeLists.txt`. Additionally, `MakefileBuilder`
just runs the default `make` target.
```py
from lograder.dispatch import MakefileDispatcher
from lograder.output import AssignmentSummary
# Note that when you make a test, it's automatically
# registered with the `lograder.tests.registry.TestRegistry`
assignment = MakefileDispatcher(project_root="/autograder/submission")
preprocessor_results = assignment.preprocess()
build_results = assignment.build()
runtime_results = assignment.run_tests()
summary = AssignmentSummary(
    preprocessor_output=preprocessor_results.get_output(),
    build_output=build_results.get_output(),
    runtime_summary=runtime_results.get_summary(),
    test_cases=runtime_results.get_test_cases()
)
```
----
# Test Generation
----
## Output Comparison
### Compare Simple Strings
For a small number of tiny test cases, there's no reason
for an over-bloated mess. You can just use:
```py
from typing import Sequence, Optional, List
from pathlib import Path
from lograder.tests import make_tests_from_strs, ExecutableOutputComparisonTest
def make_tests_from_strs(
    *,  # kwargs-only; to avoid confusion with argument order.
    names: Sequence[str],
    inputs: Sequence[str],
    expected_outputs: Sequence[str],
    flag_sets: Optional[Sequence[List[str | Path]]] = None,
    # Pass flags like ["--option-1", "--option-2"] to student programs
    weights: Optional[Sequence[float]] = None,  # Defaults to equal-weight.
) -> List[ExecutableOutputComparisonTest]: ...


# Here's an example of how you'd use the above function:
make_tests_from_strs(
    names=["Test Case 1", "Test Case 2"],
    inputs=["stdin-1", "stdin-2"],
    expected_outputs=["stdout-1", "stdout-2"]
)
```
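The optional `flag_sets` and `weights` parameters can presumably be passed the same way; the example below is inferred from the signature above, and the particular flags and weights are made up.
```py
from lograder.tests import make_tests_from_strs

# Hypothetical usage of the optional parameters, inferred from the signature
# above; the flag values and weights themselves are invented.
make_tests_from_strs(
    names=["Easy Case", "Hard Case"],
    inputs=["stdin-easy", "stdin-hard"],
    expected_outputs=["stdout-easy", "stdout-hard"],
    flag_sets=[["--verbose"], ["--verbose", "--strict"]],  # one flag list per test case
    weights=[1.0, 3.0],  # "Hard Case" counts three times as much
)
```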
### Compare from Files
If you have larger tests, it's much more convenient to read
the input and expected output from files. Luckily, there's a
method for exactly that:
```py
from typing import Sequence, Optional, List
from pathlib import Path
from lograder.tests import make_tests_from_files, FilePath, ExecutableOutputComparisonTest
# `make_tests_from_files` has the following signature.
def make_tests_from_files(
    *,  # kwargs-only; to avoid confusion with argument order.
    names: Sequence[str],
    input_files: Optional[Sequence[FilePath]] = None,  # `input_files` and `input_strs` mutually exclusive.
    input_strs: Optional[Sequence[str]] = None,
    expected_output_files: Optional[Sequence[FilePath]] = None,
    # same with `expected_output_files` and `expected_output_strs`
    expected_output_strs: Optional[Sequence[str]] = None,
    flag_sets: Optional[Sequence[List[str | Path]]] = None,
    # Pass flags like ["--option-1", "--option-2"] to student programs
    weights: Optional[Sequence[float]] = None,  # Defaults to equal-weight.
) -> List[ExecutableOutputComparisonTest]: ...


# Here's an example of how you'd use the above method:
make_tests_from_files(
    names=["Test Case 1", "Test Case 2"],
    input_files=["test/inputs/input1.txt", "test/inputs/input2.txt"],
    expected_output_files=["test/inputs/output1.txt", "test/inputs/output2.txt"]
)
```
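Because the exclusivity rules only pair `input_files` with `input_strs` (and likewise for the expected outputs), mixing string inputs with file-based expected outputs should also work; the example below is an inference from the signature above rather than a documented usage, and the path is a placeholder.
```py
from lograder.tests import make_tests_from_files

# Assumed to be valid per the signature above: a literal string input paired
# with a file-based expected output.
make_tests_from_files(
    names=["Small Input, Big Output"],
    input_strs=["3 4\n"],
    expected_output_files=["test/outputs/big_output1.txt"],
)
```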
### Compare from Template
Finally, sometimes the test cases are very long but
very repetitive. In that case, you can use
`make_tests_from_template` and pass it a `TestCaseTemplate` object:
```py
from typing import Sequence, Optional, List
from pathlib import Path
from lograder.tests import make_tests_from_template, TestCaseTemplate, FilePath
# Here's the signature of a `TemplateSubstitution`
class TemplateSubstitution:
    def __init__(self, *args, **kwargs):
        # Stores args and kwargs to pass to str.format(...) later.
        ...


TSub = TemplateSubstitution  # Here's an alias that's quicker to type.


# Here's the signature of a `TestCaseTemplate`
class TestCaseTemplate:
    def __init__(self, *,
                 inputs: Optional[Sequence[str]] = None,
                 input_template_file: Optional[FilePath] = None,
                 input_template_str: Optional[str] = None,
                 input_substitutions: Optional[Sequence[TemplateSubstitution]] = None,
                 expected_outputs: Optional[Sequence[str]] = None,
                 expected_output_template_file: Optional[FilePath] = None,
                 expected_output_template_str: Optional[str] = None,
                 expected_output_substitutions: Optional[Sequence[TemplateSubstitution]] = None,
                 flag_sets: Optional[Sequence[List[str | Path]]] = None,  # Pass flags like ["--option-1", "--option-2"] to student programs
                 ):
        # +=====================================================================================+
        # |                                  Validation Rules                                   |
        # +=====================================================================================+
        # * If `inputs` is specified, all other `input_*` parameters must be left unspecified.
        # * Same thing with `expected_outputs`.
        # * If `inputs` is not specified, you must specify either (mutually exclusive)
        #   `input_template_file` or `input_template_str` that follows a typical python
        #   format string, and you must specify `input_substitutions`.
        # * Same thing with `expected_output_template_file`, `expected_output_template_str`,
        #   and `expected_output_substitutions`
        ...


# Here's an example of how you would use TestCaseTemplate
test_suite_1 = TestCaseTemplate(
    inputs=["A", "B", "C"],  # Three (3) Total Cases
    expected_output_template_str="{}, {kwarged}, {}",
    expected_output_substitutions=[
        TSub(1.0, 2.0, kwarged="middle-arg-1"),  # Case 1 Substitutions
        TSub(2.0, 5.0, kwarged="middle-arg-2"),  # Case 2 Substitutions
        TSub(7.0, 6.0, kwarged="middle-arg-3"),  # Case 3 Substitutions
    ]
)
make_tests_from_template(
    ["Test 1", "Test 2", "Test 3"],
    test_suite_1
)  # remember to construct the tests!
```
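A template can also be used on the input side. The sketch below follows the validation rules above, but note that the import path for `TemplateSubstitution` is an assumption (only its signature is shown above), and the template string, substitutions, and expected outputs are made up.
```py
from lograder.tests import make_tests_from_template, TestCaseTemplate
from lograder.tests import TemplateSubstitution as TSub  # assumed import location

# Assumed usage of the input-template form, per the validation rules above.
test_suite_2 = TestCaseTemplate(
    input_template_str="add {} {}\n",
    input_substitutions=[
        TSub(1, 2),    # Case 1 -> "add 1 2\n"
        TSub(10, 20),  # Case 2 -> "add 10 20\n"
    ],
    expected_outputs=["3\n", "30\n"],
)
make_tests_from_template(["Add Small", "Add Big"], test_suite_2)
```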
### Compare from Python Generator/Iterable
Sometimes you want to generate a ton of test cases (especially
small ones), and it would be incredibly wasteful to have thousands
of single-line files. Instead, you can write a Python generator
function whose yielded items follow either one of the following
`Protocol`s or the `TypedDict`.
```py
from typing import Protocol, TypedDict, Generator, NotRequired, List
from pathlib import Path
from lograder.tests import make_tests_from_generator
# Your generator may yield objects following these protocols...
class TestCaseProtocol(Protocol):
    def get_name(self): ...
    def get_input(self): ...
    def get_expected_output(self): ...

class FlaggedTestCaseProtocol(TestCaseProtocol, Protocol):
    def get_flags(self) -> List[str | Path]: ...

# Notice that TestCaseProtocol defaults to equal weights.
class WeightedTestCaseProtocol(TestCaseProtocol, Protocol):
    def get_weight(self): ...

# ... or it may yield a dict with the following keys.
class TestCaseDict(TypedDict):
    name: str
    input: str
    expected_output: str
    weight: NotRequired[float]  # Defaults to 1.0, a.k.a. equal-weight.
    flags: NotRequired[List[str | Path]]

# Here's an example of the syntax as well as the required
# signature of such a generator:
@make_tests_from_generator
def test_suite_1() -> Generator[TestCaseProtocol | WeightedTestCaseProtocol | TestCaseDict, None, None]:
    pass

# You'll have to query the `TestRegistry` from `lograder.tests` to access these tests directly, though.
```
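As a concrete illustration of the dict form, the generator below yields plain dicts with the keys listed above; it is a minimal sketch, and the doubling tests themselves are invented.
```py
from typing import Generator
from lograder.tests import make_tests_from_generator

# Minimal sketch using the dict form described above; the test content
# (doubling small integers) is made up for illustration.
@make_tests_from_generator
def doubling_tests() -> Generator[dict, None, None]:
    for i in range(1, 101):
        yield {
            "name": f"Double {i}",
            "input": f"{i}\n",
            "expected_output": f"{2 * i}\n",
            # "weight" and "flags" are optional; omitting them keeps equal weights.
        }
```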
# Raw data

```json
{
"_id": null,
"home_page": null,
"name": "lograder",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "gradescope, autograder",
"author": null,
"author_email": "lognd <logan@logand.app>",
"download_url": "https://files.pythonhosted.org/packages/cb/5d/1450324514eea3675fd902824665d2b170f686337e7f58dd923b14f0e301/lograder-0.0.14.tar.gz",
"platform": null,
"description": "# `lograder`: A Gradescope Autograder API\n\n----\nThis project just serves to standard different kinds of tests\nthat can be run on student code for the Gradescope autograder.\nAdditionally, this project was developed for the **University\nof Florida's Fall 2025 COP3504C** (*Advanced Programming \nFundamentals*), taught by Michael Link. However, you are\ncompletely free to use, remix, refactor, and abuse this code\nas much as you like.\n\n----\n# Project Builders\n\n----\n\n### C++ Complete Project with [I/O Comparison](#output-comparison)\n\n#### Build from C++ Source (*WIP*)\nTo build from source, you will need to import the C++\n`CxxSourceBuilder`. The executable will be randomly\nnamed and put in either a build directory, if the student\nhas one (`./build`) or the project root directory (`./`).\n\n```py\nfrom lograder.dispatch import CxxSourceDispatcher\nfrom lograder.output import AssignmentSummary\n\n# Note that when you make a test, it's automatically\n# registered with the `lograder.tests.registry.TestRegistry`\n\nassignment = CxxSourceDispatcher(project_root=\"/autograder/submission\")\npreprocessor_results = assignment.preprocess()\nbuild_results = assignment.build()\nruntime_results = assignment.run_tests()\n\nsummary = AssignmentSummary(\n preprocessor_output=preprocessor_results.get_output(),\n build_output=build_results.get_output(),\n runtime_summary=runtime_results.get_summary(),\n test_cases=runtime_results.get_test_cases()\n)\n```\n\n#### Build using CMake (*WIP*)\nTo build from a `CMakeLists.txt`, you will need to import the C++\n`CMakeBuilder`. This method will automatically run a breadth-first\nsearch starting in the project root directory (`./`) and \"lock on\"\nthe first (i.e. the file in the highest-level) `CMakeLists.txt` that\nit finds. If it can't find a `CMakeLists.txt`, it will raise an error.\n\nAdditionally, the program will look for the following targets first:\n`main`, `build`, and `demo`. Afterward, it will search for any target\nthat doesn't match: `all`, `install`, `test`, `package`, `package_source`,\n`edit_cache`, `rebuild_cache`, `clean`, `help`, `ALL_BUILD`, `ZERO_CHECK`,\n`INSTALL`, `RUN_TESTS`, and `PACKAGE`, and run the first target that it\nfinds. If it can't find a valid target, it will raise an error.\n\n```py\nfrom lograder.dispatch import CMakeDispatcher\nfrom lograder.output import AssignmentSummary\n\n# Note that when you make a test, it's automatically\n# registered with the `lograder.tests.registry.TestRegistry`\n\nassignment = CMakeDispatcher(project_root=\"/autograder/submission\")\npreprocessor_results = assignment.preprocess()\nbuild_results = assignment.build()\nruntime_results = assignment.run_tests()\n\nsummary = AssignmentSummary(\n preprocessor_output=preprocessor_results.get_output(),\n build_output=build_results.get_output(),\n runtime_summary=runtime_results.get_summary(),\n test_cases=runtime_results.get_test_cases()\n)\n```\n----\n\n### C++ Catch2 Unit Testing (*WIP*)\n\n----\n\n### Python Complete Project with [I/O Comparison](#output-comparison)\n\n#### Run project from `main.py` (*WIP*)\n\n#### Run project from `pyproject.toml` (*WIP*)\n\n----\n\n### Python pytest Unit Testing (*WIP*)\n\n----\n\n### Makefile Complete Project with [I/O Comparison](#output-comparison) (*WIP*)\n\nTo build from a `Makefile`, you will need a `MakefileBuilder`. It follows\nthe same general idea as the `CMakeBuilder` except that it searches for\n`Makefile` instead of `CMakeLists.txt`. 
Additionally, `MakefileBuilder`\nwill just run the default `make`.\n\n```py\nfrom lograder.dispatch import MakefileDispatcher\nfrom lograder.output import AssignmentSummary\n\n# Note that when you make a test, it's automatically\n# registered with the `lograder.tests.registry.TestRegistry`\n\nassignment = MakefileDispatcher(project_root=\"/autograder/submission\")\npreprocessor_results = assignment.preprocess()\nbuild_results = assignment.build()\nruntime_results = assignment.run_tests()\n\nsummary = AssignmentSummary(\n preprocessor_output=preprocessor_results.get_output(),\n build_output=build_results.get_output(),\n runtime_summary=runtime_results.get_summary(),\n test_cases=runtime_results.get_test_cases()\n)\n```\n\n----\n# Test Generation\n\n----\n\n## Output Comparison\n\n### Compare Simple Strings\n\nFor the smallest number of tiny test cases, there's no reason\nto have an over-bloated mess. You can just use:\n\n```py\nfrom typing import Sequence, Optional, List\nfrom pathlib import Path\nfrom lograder.tests import make_tests_from_strs, ExecutableOutputComparisonTest\n\n\ndef make_test_from_strs(\n *, # kwargs-only; to avoid confusion with argument sequence.\n names: Sequence[str],\n inputs: Sequence[str],\n expected_outputs: Sequence[str],\n flag_sets: Optional[Sequence[List[str | Path]]] = None,\n # Pass flags like [\"--option-1\", \"--option-2\"] to student programs\n weights: Optional[Sequence[float]] = None, # Defaults to equal-weight.\n) -> List[ExecutableOutputComparisonTest]: ...\n\n\n# Here's an example of how you'd use the above method:\nmake_tests_from_strs(\n names=[\"Test Case 1\", \"Test Case 2\"],\n inputs=[\"stdin-1\", \"stdin-2\"],\n expected_outputs=[\"stdout-1\", \"stdout-2\"]\n)\n```\n\n### Compare from Files\n\nIf you have a larger test, it would be very convenient to\nread files for input and output. Luckily, there's just the\nmethod to do so:\n\n```py\nfrom typing import Sequence, Optional, List\nfrom pathlib import Path\nfrom lograder.tests import make_tests_from_files, FilePath, ExecutableOutputComparisonTest\n\n\n# `make_tests_from_files` has the following signature.\ndef make_tests_from_files(\n *, # kwargs-only; to avoid confusion with argument sequence.\n names: Sequence[str],\n input_files: Optional[Sequence[FilePath]] = None, # `input_files` and `input_strs` mutually exclusive.\n input_strs: Optional[Sequence[str]] = None,\n expected_output_files: Optional[Sequence[FilePath]] = None,\n # same with `expected_output_files` and `expected_output_strs`\n expected_output_strs: Optional[Sequence[str]] = None,\n flag_sets: Optional[Sequence[List[str | Path]]] = None,\n # Pass flags like [\"--option-1\", \"--option-2\"] to student programs\n weights: Optional[Sequence[float]] = None, # Defaults to equal-weight.\n) -> List[ExecutableOutputComparisonTest]: ...\n\n\n# Here's an example of how you'd use the above method:\nmake_tests_from_files(\n names=[\"Test Case 1\", \"Test Case 2\"],\n input_files=[\"test/inputs/input1.txt\", \"test/inputs/input2.txt\"],\n expected_output_files=[\"test/inputs/output1.txt\", \"test/inputs/output2.txt\"]\n)\n```\n\n### Compare from Template\n\nFinally, sometimes the test-cases might be very long but \nvery repetitive. 
You can use `make_tests_from_template` \nand pass a `TestCaseTemplate` object and ...\n\n```py\nfrom typing import Sequence, Optional, List\nfrom pathlib import Path\nfrom lograder.tests import make_tests_from_template, TestCaseTemplate, FilePath\n\n\n# Here's the signature of a `TemplateSubstitution`\nclass TemplateSubstitution:\n def __init__(self, *args, **kwargs):\n # Stores args and kwargs to pass to str.format(...) later.\n ...\n\n\nTSub = TemplateSubstitution # Here's an alias that's quicker to type.\n\n\n# Here's the signature of a `TestCaseTemplate`\nclass TestCaseTemplate:\n def __init__(self, *,\n inputs: Optional[Sequence[str]] = None,\n input_template_file: Optional[FilePath] = None,\n input_template_str: Optional[str] = None,\n input_substitutions: Optional[Sequence[TemplateSubstitution]] = None,\n expected_outputs: Optional[Sequence[str]] = None,\n expected_output_template_file: Optional[FilePath] = None,\n expected_output_template_str: Optional[str] = None,\n expected_output_substitutions: Optional[Sequence[TemplateSubstitution]] = None,\n flag_sets: Optional[Sequence[List[str | Path]]] = None, # Pass flags like [\"--option-1\", \"--option-2\"] to student programs\n ):\n # +=====================================================================================+\n # | Validation Rules |\n # +=====================================================================================+\n # * If `inputs` is specified, all other `input_*` parameters must be left unspecified.\n # * Same thing with `expected_outputs`.\n # * If `inputs` is not specified, you must specify either (mutually exclusive) \n # `input_template_file` or `input_template_str` that follows a typical python\n # format string, and you must specify `input_substitutions`.\n # * Same thing with `expected_output_template_file`, `expected_output_template_str`, \n # and `expected_output_substitutions`\n ...\n\n\n# Here's an example of how you would use TestCaseTemplate\ntest_suite_1 = TestCaseTemplate(\n inputs=[\"A\", \"B\", \"C\"], # Three (3) Total Cases\n expected_output_template_str=\"{}, {kwarged}, {}\",\n expected_output_substitutions=[\n TSub(1.0, 2.0, kwarged=\"middle-arg-1\"), # Case 1 Substitutions\n TSub(2.0, 5.0, kwarged=\"middle-arg-2\"), # Case 2 Substitutions\n TSub(7.0, 6.0, kwarged=\"middle-arg-3\"), # Case 3 Substitutions\n ]\n)\nmake_tests_from_template(\n [\"Test 1\", \"Test 2\", \"Test 3\"],\n test_suite_1\n) # remember to construct the tests!\n\n```\n\n### Compare from Python Generator/Iterable\n\nSometimes, you want to generate a ton of test-cases (especially\nsmall test-cases), and it would be incredibly waste to have thousands\nof single-line files. You can create a python generator function that\nfollows either the following `Protocol` or `TypedDict`.\n\n```py\nfrom typing import Protocol, TypedDict, Generator, NotRequired, List\nfrom pathlib import Path\nfrom lograder.tests import make_tests_from_generator\n\n\n# Your generator may return objects following the protocol...\nclass TestCaseProtocol(Protocol):\n def get_name(self): ...\n\n def get_input(self): ...\n\n def get_expected_output(self): ...\n\nclass FlaggedTestCaseProtocol(TestCaseProtocol, Protocol):\n def get_flags(self) -> List[str | Path]: ...\n \n# Notice that TestCaseProtocol defaults to equal-weights\nclass WeightedTestCaseProtocol(TestCaseProtocol, Protocol):\n def get_weight(self): ...\n\n# ... 
or you can directly return a dict with the following keys.\nclass TestCaseDict(TypedDict):\n name: str\n input: str\n expected_output: str\n weight: NotRequired[float] # Defaults to 1.0, a.k.a. equal-weight.\n flags: NotRequired[List[str | Path]]\n\n\n# Here's an example of the syntax as well as the required \n# signature of such a method:\n@make_tests_from_generator\ndef test_suite_1() -> Generator[TestCaseProtocol | WeightedTestCaseProtocol | TestCaseDict, None, None]:\n pass\n\n# You'll have to query the `TestRegistry` from `lograder.tests` to access these tests directly, though.\n```\n\n\n\n",
"bugtrack_url": null,
"license": "Copyright 2025 Logan Dapp\n \n Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \u201cSoftware\u201d), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:\n \n The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.\n \n THE SOFTWARE IS PROVIDED \u201cAS IS\u201d, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.",
"summary": "An API for easy Gradescope Autograder assignment creation.",
"version": "0.0.14",
"project_urls": {
"Documentation": "https://github.com/lognd/lograder",
"Homepage": "https://github.com/lognd/lograder",
"Issues": "https://github.com/lognd/lograder/issues",
"Source": "https://github.com/lognd/lograder"
},
"split_keywords": [
"gradescope",
" autograder"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "ff7acc2e56b5c791ab11f429a55ff2a868f41339295b01e22f0c02dc54be15a9",
"md5": "002f23e83219c8b8fa6dd2f08903dd22",
"sha256": "53374712cab60d61c741ee3ec0fc78ab44dd5d82ed8130d2c9e208f309be92e0"
},
"downloads": -1,
"filename": "lograder-0.0.14-py3-none-any.whl",
"has_sig": false,
"md5_digest": "002f23e83219c8b8fa6dd2f08903dd22",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 45715,
"upload_time": "2025-09-05T05:47:47",
"upload_time_iso_8601": "2025-09-05T05:47:47.654291Z",
"url": "https://files.pythonhosted.org/packages/ff/7a/cc2e56b5c791ab11f429a55ff2a868f41339295b01e22f0c02dc54be15a9/lograder-0.0.14-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "cb5d1450324514eea3675fd902824665d2b170f686337e7f58dd923b14f0e301",
"md5": "19874b9bcbbd11d4d266623139358eea",
"sha256": "a02d7f709a315bf531e68001730e2a30b29bb67f29c76f3b4f932d8768425ce4"
},
"downloads": -1,
"filename": "lograder-0.0.14.tar.gz",
"has_sig": false,
"md5_digest": "19874b9bcbbd11d4d266623139358eea",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 35414,
"upload_time": "2025-09-05T05:47:49",
"upload_time_iso_8601": "2025-09-05T05:47:49.963086Z",
"url": "https://files.pythonhosted.org/packages/cb/5d/1450324514eea3675fd902824665d2b170f686337e7f58dd923b14f0e301/lograder-0.0.14.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-09-05 05:47:49",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "lognd",
"github_project": "lograder",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "lograder"
}
```