# grandjury
Python client for the GrandJury ML evaluation and verdict analysis API.
This package provides comprehensive access to the GrandJury server for ML model evaluation and voting analysis, supporting:
- **Model scoring** with decay-adjusted algorithms
- **Vote analysis** across multiple dimensions (time, completeness, confidence)
- **Multiple data formats** (pandas, polars, CSV, parquet, dict/list)
- **Performance optimizations** with optional dependencies
- **Backward compatibility** with existing code
**Patent Pending.**
## Installation
```bash
pip install grandjury
```
Optional performance dependencies:
```bash
pip install "grandjury[performance]"  # Installs msgspec, pyarrow, polars
```
(The quotes keep shells like zsh from interpreting the square brackets.)
## Quick Start
### Basic Model Evaluation
```python
from grandjury import GrandJuryClient
# Initialize client
client = GrandJuryClient(api_key="your-api-key")
# Evaluate model performance
result = client.evaluate_model(
    previous_score=0.7,
    votes=[0.9, 0.8, 0.6],
    reputations=[1.0, 1.0, 0.8]
)
print(f"Score: {result['score']:.4f}")
```
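For intuition, the sketch below shows one plausible way a reputation-weighted, history-blended score could be computed locally. The blending factor `alpha` and the formula itself are illustrative assumptions, not the server's actual decay-adjusted algorithm, which is computed remotely.

```python
# Illustrative only: NOT the server's actual decay algorithm.
# Blends the previous score with a reputation-weighted average of new votes,
# using a hypothetical blending factor `alpha`.
def weighted_update(previous_score, votes, reputations, alpha=0.5):
    # Reputation-weighted mean of the incoming votes
    weighted = sum(v * r for v, r in zip(votes, reputations)) / sum(reputations)
    # Blend with the prior score: alpha keeps history, (1 - alpha) adopts new votes
    return alpha * previous_score + (1 - alpha) * weighted

print(f"{weighted_update(0.7, [0.9, 0.8, 0.6], [1.0, 1.0, 0.8]):.4f}")  # 0.7393
```

A higher `alpha` would make the score more stable across evaluation rounds; a lower one makes it track recent votes more closely.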
### Vote Analysis with Multiple Data Formats
```python
import pandas as pd
import polars as pl
# Your vote data
vote_data = [
    {
        "inference_id": 1,
        "vote": True,
        "voter_id": 101,
        "vote_time": "2024-07-07T19:22:30",
        # ... other fields
    },
    # ... more votes
]
# No authentication needed for analysis endpoints
client = GrandJuryClient()
# Use with different data formats
histogram = client.vote_histogram(vote_data) # dict/list
histogram = client.vote_histogram(pd.DataFrame(vote_data)) # pandas
histogram = client.vote_histogram(pl.DataFrame(vote_data)) # polars
histogram = client.vote_histogram("votes.csv") # CSV file
histogram = client.vote_histogram("votes.parquet") # Parquet file
# Vote completeness analysis
completeness = client.vote_completeness(
    data=vote_data,
    voter_list=[101, 102, 103]
)
# Population confidence
confidence = client.population_confidence(
    data=vote_data,
    voter_list=[101, 102, 103]
)
# Majority vote analysis
majority = client.majority_good_votes(
    data=vote_data,
    good_vote=True,
    threshold=0.5
)
# Vote distribution per inference
distribution = client.votes_distribution(vote_data)
```
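To exercise the file-based inputs above (`"votes.csv"`), you need a vote file on disk. This standard-library sketch writes sample records in the same shape as `vote_data`; the field names follow the example above, and the second record is made up for illustration.

```python
# Write sample vote records to CSV so the file-based client calls have input.
import csv

votes = [
    {"inference_id": 1, "vote": True, "voter_id": 101, "vote_time": "2024-07-07T19:22:30"},
    {"inference_id": 1, "vote": False, "voter_id": 102, "vote_time": "2024-07-07T19:25:10"},
]
with open("votes.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=votes[0].keys())
    writer.writeheader()
    writer.writerows(votes)
# then: histogram = client.vote_histogram("votes.csv")
```

For Parquet output you would go through pandas or polars (e.g. `pd.DataFrame(votes).to_parquet("votes.parquet")`), which requires pyarrow.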
### Backward Compatibility
```python
# Original function still works
from grandjury import evaluate_model
result = evaluate_model(
    predictions=["Model output 1", "Model output 2"],
    references=["Expected 1", "Expected 2"],
    api_key="your-api-key"
)
```
## API Endpoints
| Method | Description | Authentication |
|--------|-------------|----------------|
| `evaluate_model()` | Model scoring with decay algorithms | Required |
| `vote_histogram()` | Vote time distribution analysis | Optional |
| `vote_completeness()` | Voting completeness metrics | Optional |
| `population_confidence()` | Population confidence analysis | Optional |
| `majority_good_votes()` | Majority vote counting | Optional |
| `votes_distribution()` | Vote distribution per inference | Optional |
## Performance Features
The client automatically uses performance optimizations when available:
- **msgspec**: Faster JSON serialization
- **PyArrow**: Efficient Parquet file reading
- **Polars**: Native DataFrame support
Install with: `pip install msgspec pyarrow polars`
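To see which accelerators the client can pick up in your environment, you can probe for them with the standard library; when a package is absent, the client falls back to slower defaults.

```python
# Check which optional performance packages are importable here.
import importlib.util

for name in ("msgspec", "pyarrow", "polars"):
    status = "available" if importlib.util.find_spec(name) else "not installed"
    print(f"{name}: {status}")
```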
## Error Handling
```python
try:
    result = client.vote_histogram(invalid_data)
except Exception as e:
    print(f"API Error: {e}")
```
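Since the package's specific exception types aren't documented here, one pragmatic pattern for transient network failures is a generic retry wrapper around any client call. The helper below is an illustrative sketch, not part of the grandjury API; `max_attempts` and the exponential backoff schedule are arbitrary choices.

```python
# Hypothetical retry helper (not part of grandjury): retries a callable with
# exponential backoff, re-raising the last error once attempts are exhausted.
import time

def with_retries(fn, max_attempts=3, base_delay=1.0):
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * 2 ** (attempt - 1))

# usage: result = with_retries(lambda: client.vote_histogram(vote_data))
```

Catching a narrower exception class than `Exception` would be preferable once you know what the client actually raises for network vs. validation errors.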
## Server URL Configuration
```python
# Default: https://grandjury-server.onrender.com/api/v1
client = GrandJuryClient()
# Custom server
client = GrandJuryClient(base_url="https://your-server.com")
# Automatically appends /api/v1 if missing
```
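One common way to switch between deployments is to read the base URL from the environment. `GRANDJURY_BASE_URL` below is a hypothetical variable name chosen for this sketch, not one the package defines.

```python
# Resolve the server URL from the environment, falling back to the default.
# GRANDJURY_BASE_URL is an illustrative name, not defined by the package.
import os

base_url = os.environ.get(
    "GRANDJURY_BASE_URL",
    "https://grandjury-server.onrender.com/api/v1",
)
# client = GrandJuryClient(base_url=base_url)
print(base_url)
```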