# Evaluatr
<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->
[PyPI](https://pypi.org/project/evaluatr/)
[License](LICENSE)
[Documentation](https://franckalbinet.github.io/evaluatr/)
## What is Evaluatr?
`Evaluatr` is an AI-powered system that automates the creation of
**Evidence Maps** - structured, visual tools that organize what we know
(and don’t know) about programs, policies, and interventions. It
transforms hundreds of hours of manual document review into an
automated, intelligent process.
### The Challenge We Solve
Traditional evidence mapping requires hundreds of staff-hours manually
reviewing and tagging evaluation documents. Evaluatr automates this
process while maintaining **accuracy** through human-AI collaboration,
making evidence mapping accessible and efficient.
## Key Features (WIP)
### Document Processing
- **Multi-Format Repository Reading**: Seamlessly process evaluation
repositories from diverse organizations with standardized outputs
- **OCR Processing**: Extract text from scanned PDFs and images to
ensure no valuable information is missed
- **Intelligent Document Chunking**: Break down documents into
meaningful segments for optimal analysis
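The chunking step can be sketched as a simple paragraph-based splitter. The function below is a hypothetical illustration of the idea, not Evaluatr's actual implementation:

``` python
def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Split text into chunks of at most max_chars, breaking on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        # Start a new chunk if adding this paragraph would exceed the limit
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "\n\n".join(f"Paragraph {i} " + "x" * 200 for i in range(10))
chunks = chunk_text(doc, max_chars=500)
print(len(chunks))  # 5
```

Real chunkers typically also respect sentence boundaries and add overlap between chunks so retrieval does not miss context that straddles a split.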
### AI-Powered Analysis
- **Automated Information Extraction**: Extract key program details,
context, and findings using advanced AI models
- **Smart Document Tagging**: AI-assisted categorization and labeling of
evaluation reports
- **Enhanced RAG Implementation**: Enriched chunking strategy for more
accurate retrieval and generation
### Flexibility & Openness
- **Framework Agnostic Design**: Compatible with multiple evaluation
frameworks including IOM Strategic Results Framework and Global
Compact on Migration
- **Open Training Data**: Contribute to and benefit from
community-curated training datasets
## Installation
### From PyPI (Recommended)
``` bash
pip install evaluatr
```
### From GitHub
``` bash
pip install git+https://github.com/franckalbinet/evaluatr.git
```
### Development Installation
``` bash
# Clone the repository
git clone https://github.com/franckalbinet/evaluatr.git
cd evaluatr
# Install in development mode
pip install -e .
# Make changes in nbs/ directory, then compile:
nbdev_prepare
```
## Quick Start
### Reading an IOM Evaluation Repository
``` python
from evaluatr.readers import IOMRepoReader
# Initialize reader with your Excel file
reader = IOMRepoReader('files/test/eval_repo_iom.xlsx')
# Process the repository
evaluations = reader()
# Each evaluation is a standardized dictionary
for evaluation in evaluations[:3]:  # Show first 3
    print(f"ID: {evaluation['id']}")
    print(f"Title: {evaluation['meta']['Title']}")
    print(f"Documents: {len(evaluation['docs'])}")
    print("---")
```
    ID: 1a57974ab89d7280988aa6b706147ce1
    Title: EX-POST EVALUATION OF THE PROJECT: NIGERIA: STRENGTHENING REINTEGRATION FOR RETURNEES (SRARP) - PHASE II
    Documents: 2
    ---
    ID: c660e774d14854e20dc74457712b50ec
    Title: FINAL EVALUATION OF THE PROJECT: STRENGTHEN BORDER MANAGEMENT AND SECURITY IN MALI AND NIGER THROUGH CAPACITY BUILDING OF BORDER AUTHORITIES AND ENHANCED DIALOGUE WITH BORDER COMMUNITIES
    Documents: 2
    ---
    ID: 2cae361c6779b561af07200e3d4e4051
    Title: Final Evaluation of the project "SUPPORTING THE IMPLEMENTATION OF AN E RESIDENCE PLATFORM IN CABO VERDE"
    Documents: 2
    ---
Export the processed evaluations to JSON:

``` python
reader.to_json('processed_evaluations.json')
```
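The exported file can be loaded back with the standard library. The record below mirrors the `id`/`meta`/`docs` layout shown in the Quick Start output; the data itself is made up for illustration:

``` python
import json
import tempfile
from pathlib import Path

# A minimal record following the standardized structure shown above
sample = [{"id": "abc123", "meta": {"Title": "Example evaluation"}, "docs": ["report.pdf"]}]

with tempfile.TemporaryDirectory() as tmp:
    path = Path(tmp) / "processed_evaluations.json"
    path.write_text(json.dumps(sample, indent=2))
    evaluations = json.loads(path.read_text())

print(evaluations[0]["meta"]["Title"])  # Example evaluation
```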
### Downloading Evaluation Documents
``` python
from evaluatr.downloaders import download_docs
from pathlib import Path
fname = 'files/test/evaluations.json'
base_dir = Path("files/test/pdf_library")
download_docs(fname, base_dir=base_dir, n_workers=0, overwrite=True)
```
    (#24) ['Downloaded Internal%20Evaluation_NG20P0516_MAY_2023_FINAL_Abderrahim%20EL%20MOULAT.pdf','Downloaded RR0163_Evaluation%20Brief_MAY_%202023_Abderrahim%20EL%20MOULAT.pdf','Downloaded IB0238_Evaluation%20Brief_FEB_%202023_Abderrahim%20EL%20MOULAT.pdf','Downloaded Internal%20Evaluation_IB0238__FEB_2023_FINAL%20RE_Abderrahim%20EL%20MOULAT.pdf','Downloaded IB0053_Evaluation%20Brief_SEP_%202022_Abderrahim%20EL%20MOULAT.pdf','Downloaded Internal%20Evaluation_IB0053_OCT_2022_FINAL_Abderrahim%20EL%20MOULAT_0.pdf','Downloaded Internal%20Evaluation_NC0030_JUNE_2022_FINAL_Abderrahim%20EL%20MOULAT_0.pdf','Downloaded NC0030_Evaluation%20Brief_June%202022_Abderrahim%20EL%20MOULAT.pdf','Downloaded CD0015_Evaluation%20Brief_May%202022_Abderrahim%20EL%20MOULAT.pdf','Downloaded Projet%20CD0015_Final%20Evaluation%20Report_May_202_Abderrahim%20EL%20MOULAT.pdf','Downloaded Internal%20Evaluation_Retour%20Vert_JUL_2021_Fina_Abderrahim%20EL%20MOULAT.pdf','Downloaded NC0012_Evaluation%20Brief_JUL%202021_Abderrahim%20EL%20MOULAT.pdf','Downloaded Nigeria%20GIZ%20Internal%20Evaluation_JANUARY_2021__Abderrahim%20EL%20MOULAT.pdf','Downloaded Nigeria%20GIZ%20Project_Evaluation%20Brief_JAN%202021_Abderrahim%20EL%20MOULAT_0.pdf','Downloaded Evaluation%20Brief_ARCO_Shiraz%20JERBI.pdF','Downloaded Final%20evaluation%20report_ARCO_Shiraz%20JERBI_1.pdf','Downloaded Management%20Response%20Matrix_ARCO_Shiraz%20JERBI.pdf','Downloaded IOM%20MANAGEMENT%20RESPONSE%20MATRIX.pdf','Downloaded IOM%20Niger%20-%20MIRAA%20III%20-%20Final%20Evaluation%20Report%20%28003%29.pdf','Downloaded CE.0369%20-%20IDEE%20-%20ANNEXE%201%20-%20Rapport%20Recherche_Joanie%20DUROCHER_0.pdf'...]
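After a run, you can sanity-check the download directory with the standard library. This sketch builds a throwaway directory to demonstrate; note the case-insensitive suffix match, since the output above includes a file ending in `.pdF`:

``` python
import tempfile
from pathlib import Path

# Simulate a download directory with a few placeholder files
with tempfile.TemporaryDirectory() as tmp:
    base_dir = Path(tmp) / "pdf_library"
    base_dir.mkdir()
    for name in ["report_a.pdf", "brief_b.pdF", "notes.txt"]:
        (base_dir / name).touch()

    # Count PDFs recursively, matching the extension case-insensitively
    pdfs = [p for p in base_dir.rglob("*") if p.suffix.lower() == ".pdf"]
    print(len(pdfs))  # 2
```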
## Documentation
- **Full Documentation**: [GitHub
  Pages](https://franckalbinet.github.io/evaluatr/)
- **API Reference**: Available in the documentation
- **Examples**: See the `nbs/` directory for Jupyter notebooks
## Contributing
We welcome contributions! Here’s how you can help:
1. **Fork** the repository
2. **Create** a feature branch
(`git checkout -b feature/amazing-feature`)
3. **Make** your changes in the `nbs/` directory
4. **Compile** with `nbdev_prepare`
5. **Commit** your changes (`git commit -m 'Add amazing feature'`)
6. **Push** to the branch (`git push origin feature/amazing-feature`)
7. **Open** a Pull Request
### Development Setup
``` bash
# Install development dependencies
pip install -e .
# Make changes in nbs/ directory
# ...
# Compile changes to the evaluatr package
nbdev_prepare
```
## License
This project is licensed under the MIT License - see the
[LICENSE](LICENSE) file for details.
## Acknowledgments
- Built with [nbdev](https://nbdev.fast.ai/) for literate programming
- Uses [pandas](https://pandas.pydata.org/) for data processing
- Powered by [rich](https://rich.readthedocs.io/) for beautiful terminal
output
## Support
- **Issues**: [GitHub
  Issues](https://github.com/franckalbinet/evaluatr/issues)
- **Discussions**: [GitHub
  Discussions](https://github.com/franckalbinet/evaluatr/discussions)
- **Email**: \[Your email here\]