ai-resume-parser

Name: ai-resume-parser
Version: 1.0.3
Summary: AI-powered resume parser with parallel processing capabilities
Upload time: 2025-07-25 16:16:54
Requires Python: >=3.8
Keywords: resume parsing, AI, NLP, parallel processing, recruitment, HR, LLM, resume, parser, Gemini, Google GenAI, OpenAI, job
Requirements: pydantic>=2.0.0, langchain-core>=0.1.0, python-dateutil>=2.8.0, pdfminer.six>=20221105, PyMuPDF>=1.23.0, python-docx>=0.8.11, phonenumbers>=8.13.0
# ResumeParser Pro 🚀

[![PyPI version](https://badge.fury.io/py/ai-resume-parser.svg)](https://badge.fury.io/py/ai-resume-parser)
[![Python Support](https://img.shields.io/pypi/pyversions/ai-resume-parser.svg)](https://pypi.org/project/ai-resume-parser/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Production-ready AI-powered resume parser with parallel processing capabilities. Extract structured data from resumes in PDF, DOCX, and TXT formats using state-of-the-art language models.

## 🌟 Features

- **🤖 AI-Powered**: Uses advanced language models (GPT, Gemini, Claude, etc.)
- **⚡ Parallel Processing**: Process multiple resumes simultaneously
- **📊 Structured Output**: Returns clean, validated JSON data
- **🎯 High Accuracy**: Extracts 20+ fields with intelligent categorization
- **📈 Production Ready**: Robust error handling and logging
- **🔌 Easy Integration**: Simple API with just 3 lines of code

## 🚀 Quick Start

### Installation
```bash
pip install ai-resume-parser
```
For full functionality (recommended):
```bash
pip install ai-resume-parser[full]
```


### Basic Usage
```python
from resumeparser_pro import ResumeParserPro

# Initialize the parser
parser = ResumeParserPro(
    provider="google_genai",
    model_name="gemini-2.0-flash",
    api_key="your-api-key"
)

# Parse a single resume
result = parser.parse_resume("resume.pdf")
print(f"Name: {result.resume_data.contact_info.full_name}")
print(f"Experience: {result.resume_data.total_experience_months} months")
```
### Example Parsed Resume Data

```python
{'file_path': 'resume.pdf',
 'success': True,
 'resume_data': {'contact_info': {'full_name': 'Jason Miller',
   'email': 'email@email.com',
   'phone': '+1386862',
   'location': 'Los Angeles, CA 90291, United States',
   'linkedin': 'https://www.linkedin.com/in/jason-miller',
   'github': None,
   'portfolio': None,
   'other_profiles': ['https://www.pinterest.com/jason-miller']},
  'professional_summary': 'Experienced Amazon Associate with five years' tenure in a shipping yard setting, maintaining an average picking/packing speed of 98%. Holds a zero error% score in adhering to packing specs and 97% error-free ratio on packing records. Completed a certificate in Warehouse Sanitation and has a valid commercial driver's license.',
  'skills': [{'category': 'Technical Skills',
    'skills': ['Picking',
     'Packing',
     'Inventory Management',
     'Shipping',
     'Record Keeping',
     'Kanban System',
     'Kaizen',
     'Gemba',
     '5S'],
    'proficiency_level': None},
   {'category': 'Soft Skills',
    'skills': ['Mathematics'],
    'proficiency_level': None},
   {'category': 'Other',
    'skills': ['Cleaning Equipment', 'Deep Sanitation Practices'],
    'proficiency_level': None}],
  'work_experience': [{'job_title': 'Amazon Warehouse Associate',
    'company': 'Amazon',
    'location': 'Miami Gardens',
    'employment_type': None,
    'start_date': '2021-01',
    'end_date': '2022-07',
    'duration_months': 19,
    'description': 'Performed all warehouse laborer duties such as packing, picking, counting, record keeping, and maintaining a clean area.',
    'responsibilities': [],
    'achievements': ['Consistently maintained picking/packing speeds in the 98th percentile.',
     'Picked all orders with 100% accuracy despite high speeds.',
     'Maintained a clean work area, meeting 97.5% of the inspection requirements.'],
    'technologies': []},
   {'job_title': 'Laboratory Inventory Assistant',
    'company': 'Dunrea Laboratories',
    'location': 'Orlando',
    'employment_type': 'Full-time',
    'start_date': '2019-01',
    'end_date': '2020-12',
    'duration_months': 24,
    'description': 'Full-time lab assistant in a small, regional laboratory tasked with · participating in Kaizen Events, Gemba walks, and 5S to remove barriers and improve productivity.',
    'responsibilities': ['Filled the warehouse helper job description, which involved picking, packing, shipping, inventory management, and cleaning equipment.'],
    'achievements': ['Saved 12% on UPS orders by staying on top of special deals.',
     'Cut down storage waste by 23% by switching to a Kanban system.'],
    'technologies': []}],
  'education': [{'degree': 'Associates Degree in Logistics and Supply Chain Fundamentals',
    'field_of_study': None,
    'institution': 'Atlanta Technical College',
    'location': 'Atlanta',
    'start_date': '2021-01',
    'end_date': '2022-07',
    'gpa': None,
    'honors': [],
    'relevant_coursework': ['Warehousing Operations',
     'Logistics and Distribution Practices',
     'Inventory Systems',
     'Supply Chain Principles']}],
  'projects': [],
  'certifications': [],
  'languages': [{'language': 'English', 'proficiency': None},
   {'language': 'Spanish', 'proficiency': None}],
  'publications': [],
  'awards': [{'title': 'Employee of the month',
    'issuer': 'Amazon',
    'date': None,
    'description': None}],
  'volunteer_experience': [],
  'interests': ['Action Cricket', 'Rugby', 'Athletics'],
  'total_experience_months': 43,
  'industry': 'Logistics & Supply Chain',
  'seniority_level': 'Mid-level'},
 'error_message': None,
 'parsing_time_seconds': 3.71349,
 'timestamp': '2025-07-25T15:19:50.614831'}
```

### Flexible Approach (Recommended)
```python
if result.success:
    name = result.resume_data.contact_info.full_name
    experience = result.resume_data.total_experience_months
    
# Quick overview (convenience method)
print(result.get_summary())

# Full data export
resume_dict = result.model_dump()
```

### Batch Processing
```python
# Process multiple resumes in parallel
file_paths = ["resume1.pdf", "resume2.docx", "resume3.pdf"]
results = parser.parse_batch(file_paths)  # returns a list of parsed resume results

# Get only the successful results
successful_resumes = parser.get_successful_resumes(results)
print(f"Parsed {len(successful_resumes)} resumes successfully")
```
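To persist batch output, each result can be serialized with `model_dump()` just like the single-resume case. A minimal sketch, assuming the results carry the `success`, `file_path`, and `model_dump()` members shown in the example output above:

```python
import json
from pathlib import Path

# Write each successfully parsed resume to its own JSON file (illustrative only)
output_dir = Path("parsed_resumes")
output_dir.mkdir(exist_ok=True)

for result in results:
    if result.success:
        out_path = output_dir / (Path(result.file_path).stem + ".json")
        out_path.write_text(json.dumps(result.model_dump(), indent=2, default=str))
```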

## 📊 Extracted Data

ResumeParser Pro extracts **20+ structured fields**:

### Contact Information
- Full name, email, phone number
- Location, LinkedIn, GitHub, portfolio
- Other social profiles

### Professional Data
- Work experience with **integer month durations**
- Education with GPA standardization
- Skills categorized by type
- Projects with technologies and outcomes
- Certifications with dates and organizations

### Metadata
- Total experience in months
- Industry classification
- Seniority level assessment
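All of these fields are available as attributes on `result.resume_data`. The sketch below uses field names taken from the example output above; it is an illustration, not the full schema:

```python
data = result.resume_data

# Work experience entries carry integer month durations
for job in data.work_experience:
    print(f"{job.job_title} @ {job.company}: {job.duration_months} months")

# Skills are grouped by category
for group in data.skills:
    print(f"{group.category}: {', '.join(group.skills)}")

# High-level metadata
print(data.industry, data.seniority_level, data.total_experience_months)
```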

## 🎯 Supported AI Providers

Since `ai-resume-parser` uses LangChain's `init_chat_model`, it supports **all LangChain-compatible providers**:

### **Major Providers:**
| Provider | Example Models | Setup |
|----------|--------|-------|
| **Google** | Gemini 2.0 Flash, Gemini Pro, Gemini 1.5 | `provider="google_genai"` |
| **OpenAI** | GPT-4o, GPT-4o-mini, GPT-4 Turbo | `provider="openai"` |
| **Anthropic** | Claude 3.5 Sonnet, Claude 3 Opus | `provider="anthropic"` |
| **Azure OpenAI** | GPT-4, GPT-3.5-turbo | `provider="azure_openai"` |
| **AWS Bedrock** | Claude, Llama, Titan | `provider="bedrock"` |
| **Cohere** | Command, Command-R | `provider="cohere"` |
| **Mistral** | Mistral Large, Mixtral | `provider="mistral"` |
| **Ollama** | Local models (Llama, CodeLlama) | `provider="ollama"` |
| **Together** | Various open-source models | `provider="together"` |

### **Usage Examples:**
```python
# Google Gemini (requires: pip install langchain-google-genai)
parser = ResumeParserPro(
    provider="google_genai",
    model_name="gemini-2.0-flash",
    api_key="your-google-api-key"
)

# Azure OpenAI
parser = ResumeParserPro(
    provider="azure_openai",
    model_name="gpt-4",
    api_key="your-azure-key"
)

# Local Ollama
parser = ResumeParserPro(
    provider="ollama",
    model_name="llama2:7b",
    api_key=""  # no API key needed for local models
)

# AWS Bedrock
parser = ResumeParserPro(
    provider="bedrock",
    model_name="anthropic.claude-3-sonnet-20240229-v1:0",
    api_key="your-aws-credentials"
)
```

**Full list**: See [LangChain Model Providers](https://python.langchain.com/docs/integrations/chat/) for complete provider support.
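In practice you will usually want to load credentials from the environment instead of hard-coding them. A minimal sketch (the environment variable name here is just an example, not something the library requires):

```python
import os

from resumeparser_pro import ResumeParserPro

# Example variable name; export it yourself before running
api_key = os.environ["GOOGLE_API_KEY"]

parser = ResumeParserPro(
    provider="google_genai",
    model_name="gemini-2.0-flash",
    api_key=api_key
)
```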


## 📈 Performance

- **Speed**: ~3-5 seconds per resume, depending on the LLM used (see the timing sketch below)
- **Parallel Processing**: 5-10x faster for batch operations
- **Accuracy**: 95%+ field extraction accuracy
- **File Support**: PDF, DOCX, TXT formats
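Each result reports its own `parsing_time_seconds` (visible in the example output above), so you can measure throughput on your own documents rather than relying on the figures listed here:

```python
# Average per-resume parsing time for a batch, using the timing field each result carries
successful = [r for r in results if r.success]
if successful:
    avg_seconds = sum(r.parsing_time_seconds for r in successful) / len(successful)
    print(f"Average parsing time: {avg_seconds:.2f}s over {len(successful)} resumes")
```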

## πŸ› οΈ Advanced Features

### Custom Configuration
```python
parser = ResumeParserPro(
    provider="openai",
    model_name="gpt-4o-mini",
    api_key="your-api-key",
    max_workers=10,   # number of parallel processing workers
    temperature=0.1   # low temperature for more consistent output
)
```

### Error Handling
```python
results = parser.parse_batch(file_paths, include_failed=True)

# Get a processing summary
summary = parser.get_summary(results)
print(f"Success rate: {summary['success_rate']:.1f}%")
print(f"Failed files: {len(summary['failed_files'])}")
```
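Failed results keep `success=False` and an `error_message`, so they can be logged or retried individually. A minimal sketch based on the result fields shown in the example output:

```python
# Inspect failures one by one; field names follow the example result schema above
failed = [r for r in results if not r.success]
for r in failed:
    print(f"{r.file_path}: {r.error_message}")

# Optionally retry the failed files individually
retried = [parser.parse_resume(r.file_path) for r in failed]
```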

## 📋 Requirements

- Python 3.8+
- API key from supported provider
- Optional: PyMuPDF, python-docx for enhanced file support

## 🤝 Contributing

Contributions welcome! Please read our contributing guidelines.

## 📄 License

MIT License - see LICENSE file for details.

## 🆘 Support

- 📖 [Documentation](https://github.com/Ruthikr/ai-resume-parser/tree/main/docs)
- 🐛 [Issue Tracker](https://github.com/Ruthikr/ai-resume-parser/issues)
- 💬 [Discussions](https://github.com/Ruthikr/ai-resume-parser/discussions)

---

**Built with ❤️ for the recruitment and HR community**

            
