**Package**: ai-test-agent  
**Version**: 0.1.0  
**Summary**: AI-powered test automation agent  
**Requires Python**: >=3.12, <4.0  
**Uploaded**: 2025-10-20
# AI Test Agent

An AI-powered test automation agent that analyzes code, generates tests, and produces detailed reports.

## Features

- **Project Analysis**: Analyzes project structure and identifies components for testing
- **Test Generation**: Generates comprehensive tests for various programming languages
- **Test Execution**: Runs tests and collects results
- **Reporting**: Produces detailed test reports in multiple formats
- **Coverage Analysis**: Generates code coverage reports
- **CLI Interface**: Command-line interface for easy integration
- **Docker Support**: Containerized for easy deployment

## Supported Languages

- Python
- JavaScript/TypeScript
- Java

## Installation

### Option 1: Pip / Pipx (recommended)

Install the CLI globally so it is available from any project directory.

```bash
pipx install ai-test-agent
# or, if you are working from a checked-out repo
pipx install .
```

You can also install into an existing virtual environment:

```bash
pip install ai-test-agent
```

### Option 2: Using Poetry (development)

```bash
git clone https://github.com/yourusername/ai-test-agent.git
cd ai-test-agent
poetry install
```

## Using Docker

```bash
docker build -t ai-test-agent .
docker run -it ai-test-agent
```

## Quickstart Workflow

1. **Initialize your project**  
   From inside the project directory you want the agent to manage:
   ```bash
   ai-test-agent init
   ```
   This creates `.aitestagent/config.json` with sensible defaults (paths, thresholds, model names).  
   Rerun `init` at any time to regenerate or adjust the manifest.

2. **Analyze the codebase**  
   When run inside an initialized project, you do not need to pass `--project-path`:
   ```bash
   ai-test-agent analyze
   ```
   Results are written to the configured analysis output path (defaults to `analysis.json`).

3. **Generate tests**
   ```bash
   ai-test-agent generate
   ```
   Tests are written to the configured tests directory (defaults to `tests/`).

4. **Execute tests**
   ```bash
   ai-test-agent run
   ```
   Test results and coverage thresholds respect the manifest configuration.

5. **Produce a report**
   ```bash
   ai-test-agent report
   ```

6. **Run everything end-to-end**
   ```bash
   ai-test-agent all
   ```

To override a setting for a single run (e.g., a different output path), pass the corresponding flag; it takes precedence over the manifest for that invocation.
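
This precedence (explicit flag, then manifest entry, then built-in default) can be sketched as follows. The option names and manifest keys here are illustrative, not the agent's actual internals:

```python
def resolve_option(cli_value, manifest: dict, key: str, default):
    """Return the effective setting: CLI flag > manifest entry > built-in default."""
    if cli_value is not None:      # flag passed on this invocation
        return cli_value
    if key in manifest:            # value stored by `ai-test-agent init`
        return manifest[key]
    return default

# Hypothetical manifest, as `.aitestagent/config.json` might contain:
manifest = {"analysis_output": "analysis.json", "tests_dir": "tests/"}

print(resolve_option(None, manifest, "analysis_output", "analysis.json"))       # manifest wins
print(resolve_option("out/custom.json", manifest, "analysis_output", "x.json")) # flag wins
```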

## Commands At a Glance

| Command | Description |
| ------- | ----------- |
| `ai-test-agent init` | Create `.aitestagent/config.json` in the current project. |
| `ai-test-agent analyze` | Parse the project and write the structural analysis. |
| `ai-test-agent generate` | Generate tests into the configured output directory. |
| `ai-test-agent run` | Execute tests, collect results, and enforce coverage thresholds. |
| `ai-test-agent report` | Build an HTML report from stored results. |
| `ai-test-agent all` | Run analyze → generate → run → report in one go. |
| `ai-test-agent debug` | Attempt iterative fixes when tests fail. |
| `ai-test-agent interactive` | (Placeholder) Future interactive workflow. |
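
The threshold enforcement that `run` performs can be pictured with a small sketch; the exit-code behaviour shown is an assumption about how such a gate typically works, not the agent's actual implementation:

```python
def enforce_coverage(measured_pct: float, threshold_pct: float) -> int:
    """Return a process exit code: 0 if coverage meets the threshold, 1 otherwise."""
    if measured_pct >= threshold_pct:
        print(f"coverage {measured_pct:.1f}% >= threshold {threshold_pct:.1f}%: OK")
        return 0
    print(f"coverage {measured_pct:.1f}% < threshold {threshold_pct:.1f}%: FAIL")
    return 1
```

A nonzero exit code lets CI pipelines fail the build when coverage regresses.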

From outside an initialized project, you can still supply paths explicitly, e.g.:
```bash
ai-test-agent analyze --project-path /path/to/project
```

For a detailed option reference, see [`docs/CLI_REFERENCE.md`](docs/CLI_REFERENCE.md).

## Project Manifest

The manifest created by `init` lives at `.aitestagent/config.json` and stores relative paths
and defaults for:

- project root
- tests/analysis/results/report output paths
- coverage thresholds
- default LLM model

Feel free to edit this file directly or rerun `ai-test-agent init` to regenerate it.  
All commands look for the manifest in the current directory or any parent directory.
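
That upward search is the same pattern tools like git use to find their repository root. A minimal sketch, assuming only the manifest path `.aitestagent/config.json` documented above:

```python
from pathlib import Path
from typing import Optional

def find_manifest(start: Path) -> Optional[Path]:
    """Walk from `start` up to the filesystem root looking for .aitestagent/config.json."""
    for directory in [start, *start.parents]:
        candidate = directory / ".aitestagent" / "config.json"
        if candidate.is_file():
            return candidate
    return None  # no initialized project found on this path
```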

## Configuration

Additional runtime settings can be provided via environment variables (particularly for LLM access):

- **OLLAMA_HOST**: Host for the Ollama service
- **OLLAMA_PORT**: Port for the Ollama service
- **DEFAULT_MODEL**: Default LLM model to use

The CLI will also create `.aitestagent/.env.example` during `init` for convenience.
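
Reading these variables with fallbacks might look like the following sketch; the host/port defaults are Ollama's usual ones, and the model name is purely illustrative:

```python
import os

def llm_settings() -> dict:
    """Collect LLM connection settings from the environment, with fallback defaults."""
    return {
        "host": os.environ.get("OLLAMA_HOST", "localhost"),
        "port": int(os.environ.get("OLLAMA_PORT", "11434")),
        "model": os.environ.get("DEFAULT_MODEL", "llama3"),  # illustrative default
    }
```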

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests
5. Submit a pull request


            
