aac-req-qa

Name: aac-req-qa
Version: 0.3.1
Home page: https://github.com/DevOps-MBSE/AaC-Req-QA
Upload time: 2024-07-09 18:37:50
License: MIT License
Keywords: mbse, requirements, qa

# AaC-Req-QA

An AaC plugin to perform automated quality checks for the shall statements in your AaC model.

This plugin will scan your architecture for any `req` entries and will use an LLM to evaluate
the quality of the `shall` field.  The plugin is configured to evaluate your shall statement
using the following attributes:

- Unambiguous: The requirement should be simple, direct, and precise with no room for interpretation.
- Testable (verifiable): The requirement should be testable, and it should be possible to verify that the system meets the requirement.  Preferably, the requirement should be verifiable by automated acceptance test, automated analysis, or demonstration rather than inspection.  If inspection is the only reasonable means of verification, it will receive a lower rating.
- Clear: The requirement should be concise, terse, simple, and precise.
- Correct:  The requirement should not have any false or invalid assertions.
- Understandable:  The requirement should be easily understood by all stakeholders.
- Feasible: The requirement should be realistic and achievable.
- Independent:  The requirement should stand alone and not be dependent on other requirements.
- Atomic: The requirement should be a single, discrete, and indivisible statement.
- Necessary: The requirement should be necessary to the solution and not be redundant or superfluous.
- Implementation-free: The requirement should not specify how the solution will be implemented.  It should only specify what the solution should do, not how it should do it.

If the `shall` statement is evaluated to be of sufficient quality, `aac check` will pass.  Otherwise, you will receive a
failure message produced by the AI with an assessment of each attribute and an overall score.  An overall score of
`C (Medium)` or lower from the AI results in failure.
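
Once the plugin is installed and configured (see Usage below), the evaluation runs as part of the normal `aac check` command.  The file name in the sketch below is a hypothetical placeholder for your own AaC model:
```bash
# Run the standard AaC check; the Req-QA constraint evaluates every `shall` it finds.
# `spec.yaml` is a placeholder for your own AaC model file.
aac check spec.yaml
```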

## Usage

If you haven't already, install Architecture-as-Code (AaC):
```bash
pip install aac
```
Next, install the AaC-Req-QA plugin:
```bash
pip install aac-req-qa
```

Set the environment variables needed to access an OpenAI-compatible endpoint.  This may be a commercial
endpoint in OpenAI or Azure OpenAI, or a self-hosted endpoint using a tool like Ollama or vLLM.

- AAC_AI_URL:  The URL of the LLM endpoint.  Example:  https://localhost:11434/v1
- AAC_AI_MODEL:  The name of the LLM model.  Example:  `mistral` for a local endpoint (e.g., Ollama), `gpt-4` for OpenAI or Azure
- AAC_AI_KEY:  The access key for the API.  If using a local model, any value will work, but it must not be empty or missing.  Example: not-a-real-key
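
For example, a local Ollama setup might be configured like this (the values are illustrative placeholders taken from the examples above, not real credentials):
```bash
# Point the plugin at a local Ollama endpoint (example values only)
export AAC_AI_URL="https://localhost:11434/v1"
export AAC_AI_MODEL="mistral"
export AAC_AI_KEY="not-a-real-key"  # any non-empty value works for a local model
```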

If you wish to use an Azure OpenAI endpoint, set the following environment variables.

- AAC_AI_TYPE:  Set to "Azure"; otherwise the standard OpenAI client will be used.
- AAC_AI_API_VERSION:  Set to your desired / supported API version.  Example: 2023-12-01-preview

If you have a proxy, set the proxy environment variables.

- AAC_HTTP_PROXY:  The HTTP proxy.  Example:  http://userid:password@myproxy.myorg.com:80
- AAC_HTTPS_PROXY:  The HTTPS proxy.  Example:  https://userid:password@myproxy.myorg.com:443
- AAC_SSL_VERIFY:  Allows you to disable SSL verification.  Value must be `true` or `false` (default: `true`).
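
If you are targeting Azure OpenAI or working behind a proxy, the additional exports might look like this (placeholder values again):
```bash
# Only needed when using Azure OpenAI (example API version from above)
export AAC_AI_TYPE="Azure"
export AAC_AI_API_VERSION="2023-12-01-preview"

# Only needed when behind a proxy (placeholder credentials and hosts)
export AAC_HTTP_PROXY="http://userid:password@myproxy.myorg.com:80"
export AAC_HTTPS_PROXY="https://userid:password@myproxy.myorg.com:443"
export AAC_SSL_VERIFY="true"
```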

Although this is a bit cumbersome, it is necessary as there is no other way to provide configuration data
within AaC, particularly for constraint plugins.  Remember to protect your secrets when configuring these
environment variables.

### Eval-Req Command

This plugin provides a new command called `eval-req` that will execute the requirements QA
on a specified AaC file.  This will perform an evaluation of each `req` in the AaC file based on
INCOSE requirements quality guidelines, and will give you all the AI output for each requirement.
Be aware that this performs a requirement-by-requirement evaluation with no context of surrounding
requirements in the specification.  It is often very difficult to meet all INCOSE guidelines
in a single requirement statement.  It also performs a separate AI evaluation call for each
requirement, which can take a lot of time.  If you wish to perform fewer AI calls and evaluate your
requirements as a set, use the `eval-spec` command instead.
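
Assuming the command is exposed through the `aac` CLI like other plugin commands, and using a hypothetical `spec.yaml` model file, an invocation might look like:
```bash
# Evaluate each `req` entry in a (hypothetical) spec.yaml file, one AI call per requirement
aac eval-req spec.yaml
```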

### Eval-Spec Command

This plugin provides a new command called `eval-spec` that will execute the requirements specification QA
on a specified AaC file.  This will perform a holistic review of the requirement specification and all
of the included requirements against the INCOSE requirements quality guidelines.  For instances where
the quality of requirements needs to be assessed in context, this is a good solution.
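
As with `eval-req`, the sketch below assumes the command is invoked through the `aac` CLI and uses the same hypothetical `spec.yaml` file:
```bash
# Review the requirement set as a whole, using fewer AI calls than eval-req
aac eval-spec spec.yaml
```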


## Caveat

Because this plugin uses an LLM, the evaluation is non-deterministic and cannot be guaranteed
to perform consistently.  The LLM is tuned to reduce variation and provide reliable, repeatable
performance to the greatest extent possible, but no guarantees can be made with the current
state-of-the-art LLM models.

Performance is completely dependent on the performance of the LLM provided by the endpoint.
This has been tested with Azure OpenAI using GPT-4 as well as Mistral 7B running within Ollama,
with acceptable performance in both cases.  Performance with other models may be better or worse.

## Attribution

We're adapting the [analyze claims](https://github.com/danielmiessler/fabric/blob/main/patterns/analyze_claims/system.md) pattern
from the open source [Fabric project](https://github.com/danielmiessler/fabric) to evaluate requirements.  Huge thanks to the
Fabric team for the innovation and examples.

            
