ntqr

Name: ntqr
Version: 0.4.2.3
Summary: Tools for the logic of evaluation using unlabeled data
Upload time: 2024-10-31 13:59:07
Home page: None
Maintainer: None
Docs URL: None
Author: None
Requires Python: None
License: None
Keywords: evaluation, algebra, machine learning, logic, ai safety
Requirements: No requirements were recorded.
# Logic tools to make your AI safer

![NTQR](./img/NTQRpt24.png)

```console
~$: pip install ntqr
```

:::{figure-md}
![Prevalence estimates](./img/threeLLMsBIGBenchMistakeMultistepArithmetic.png)

**The only evaluations possible for three LLMs (Claude, Mistral, ChatGPT) that
graded a fourth one (PaLM2) doing the multistep-arithmetic test in the BIG-Bench-Mistake
dataset. Using the axiom for the single binary classifier, we can reduce each LLM's possible
evaluations as a grader of PaLM2 to the circumscribed planes inside the space of
all possible evaluations.**
:::


Evaluation of noisy decision makers in unsupervised settings is a fundamental
safety engineering problem. This library contains algorithms that treat this
problem from a logical point of view by considering the axiomatic relationships
that must govern how well a group can be performing given how its members agree
and disagree in their answers to a common test.

A simple demonstration of the power of logic to clarify possible group evaluations
for noisy experts is given by the example of two of them taking a common test
and disagreeing on at least one answer. We can immediately deduce that it is
impossible for **both** to be 100% correct. Notice the power of this elimination
argument. We do not need to know anything about the test or its correct answers.
By just looking at how they agree and disagree, we can immediately deduce what
group evaluations are impossible. The NTQR package carries this out by formulating
algebraic logical axioms that must be obeyed by all evaluations of a given type
or model.
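
A minimal sketch of this elimination argument in plain Python (not the ntqr API):
on a binary test, a disagreement requires at least one of the two test takers to
be wrong on that question, so the disagreement count can never exceed their
combined errors.

```python
from itertools import product

def surviving_pair_evaluations(q, disagreements):
    """Pairs of correct-answer counts (c1, c2) on a q-question binary test
    that are not eliminated by the observed number of disagreements.

    Illustrative sketch only, not the ntqr API: each disagreement needs at
    least one error, so disagreements can never exceed combined errors.
    """
    return [
        (c1, c2)
        for c1, c2 in product(range(q + 1), repeat=2)
        if disagreements <= (q - c1) + (q - c2)
    ]

# Two experts disagree on one answer out of 10: "both 100% correct" is
# eliminated without ever seeing the answer key.
survivors = surviving_pair_evaluations(q=10, disagreements=1)
assert (10, 10) not in survivors
assert (10, 9) in survivors
```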

Using logic as an aid for safety is well known in other industries. Formal
software verification is one requirement for certifying the software that
carries out emergency shutdowns of nuclear plants. Autonomous vehicles are
field tested using linear temporal logic.

But logic seems out of place with how we engineer AI agents nowadays,
mostly as statistical algorithms that rely on a well-developed set
of ideas from probability theory. It may also remind some readers of the
symbolic AI systems that were developed before probability theory
became the basic tool for building algorithms: systems that used knowledge
bases and tried to reason about the world and their actions. The logic of
unsupervised evaluation we are talking about is neither of those.

It is not a logic about how to make decisions. That is what the symbolic
systems try to do, and they do it using a **world model**. World models are
hard to define except in simulated worlds. Evaluations are not like that: a
binary test is a binary test in any domain. The logic of unsupervised evaluation
requires only **evaluation models**, and these are trivial to specify and construct.
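
For concreteness, here is one way such an evaluation model could be written down
for a binary test (an illustrative sketch, not the data structures the ntqr
package actually defines): it only needs the number of questions of each label
and how many of each the classifier answered correctly.

```python
from dataclasses import dataclass

@dataclass
class BinaryEvaluationModel:
    """One candidate evaluation of a classifier on a binary test.

    Illustrative sketch only; ntqr defines its own evaluation classes.
    """
    q_a: int        # questions whose true label is 'a'
    q_b: int        # questions whose true label is 'b'
    correct_a: int  # 'a' questions the classifier labeled correctly
    correct_b: int  # 'b' questions the classifier labeled correctly

    def is_possible(self) -> bool:
        # The only requirement is that the counts stay in range --
        # this is what makes evaluation models trivial to specify.
        return 0 <= self.correct_a <= self.q_a and 0 <= self.correct_b <= self.q_b
```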

The current version (v0.4) builds out the axioms and logic for binary
classifiers and responders (R=2) and 3-class classifiers (R=3). It takes a
big step by using the 3-class case to go back and rewrite the 2-class (binary)
case. In the process, it has become clear how to create an alarm for any number
of classes using only the algorithms here that depend on the single-classifier
axioms.
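
To illustrate the kind of counting identity the single-classifier axioms express
(a sketch assuming a binary test; see the package documentation for the exact
formulation), the number of 'a' responses a classifier produces is fixed by how
often it is right on each label, and candidate evaluations that contradict the
observed responses are eliminated:

```python
def implied_a_responses(q_a, q_b, correct_a, correct_b):
    """'a' responses implied by a candidate evaluation on a binary test.

    Sketch of the single binary classifier axiom: a classifier answers 'a'
    when it is right on an 'a' question or wrong on a 'b' question.
    """
    return correct_a + (q_b - correct_b)

# Keep only the candidate evaluations consistent with the observed responses.
q_a, q_b, observed_a = 5, 5, 6
feasible = [
    (c_a, c_b)
    for c_a in range(q_a + 1)
    for c_b in range(q_b + 1)
    if implied_a_responses(q_a, q_b, c_a, c_b) == observed_a
]
print(feasible)  # [(1, 0), (2, 1), (3, 2), (4, 3), (5, 4)]
```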

Brief guide:
1. Formal verification of unsupervised evaluations: The NTQR package is
  working out the logic for verifying unsupervised evaluations - what are
  the group evaluations that are consistent with how the test takers agree
  and disagree on multiple-choice exams? The page "Formal verification of
  evaluations" explains this further.
2. A way to stop infinite monitoring chains: Who grades the graders? Monitoring
  unsupervised test takers raises the specter of endless graders or monitors.
  By having a logic of unsupervised evaluation, we can stop those infinite
  chains. We can verify that pairs of classifiers are misaligned, for example
  (a sketch of this idea follows the list). Take a look at the "Logical Alarms"
  Jupyter notebook.
3. Jury evaluation theorems: Jury decision theorems - when does the crowd
  decide wisely? - go as far back as Condorcet's 1785 theorem proving that majority
  voting makes the crowd wiser when jury members are better than average. The NTQR
  package contains jury evaluation theorems - when does the crowd
  evaluate itself wisely? It turns out the crowd can evaluate itself more reliably
  than majority voting can decide. This has important consequences for how we should
  design safer AI systems. Check out the "Evaluation is easier than decision"
  page.
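
A sketch of the misalignment alarm from item 2 above, in plain Python rather
than the ntqr "Logical Alarms" API, where the claimed minimum score is a
hypothetical setting chosen for the example:

```python
def alarm_fires(q, disagreements, claimed_min_correct):
    """True when the observed disagreements rule out *both* classifiers
    meeting the claimed minimum number of correct answers.

    Sketch only, not the ntqr API. On a binary test two classifiers
    disagree exactly where one of them is wrong, so the disagreement
    count is bounded by their error counts; we check those necessary
    conditions for every pair of compliant evaluations.
    """
    for c1 in range(claimed_min_correct, q + 1):
        for c2 in range(claimed_min_correct, q + 1):
            e1, e2 = q - c1, q - c2
            if abs(e1 - e2) <= disagreements <= e1 + e2:
                return False  # a compliant evaluation pair survives
    return True

# Both classifiers are claimed to score at least 9/10, yet they disagree
# on 4 of the 10 questions: that claim is logically impossible.
print(alarm_fires(q=10, disagreements=4, claimed_min_correct=9))  # True
```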

> **Warning**
> This library is under heavy development and is presently meant only
> for research and educational purposes. AI safety, like any safety engineering,
> is not solvable by any one tool. These tools are meant to be part of a broader
> safety monitoring system and are not meant as standalone solutions.
> NTQR algorithms are meant to complement, not supplant, other safety tools.

            
