# bilateral-truth: Caching Bilateral Factuality Evaluation
A Python package for bilateral factuality evaluation with generalized truth values and persistent caching.
## Overview
This package implements the mathematical function:
**ζ_c: ℒ_AT → 𝒱³ × 𝒱³**
Where:
- ℒ_AT is the language of assertions
- 𝒱³ represents 3-valued logic components {t, e, f} (true, undefined, false)
- The function returns a generalized truth value <u,v> computed by bilateral evaluation
## Key Features
- **Bilateral Evaluation**: Each assertion receives a generalized truth value <u,v> where u represents verifiability and v represents refutability
- **Persistent Caching**: The evaluation function maintains a cache to avoid recomputing truth values for previously evaluated assertions
- **3-Valued Logic**: Supports true (t), undefined (e), and false (f) truth value components
- **Extensible Evaluation**: Custom evaluation functions can be provided for domain-specific logic (see the sketch below)
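For illustration, a custom evaluator is just a callable from an assertion to a generalized truth value. The sketch below uses the `Assertion`, `GeneralizedTruthValue`, and `TruthValueComponent` interfaces documented later in this README; the lookup table and the use of `str(assertion)` as a key are illustrative assumptions, not part of the package API.

```python
from bilateral_truth import (
    Assertion,
    GeneralizedTruthValue,
    TruthValueComponent,
    zeta_c,
)

# Hypothetical domain knowledge; illustrative only
KNOWN_FACTS = {"The capital of France is Paris"}

def my_evaluator(assertion: Assertion) -> GeneralizedTruthValue:
    # Assumes str(assertion) yields the underlying statement
    if str(assertion) in KNOWN_FACTS:
        # Verifiable, not refutable: classical true, <t,f>
        return GeneralizedTruthValue(TruthValueComponent.TRUE, TruthValueComponent.FALSE)
    # Neither settled: undefined on both sides, <e,e>
    return GeneralizedTruthValue(TruthValueComponent.UNDEFINED, TruthValueComponent.UNDEFINED)

result = zeta_c(Assertion("The capital of France is Paris"), my_evaluator)
```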
## Installation
### From PyPI (Recommended)
```bash
# Core package with mock evaluator
pip install bilateral-truth

# With OpenAI support
pip install bilateral-truth[openai]

# With Anthropic (Claude) support
pip install bilateral-truth[anthropic]

# With all LLM providers
pip install bilateral-truth[all]
```
### Development Setup
#### Option 1: Automated Setup (Recommended)
```bash
# Set up virtual environment and install everything
./setup_venv.sh

# Activate the virtual environment
source venv/bin/activate
```
#### Option 2: Manual Setup
```bash
# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate

# Install the package in development mode with all dependencies
pip install -e .[all,dev]
```
## Quick Start
```python
from bilateral_truth import Assertion, zeta_c, create_llm_evaluator

# Create an LLM evaluator (requires API key)
evaluator = create_llm_evaluator('openai', model='gpt-4')
# or: evaluator = create_llm_evaluator('anthropic', model='claude-sonnet-4-20250514')
# or: evaluator = create_llm_evaluator('mock')  # for testing

# Create assertions
assertion1 = Assertion("The capital of France is Paris")
assertion2 = Assertion("loves", "alice", "bob")
assertion3 = Assertion("It will rain tomorrow")

# Evaluate using ζ_c with LLM-based bilateral assessment
result1 = zeta_c(assertion1, evaluator.evaluate_bilateral)
result2 = zeta_c(assertion2, evaluator.evaluate_bilateral)
result3 = zeta_c(assertion3, evaluator.evaluate_bilateral)

print(f"zeta_c({assertion1}) = {result1}")
print(f"zeta_c({assertion2}) = {result2}")
print(f"zeta_c({assertion3}) = {result3}")
```
## Core Components
### Generalized Truth Values
```python
from bilateral_truth import (
    GeneralizedTruthValue,
    TruthValueComponent,
    EpistemicPolicy,
)

# Classical values
classical_true = GeneralizedTruthValue(TruthValueComponent.TRUE, TruthValueComponent.FALSE)          # <t,f>
classical_false = GeneralizedTruthValue(TruthValueComponent.FALSE, TruthValueComponent.TRUE)         # <f,t>
undefined_val = GeneralizedTruthValue(TruthValueComponent.UNDEFINED, TruthValueComponent.UNDEFINED)  # <e,e>

# Project to 3-valued logic
projected_true = classical_true.project(EpistemicPolicy.CLASSICAL)       # t
projected_false = classical_false.project(EpistemicPolicy.CLASSICAL)     # f
projected_undefined = undefined_val.project(EpistemicPolicy.CLASSICAL)   # e

# Custom combinations
custom_val = GeneralizedTruthValue(
    TruthValueComponent.TRUE,
    TruthValueComponent.UNDEFINED,
)  # <t,e>
```
### Assertions
```python
from bilateral_truth import Assertion

# Simple statement
statement = Assertion("The sky is blue")

# Predicate with arguments
loves = Assertion("loves", "alice", "bob")

# With named arguments
distance = Assertion(
    "distance",
    start="NYC",
    end="LA",
    value=2500,
    unit="miles",
)

# Natural language statements
weather = Assertion("It will rain tomorrow")
fact = Assertion("The capital of France is Paris")
```
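Because `zeta_c` caches results keyed by assertion (see Caching Behavior below), two assertions built from the same statement and arguments should hit the same cache entry. A quick check, assuming `Assertion` implements value-based equality and hashing (plausible for a cache key, but an assumption here):

```python
from bilateral_truth import Assertion

a = Assertion("loves", "alice", "bob")
b = Assertion("loves", "alice", "bob")

# Assumed value-based equality: equal content, equal cache key
print(a == b)  # expected True if Assertion compares by value
```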
### Caching Behavior
The `zeta_c` function implements the mathematical definition:
```
ζ_c(φ) = {
    c(φ)    if φ ∈ dom(c)
    ζ(φ)    otherwise, and c := c ∪ {(φ, ζ(φ))}
}
```
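Read operationally, this is memoization: look up φ in the cache, otherwise evaluate ζ and extend the cache. A minimal sketch of that logic, assuming a plain dict as a stand-in for the package's `ZetaCache`:

```python
_cache = {}  # maps φ → ζ(φ); stand-in for the package's ZetaCache

def zeta_c_sketch(assertion, zeta):
    """Return c(φ) if φ ∈ dom(c); else compute ζ(φ) and extend the cache."""
    if assertion in _cache:           # φ ∈ dom(c)
        return _cache[assertion]      # c(φ)
    value = zeta(assertion)           # ζ(φ)
    _cache[assertion] = value         # c := c ∪ {(φ, ζ(φ))}
    return value
```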
```python
from bilateral_truth import Assertion, zeta_c, get_cache_size, clear_cache, create_llm_evaluator

evaluator = create_llm_evaluator('mock')
assertion = Assertion("test")

# First evaluation computes and caches
result1 = zeta_c(assertion, evaluator.evaluate_bilateral)
print(f"Cache size: {get_cache_size()}")  # 1

# Second evaluation uses the cache
result2 = zeta_c(assertion, evaluator.evaluate_bilateral)
print(f"Same result: {result1 == result2}")  # True
print(f"Cache size: {get_cache_size()}")  # Still 1
```
### LLM-Based Bilateral Evaluation
```python
# Set up environment variables first:
# export OPENAI_API_KEY='your-key'
# export ANTHROPIC_API_KEY='your-key'

from bilateral_truth import zeta_c, create_llm_evaluator, Assertion

# Create real LLM evaluators
openai_evaluator = create_llm_evaluator('openai', model='gpt-4')
claude_evaluator = create_llm_evaluator('anthropic')

# Or use a mock evaluator for testing/development
mock_evaluator = create_llm_evaluator('mock')

# The LLM will assess both verifiability and refutability
assertion = Assertion("The Earth is round")
result = zeta_c(assertion, openai_evaluator.evaluate_bilateral)

# The LLM receives a prompt asking it to evaluate:
# 1. Can this statement be verified as true? (verifiability)
# 2. Can this statement be refuted as false? (refutability)
# and returns a structured <u,v> response
```
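However the prompt is phrased, the model's answer must be reduced to two components from {t, e, f}. A hypothetical parser for a response rendered as a `<u,v>` string such as `"<t,f>"` (this literal format is an assumption for illustration, not the package's actual response format):

```python
from bilateral_truth import GeneralizedTruthValue, TruthValueComponent

_COMPONENTS = {
    "t": TruthValueComponent.TRUE,
    "e": TruthValueComponent.UNDEFINED,
    "f": TruthValueComponent.FALSE,
}

def parse_bilateral(response: str) -> GeneralizedTruthValue:
    """Parse a hypothetical '<u,v>' string, e.g. '<t,f>', into a truth value."""
    u, v = response.strip().strip("<>").split(",")
    return GeneralizedTruthValue(_COMPONENTS[u.strip()], _COMPONENTS[v.strip()])

print(parse_bilateral("<t,f>"))  # classical true
```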
## API Reference
### Functions
- **`zeta(assertion, evaluator)`**: Base bilateral evaluation function (requires LLM evaluator)
- **`zeta_c(assertion, evaluator, cache=None)`**: Cached bilateral evaluation function
- **`clear_cache()`**: Clear the global cache
- **`get_cache_size()`**: Get the number of cached entries
- **`create_llm_evaluator(provider, **kwargs)`**: Factory for creating LLM evaluators
### Classes
- **`Assertion(statement, *args, **kwargs)`**: Represents natural language assertions or predicates
- **`GeneralizedTruthValue(u, v)`**: Represents <u,v> truth values
- **`TruthValueComponent`**: Enum for t, e, f values
- **`ZetaCache`**: Cache implementation for `zeta_c` (see the sketch after this list)
- **`OpenAIEvaluator`**: LLM evaluator using OpenAI's API
- **`AnthropicEvaluator`**: LLM evaluator using Anthropic's API
- **`MockLLMEvaluator`**: Mock evaluator for testing/development
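Since `zeta_c` accepts an optional `cache` argument, evaluations can be isolated from the global cache. A sketch, assuming `ZetaCache()` takes no constructor arguments and can be passed directly as `cache` (check the class itself for its actual interface):

```python
from bilateral_truth import Assertion, ZetaCache, zeta_c, create_llm_evaluator

evaluator = create_llm_evaluator('mock')
local_cache = ZetaCache()  # assumed no-argument constructor

# Cached in local_cache rather than the global cache
result = zeta_c(Assertion("test"), evaluator.evaluate_bilateral, cache=local_cache)
```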
## Command Line Interface
After installing the package (see Installation above), use the `bilateral-truth` command:
```bash
# Interactive mode with GPT-4 (requires OPENAI_API_KEY)
bilateral-truth --model gpt-4 --interactive
# Single assertion evaluation with Claude (requires ANTHROPIC_API_KEY)
bilateral-truth --model claude "The capital of France is Paris"
# Use OpenRouter with Llama model (requires OPENROUTER_API_KEY)
bilateral-truth --model llama3-70b "Climate change is real"
# Use mock model for testing (no API key needed)
bilateral-truth --model mock "The sky is blue"
# Use majority voting with 5 samples for more robust results
bilateral-truth --model gpt-4 --samples 5 "Climate change is real"
# Use pessimistic tiebreaking with even number of samples
bilateral-truth --model claude --samples 4 --tiebreak pessimistic "The Earth is round"
# List all available models
bilateral-truth --list-models
# Get information about a specific model
bilateral-truth --model-info gpt-4
```
### Running Without Installation
```bash
# Use the standalone script
python cli.py -m mock "The Earth is round"
# Interactive mode with sampling
python cli.py -m mock -s 3 --tiebreak random -i
# Single evaluation with majority voting
python cli.py -m llama3 -s 5 "The sky is blue"
# Run the demo
python demo_cli.py
```
### Supported Models
The CLI supports models from multiple providers:
- **OpenAI**: GPT-4, GPT-3.5-turbo, etc.
- **Anthropic**: Claude-4 (Opus, Sonnet)
- **OpenRouter**: Llama, Mistral, Gemini, and many more models
- **Mock**: For testing and development
### API Keys
Set environment variables for the providers you want to use:
```bash
export OPENAI_API_KEY='your-openai-key'
export ANTHROPIC_API_KEY='your-anthropic-key'
export OPENROUTER_API_KEY='your-openrouter-key'
```
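From Python, a simple guard can fall back to the mock evaluator when no key is configured (plain `os.environ` usage; nothing package-specific beyond `create_llm_evaluator`):

```python
import os
from bilateral_truth import create_llm_evaluator

# Use the real provider only when its key is present
if os.environ.get("OPENAI_API_KEY"):
    evaluator = create_llm_evaluator('openai', model='gpt-4')
else:
    evaluator = create_llm_evaluator('mock')
```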
### Sampling and Majority Voting
The CLI supports more robust evaluation via multiple samples and majority voting, as described in the arXiv paper cited below:
```bash
# Single evaluation (default)
python cli.py -m gpt-4 "The sky is blue"

# Majority voting with 5 samples for more robust results
python cli.py -m gpt-4 -s 5 "Climate change is real"

# Even number of samples with tiebreaking strategies
python cli.py -m claude -s 4 --tiebreak pessimistic "The Earth is round"
python cli.py -m llama3 -s 6 --tiebreak optimistic "AI will be beneficial"
python cli.py -m mixtral -s 4 --tiebreak random "Democracy is good"
```
**Tiebreaking Strategies:**
When multiple samples produce tied votes for a component, the tiebreaking strategy determines the outcome (a code sketch follows the list):
- **`random`** (default): Randomly choose among tied components
  - Unbiased but unpredictable
  - Example: `[t,t,f,f]` → randomly pick `t` or `f`
- **`pessimistic`**: Prefer `f` (cannot verify/refute) when in doubt
  - Bias toward epistemic caution: "Better to admit uncertainty than make false claims"
  - Tends toward `<f,f>` (paracomplete/unknown) outcomes
  - Example: `[t,t,f,f]` → choose `f`
- **`optimistic`**: Prefer `t` (verified/refuted) when in doubt
  - Bias toward strong claims: "Give statements the benefit of the doubt"
  - Tends toward classical `<t,f>` or `<f,t>` outcomes
  - Example: `[t,t,f,f]` → choose `t`
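A minimal sketch of per-component majority voting with these tiebreaks (illustrative only; the CLI's internal implementation may differ):

```python
import random
from collections import Counter

def majority_component(samples, tiebreak="random"):
    """Pick the majority value from samples like ['t','t','f','f'],
    breaking ties according to the chosen strategy."""
    counts = Counter(samples)
    top = max(counts.values())
    tied = sorted(value for value, n in counts.items() if n == top)
    if len(tied) == 1:
        return tied[0]
    if tiebreak == "pessimistic":
        return "f" if "f" in tied else tied[0]
    if tiebreak == "optimistic":
        return "t" if "t" in tied else tied[0]
    return random.choice(tied)  # default: random

print(majority_component(list("ttff"), "pessimistic"))  # f
print(majority_component(list("ttff"), "optimistic"))   # t
```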
**Benefits of Sampling:**
- Reduces variance in LLM responses
- More reliable bilateral evaluation results
- Configurable confidence through sample size
- Handles ties systematically with multiple strategies
## Examples
Run the included examples:
```bash
python llm_examples.py   # LLM-based bilateral evaluation examples
python examples.py       # Legacy examples (deprecated)
python demo_cli.py       # CLI demonstration
```
## Testing
Run the test suite:
```bash
python -m pytest tests/
```
Or run individual test modules:
```bash
python -m unittest tests.test_truth_values
python -m unittest tests.test_assertions
python -m unittest tests.test_zeta_function
```
## Mathematical Background
This implementation is based on bilateral factuality evaluation as described in the research paper. The key mathematical concepts include:
1. **Generalized Truth Values**: <u,v> pairs whose components are drawn from {t, e, f}:
   - First position (u): t = verifiable, f = not verifiable, e = undefined
   - Second position (v): t = refutable, f = not refutable, e = undefined
2. **Bilateral Evaluation**: Separate assessment of verifiability (u) and refutability (v)
3. **Persistent Caching**: Immutable cache updates maintaining consistency across evaluations
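As a recap, the named values that appear throughout this README, spelled out with the package's types:

```python
from bilateral_truth import GeneralizedTruthValue, TruthValueComponent as T

GeneralizedTruthValue(T.TRUE, T.FALSE)           # <t,f>: classical true (verifiable, not refutable)
GeneralizedTruthValue(T.FALSE, T.TRUE)           # <f,t>: classical false (not verifiable, refutable)
GeneralizedTruthValue(T.FALSE, T.FALSE)          # <f,f>: paracomplete/unknown (neither verifiable nor refutable)
GeneralizedTruthValue(T.UNDEFINED, T.UNDEFINED)  # <e,e>: undefined on both sides
```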
## Requirements
- Python 3.9+
- Core package: no external dependencies (Python standard library only); the optional `openai`, `anthropic`, and `all` extras install the corresponding LLM provider libraries
## License
MIT License
## Citation
If you use this implementation in research, please cite the original paper:
[arXiv paper](https://arxiv.org/html/2507.09751v2)