# Logic Language Documentation

## Overview
The Logic Language is a domain-specific language (DSL) designed for defining soft/fuzzy logic constraints in mammography classification. It allows you to replace hard-coded constraint logic with flexible, interpretable logic scripts that can be modified without changing Python code.
## Installation
Install the package using pip:
```bash
pip install logic-lang
```
### Requirements
- Python 3.8 or higher
- PyTorch 1.9.0 or higher
- NumPy 1.20.0 or higher
### Quick Start
```python
import torch
from logic_lang import RuleInterpreter

interpreter = RuleInterpreter()
features = {"model_predictions": torch.tensor([[0.8, 0.1, 0.1]])}
script = """
expect model_predictions as predictions
constraint exactly_one(predictions) weight=1.0
"""
constraint_set = interpreter.execute(script, features)
```
## Syntax Reference
### Comments
Comments start with `#` and continue to the end of the line:
```
# This is a comment
define findings_L = mass_L | mc_L # Inline comment
```
### Variable Definitions
Define new variables using logical combinations of existing features:
```
define variable_name = expression
```
### Constant Definitions
Define constants for reusable literal values:
```
const constant_name = value
```
### Variable Expectations
Declare which variables (features) the script expects to be provided, with optional aliasing using the `as` keyword:
```
expect variable_name
expect variable1, variable2, variable3
expect original_name as alias_name
expect var1 as alias1, var2, var3 as alias3
```
Examples:
```
# Declare expected variables at the beginning of the script
expect left_birads, right_birads, mass_L, mc_L
# Or declare them individually
expect comp
expect risk_score
# Use aliasing to rename variables for consistency
expect left_birads as birads_L, right_birads as birads_R
expect mass_left as mass_L, mass_right as mass_R
# Mix aliased and non-aliased variables
expect predictions as preds, labels, confidence as conf
# Define constants for thresholds
const high_threshold = 0.8
const low_threshold = 0.2
const birads_cutoff = 4
# Basic logical operations with literals
define findings_L = mass_L | mc_L
define high_risk = risk_score > 0.7 # Using literal number
define moderate_risk = risk_score > low_threshold # Using constant
# Function calls with mixed literals and variables
define high_birads = sum(birads_L, [4, 5, 6])
define threshold_check = risk_score >= high_threshold
```
### Constraints
Define constraints that will be enforced during training:
```
constraint expression [weight=value] [transform="type"] [param=value ...]
```
Examples:
```
# Basic constraint
constraint exactly_one(birads_L)
# Constraint with weight and transform
constraint findings_L >> high_birads weight=0.7 transform="logbarrier"
# Constraint with multiple parameters
constraint exactly_one(comp) weight=1.5 transform="hinge" alpha=2.0
```
## Operators
### Logical Operators (in order of precedence, lowest to highest)
1. **Implication (`>>`)**: A >> B (if A then B)
```
constraint findings_L >> high_birads_L
```
2. **OR (`|`)**: A | B (A or B)
```
define findings = mass_L | mc_L
```
3. **XOR (`^`)**: A ^ B (A exclusive or B)
```
define exclusive = mass_L ^ mc_L
```
4. **AND (`&`)**: A & B (A and B)
```
define strict_findings = mass_L & high_confidence
```
5. **Comparison Operators**: `>`, `<`, `==`, `>=`, `<=`
```
define high_risk = risk_score > threshold_value
define similar_scores = score_a == score_b
define within_range = score >= min_val & score <= max_val
```
6. **AND_n (`& variable`)**: AND across all elements in a tensor
```
# All radiologists must agree (consensus)
define consensus = & radiologist_assessments
# All imaging modalities must show findings
define all_modalities_positive = & imaging_results
```
7. **OR_n (`| variable`)**: OR across all elements in a tensor
```
# Any radiologist found something
define any_concern = | radiologist_assessments
# Any imaging modality shows findings
define any_positive = | imaging_results
```
8. **NOT (`~`)**: ~A (not A)
```
define no_findings = ~findings_L
```
9. **Indexing (`variable[...]`)**: Access tensor elements using numpy/pytorch syntax
```
# IMPORTANT: When indexing tensors from RuleMammoLoss, you MUST account for the batch dimension!
# Tensors have shape (batch_size, ...), so the first index is always the batch dimension
# Access specific features for all batch items (CORRECT)
define birads_class_4 = features[:, 4] # All batches, class 4
define high_birads = features[:, 4:7] # All batches, classes 4-6
define view_data = assessments[:, 1, :] # All batches, view 1, all features
# Multi-dimensional indexing with batch preservation
define patient_features = batch_data[:, 0, 2] # All batches, patient 0, feature 2
define cc_view = assessments[:, :, 0] # All batches, all views, radiologist 0
# WRONG - These would try to access specific batch items instead of features:
# define birads_class_4 = features[4] # Would access batch item 4!
# define high_birads = features[4:7] # Would access batch items 4-6!
```
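Under the default Gödel semantics (min/max operations, see Multiple Semantics below), the n-ary `&`/`|` reductions above can be pictured as a min/max over the last tensor dimension. The following is a rough PyTorch sketch of that intuition, not the library's actual implementation:

```python
import torch

# Shape (batch=2, 3 radiologists); values are soft truth degrees in [0, 1]
assessments = torch.tensor([[0.9, 0.8, 0.95],
                            [0.2, 0.7, 0.6]])

# AND_n (`& assessments`) under Goedel semantics: min across the last dim
consensus = assessments.min(dim=-1).values    # tensor([0.8000, 0.2000])

# OR_n (`| assessments`) under Goedel semantics: max across the last dim
any_concern = assessments.max(dim=-1).values  # tensor([0.9500, 0.7000])
```

Note that both reductions keep the batch dimension intact, which matters for the batch-handling cautions below.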
## ⚠️ Important Cautions
### Batch Dimension Handling
When using the `RuleMammoLoss` or `RuleBasedConstraintsLoss` with tensor indexing in your logic scripts, **you must explicitly account for the batch dimension**:
```
# ✅ CORRECT: Always preserve batch dimension with ':'
define birads_4 = features[:, 4] # Access feature 4 for all batch items
define classes_4to6 = features[:, 4:7] # Access features 4-6 for all batch items
define view_cc = assessments[:, 0, :] # Access CC view for all batch items
# ❌ WRONG: These access batch items, not features!
define birads_4 = features[4] # Accesses batch item 4, not feature 4!
define classes_4to6 = features[4:7] # Accesses batch items 4-6!
```
**Why this matters:**
- `RuleMammoLoss`/`RuleBasedConstraintsLoss` pass tensors with shape `(batch_size, ...)` to the interpreter
- The first dimension is always the batch dimension
- Logic operations need to work across the entire batch
- Incorrect indexing will cause shape mismatches and unexpected behavior
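A quick shape check in PyTorch makes the difference concrete:

```python
import torch

features = torch.rand(8, 7)  # (batch_size=8, 7 BI-RADS classes)

print(features[:, 4].shape)    # torch.Size([8])    - feature 4 for every batch item
print(features[4].shape)       # torch.Size([7])    - all 7 features of batch item 4!
print(features[:, 4:7].shape)  # torch.Size([8, 3]) - features 4-6 for every batch item
```

Printing shapes like this during development is a cheap way to catch indexing mistakes before they surface as loss-time shape mismatches.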
### Tensor Shape Awareness
Always be aware of your tensor shapes when writing logic scripts:
```
# If your features have shape (B, 7) for 7 BI-RADS classes:
define high_birads = features[:, 4:] # ✅ Classes 4,5,6 for all batches
# If your assessments have shape (B, 2, 3) for 2 views, 3 radiologists:
define cc_radiologist_1 = assessments[:, 0, 1] # ✅ CC view, radiologist 1, all batches
define mlo_consensus = assessments[:, 1, :] # ✅ MLO view, all radiologists, all batches
```
### Parentheses
Use parentheses to override operator precedence:
```
define complex = (mass_L | mc_L) & ~(birads_L >> findings_L)
```
### Negative Numbers
Logic-lang supports negative numbers in all numeric contexts:
```
# Negative constants
const negative_threshold = -0.5
const offset = -10
# Negative literals in expressions
define below_zero = risk_score > -0.1
define centered = features[:, 0] >= -1.0
# Negative weights in constraints
constraint findings_L >> high_birads weight=-0.3
# Complex expressions with negative numbers
define adjusted_score = risk_score > (-threshold + 0.1)
define negative_range = score >= -5 & score <= -1
```
**Note:** Negative numbers work in:
- Constant definitions (`const neg = -5`)
- Literal values in expressions (`score > -0.5`)
- Constraint weights (`weight=-0.3`)
- Constraint parameters (`alpha=-2.0`)
- Complex arithmetic expressions (`value + (-10)`)
The unary minus operator has high precedence, so `-5 + 3` is parsed as `(-5) + 3 = -2`.
### Arithmetic Operations
Logic-lang supports basic arithmetic operations with proper precedence:
```
# Basic arithmetic in constants
const sum_result = 5 + 3 # Addition: 8
const diff_result = 10 - 3 # Subtraction: 7
const prod_result = 4 * 2 # Multiplication: 8
const div_result = 8 / 2 # Division: 4
# Complex expressions with parentheses
const complex = (5 + 3) * 2 - 1 # Result: 15
# Arithmetic with variables (tensors)
define sum_scores = score_a + score_b
define scaled_score = risk_score * 2.0
define normalized = (score - min_val) / (max_val - min_val)
# Mixed arithmetic and logical operations
define high_combined = (score_a + score_b) > threshold
define weighted_decision = prediction * weight > 0.5
```
**Operator Precedence (highest to lowest):**
1. Parentheses `()`
2. Unary operators `-, +, ~`
3. Multiplication and Division `*, /`
4. Addition and Subtraction `+, -`
5. Comparisons `>, <, ==, >=, <=`
6. Logical AND `&`
7. Logical XOR `^`
8. Logical OR `|`
9. Implication `>>`
**Type Handling:**
- **Numbers + Numbers**: Returns number (`5 + 3 = 8`)
- **Tensors + Tensors**: Returns tensor (`tensor([[2]]) + tensor([[3]]) = tensor([[5]])`)
- **Numbers + Tensors**: Returns tensor (broadcasting applies)
- **Truth + Truth**: Returns Truth object with arithmetic on underlying values
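The tensor cases follow ordinary PyTorch broadcasting; a quick check:

```python
import torch

a = torch.tensor([[2.0]])
b = torch.tensor([[3.0]])

print(a + b)                          # tensor([[5.]])      tensor + tensor
print(0.5 + b)                        # tensor([[3.5000]])  number + tensor (broadcast)
print(torch.tensor([1.0, 2.0]) * 2)   # tensor([2., 4.])    scalar scales every element
```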
## Statement Separation
### Semicolons
You can use semicolons (`;`) to separate multiple statements on a single line, similar to Python:
```
# Multiple statements on one line
expect a, b; define c = a | b; constraint c
# Mix of semicolons and newlines
const threshold = 0.5; expect risk_score
define high_risk = risk_score > threshold
constraint high_risk weight=0.8
# Multiple constants and definitions
const low = 0.2; const high = 0.8; define range_check = value >= low & value <= high
```
### Line-based Separation
Statements can also be separated by newlines (traditional approach):
```
expect findings_L, findings_R
define bilateral = findings_L & findings_R
constraint bilateral weight=0.6
```
### Trailing Semicolons
Trailing semicolons are optional and ignored:
```
expect variables;
define result = expression;
constraint result;
```
## Built-in Functions
### `sum(probabilities, indices)`
Sum probabilities for specified class indices along the last dimension:
```
define high_birads_L = sum(birads_L, [4, 5, 6])
define very_high_birads = sum(birads_L, [5, 6])
```
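Conceptually, `sum` selects the given class indices and sums over the last dimension. A PyTorch sketch of that semantics (illustrative, not necessarily the library's implementation):

```python
import torch

# (B=1, 7 BI-RADS classes), probabilities summing to 1 along the last dim
birads_L = torch.tensor([[0.05, 0.10, 0.15, 0.20, 0.25, 0.15, 0.10]])

# Equivalent of: define high_birads_L = sum(birads_L, [4, 5, 6])
high_birads_L = birads_L[..., [4, 5, 6]].sum(dim=-1)  # tensor([0.5000])
```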
### `exactly_one(probabilities)`
Create exactly-one constraint for categorical probabilities along the last dimension:
```
constraint exactly_one(birads_L) weight=1.0
```
### `mutual_exclusion(...probabilities)`
Create mutual exclusion constraint between multiple probabilities:
```
constraint mutual_exclusion(mass_L, mc_L) weight=0.5
```
### `at_least_k(probabilities, k)`
Create constraint that at least k elements must be true along the last dimension:
```
define min_two_findings = at_least_k(findings_combined, 2)
constraint min_two_findings weight=0.6
```
**Caution:** `at_least_k` uses combinatorial logic and may be slow for large tensors or high k values.
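To see why such constraints get expensive, consider what "at least k of n" means under product semantics: summing over every outcome pattern with at least k successes, which grows combinatorially in n. The sketch below illustrates the idea (an assumption about the general technique, not the library's code):

```python
import itertools
import torch

def at_least_k_product(p, k):
    """Illustrative: P(at least k of n independent soft events).
    Enumerates all index subsets of size >= k, hence the combinatorial cost."""
    n = p.shape[-1]
    total = torch.zeros(p.shape[:-1])
    for r in range(k, n + 1):
        for idx in itertools.combinations(range(n), r):
            mask = torch.zeros(n, dtype=torch.bool)
            mask[list(idx)] = True
            # Probability that exactly the events in `idx` are true
            total = total + p[..., mask].prod(-1) * (1 - p[..., ~mask]).prod(-1)
    return total

p = torch.tensor([0.9, 0.8, 0.1])
print(at_least_k_product(p, 2))  # tensor(0.7460)
```

For n elements this touches up to 2^n subsets, so keep the tensors small where these functions are applied.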
### `at_most_k(probabilities, k)`
Create constraint that at most k elements can be true along the last dimension:
```
define max_one_high_birads = at_most_k(high_birads_indicators, 1)
constraint max_one_high_birads weight=0.7
```
**Caution:** `at_most_k` uses combinatorial logic and may be slow for large tensors or high k values.
### `exactly_k(probabilities, k)`
Create constraint that exactly k elements must be true along the last dimension:
```
define exactly_two_radiologists = exactly_k(radiologist_agreement, 2)
constraint exactly_two_radiologists weight=0.8
```
**Caution:** `exactly_k` uses combinatorial logic and may be slow for large tensors or high k values.
### `threshold_implication(antecedent, consequent, threshold)`
Create threshold-based implication constraint:
```
define strong_implication = threshold_implication(high_risk_L, findings_L, 0.7)
constraint strong_implication weight=0.9
```
### `conditional_probability(condition, event, target_prob)`
Create conditional probability constraint:
```
define conditional_findings = conditional_probability(high_birads_L, findings_L, 0.85)
constraint conditional_findings weight=0.8
```
### `iff(left, right)`
Create logical biconditional (if and only if) constraint:
```
define balanced_assessment = iff(risk_L, risk_R)
constraint balanced_assessment weight=0.4
```
### `clamp(tensor, min_val, max_val)`
Clamp tensor values to specified range:
```
define clamped_mass = clamp(mass_L, 0.1, 0.9)
```
### `threshold(tensor, threshold)`
Apply threshold to tensor:
```
define binary_mass = threshold(mass_L, 0.5)
```
### `greater_than(left, right)`
Create soft greater than comparison between two tensors:
```
define high_confidence = greater_than(confidence, baseline)
```
### `less_than(left, right)`
Create soft less than comparison between two tensors:
```
define low_risk = less_than(risk_score, threshold_low)
```
### `equals(left, right)`
Create soft equality comparison between two tensors:
```
define similar_scores = equals(score_a, score_b)
```
### `threshold_constraint(tensor, threshold, operator)`
Create threshold constraint with specified comparison operator:
```
define high_birads = threshold_constraint(birads_score, 0.7, ">")
define exact_match = threshold_constraint(prediction, 0.5, "==")
define within_bounds = threshold_constraint(value, 0.3, ">=")
```
## Data Types
### Numbers
Integer or floating-point numbers can be used directly in expressions:
```
define high_risk = risk_score > 0.8
define moderate = value >= 0.3 & value <= 0.7
constraint threshold_check weight=1.5 # Literal number as parameter
```
### Strings
Text values enclosed in quotes:
```
transform="logbarrier"
transform='hinge'
const model_type = "transformer"
```
### Lists
Arrays of values:
```
[1, 2, 3]
[4, 5, 6]
const important_classes = [4, 5, 6] # Can store list constants
```
### Mixed Type Expressions
The logic language automatically handles mixed types in expressions:
```
# Tensor compared with literal number
define high_values = predictions > 0.5
# Tensor compared with constant
const threshold = 0.7
define above_threshold = scores >= threshold
# Combining constants and variables
const low_cut = 0.2
const high_cut = 0.8
define in_range = (values >= low_cut) & (values <= high_cut)
```
## Constraint Parameters
### `weight` (float)
Relative importance of the constraint:
```
constraint exactly_one(birads_L) weight=2.0 # Higher weight = more important
```
### `transform` (string)
Loss transformation method:
- `"logbarrier"`: Logarithmic barrier (default, smooth penalties)
- `"hinge"`: Hinge loss (softer penalties)
- `"linear"`: Linear loss (proportional penalties)
```
constraint findings >> high_birads transform="hinge"
```
### Custom Parameters
Additional parameters specific to constraint types:
```
constraint exactly_one(birads_L) weight=1.0 alpha=2.0 beta=0.5
```
## Complete Example
```
# Mammography Constraint Rules
# ============================
# Declare expected variables from model output
expect mass_L, mass_R, mc_L, mc_R
expect birads_L, birads_R, birads_score_L, birads_score_R
expect comp
# Define constants for reusable thresholds
const high_risk_threshold = 0.7
const low_risk_threshold = 0.3
const birads_high_cutoff = 4
const birads_very_high_cutoff = 5
# Feature definitions - combine findings per breast
define findings_L = mass_L | mc_L
define findings_R = mass_R | mc_R
# BI-RADS probability groups using constants
define high_birads_L = sum(birads_L, [4, 5, 6])
define high_birads_R = sum(birads_R, [4, 5, 6])
define very_high_birads_L = sum(birads_L, [5, 6])
define very_high_birads_R = sum(birads_R, [5, 6])
define low_birads_L = sum(birads_L, [1, 2])
define low_birads_R = sum(birads_R, [1, 2])
# Threshold-based risk assessments using literals and constants
define high_risk_L = birads_score_L > high_risk_threshold
define high_risk_R = birads_score_R > high_risk_threshold
define very_low_risk_L = birads_score_L < low_risk_threshold
define very_low_risk_R = birads_score_R < low_risk_threshold
define balanced_assessment = equals(birads_score_L, birads_score_R)
# Range constraints using multiple comparisons with literals
define valid_risk_range_L = (birads_score_L >= 0.0) & (birads_score_L <= 1.0)
define valid_risk_range_R = (birads_score_R >= 0.0) & (birads_score_R <= 1.0)
# No findings (negation of findings)
define no_findings_L = ~findings_L
define no_findings_R = ~findings_R
# Categorical exclusivity constraints
constraint exactly_one(birads_L) weight=1.0 transform="logbarrier"
constraint exactly_one(birads_R) weight=1.0 transform="logbarrier"
constraint exactly_one(comp) weight=0.7 transform="logbarrier"
# Logical implication constraints using threshold variables
constraint high_risk_L >> findings_L weight=0.8 transform="logbarrier"
constraint high_risk_R >> findings_R weight=0.8 transform="logbarrier"
# Very High BI-RADS (5-6) -> Findings
constraint very_high_birads_L >> findings_L weight=0.7 transform="logbarrier"
constraint very_high_birads_R >> findings_R weight=0.7 transform="logbarrier"
# Low BI-RADS with literal thresholds -> No findings (gentle constraint)
constraint very_low_risk_L >> no_findings_L weight=0.3 transform="logbarrier"
constraint very_low_risk_R >> no_findings_R weight=0.3 transform="logbarrier"
# Range validation constraints
constraint valid_risk_range_L weight=2.0 transform="logbarrier"
constraint valid_risk_range_R weight=2.0 transform="logbarrier"
# Comparison-based constraints
constraint balanced_assessment weight=0.4 transform="hinge"
```
## Usage Patterns
### 1. Variable Expectations
Declare required variables at the beginning of scripts for better error handling:
```
# Declare all expected model outputs in one line
expect left_mass_prob, right_mass_prob, left_birads, right_birads, composition
# Now use these variables with confidence
define findings_L = left_mass_prob > 0.5
constraint exactly_one(left_birads)
```
### 2. Categorical Constraints
Ensure exactly one category is selected:
```
constraint exactly_one(birads_L) weight=1.0
constraint exactly_one(composition) weight=0.8
```
### 3. Implication Rules
Model domain knowledge as if-then relationships:
```
# If findings present, then high BI-RADS likely
constraint findings_L >> high_birads_L weight=0.7
# If very high BI-RADS, then findings must be present
constraint very_high_birads_L >> findings_L weight=0.8
```
### 4. Mutual Exclusion
Prevent conflicting classifications:
```
constraint mutual_exclusion(mass_L, calc_L) weight=0.5
```
### 5. Threshold Rules
Apply domain-specific thresholds:
```
define suspicious = threshold(combined_score, 0.7)
constraint suspicious >> high_birads weight=0.6
```
### 6. Comparison Constraints
Use soft comparison operators for ordinal and threshold relationships:
```
# Risk stratification with thresholds
define high_risk = risk_score > 0.8
define low_risk = risk_score < 0.2
constraint high_risk >> findings weight=0.7
```
### 7. Consensus and Agreement (AND_n)
Model situations where all elements must be true:
```
# All radiologists must agree for high confidence
define consensus = & radiologist_assessments
constraint consensus > 0.7 >> definitive_diagnosis weight=0.9
# All imaging modalities must show findings
define multi_modal_positive = & imaging_results
constraint multi_modal_positive >> high_confidence weight=0.8
```
### 8. Any Evidence Detection (OR_n)
Model situations where any element being true is significant:
```
# Any radiologist expressing concern triggers review
define any_concern = | radiologist_assessments
constraint any_concern > 0.5 >> requires_review weight=0.6
# Any modality showing findings suggests pathology
define any_positive = | imaging_modalities
constraint any_positive >> potential_pathology weight=0.7
```
### 9. Tensor Indexing and Slicing
Access specific elements, patients, or subsets of multi-dimensional data:
```
# REMEMBER: First dimension is always batch when using RuleMammoLoss or RuleBasedConstraintsLoss! Use [:, ...] to preserve batch dimension
# Feature-wise access (CORRECT - preserves batch dimension)
define birads_4 = features[:, 4] # Feature 4 for all batch items
define high_classes = features[:, 4:7] # Features 4-6 for all batch items
define first_half = features[:, :3] # Features 0-2 for all batch items
# Multi-dimensional indexing with batch preservation
define cc_assessments = assessments[:, 0, :] # CC view for all patients
define mlo_assessments = assessments[:, 1, :] # MLO view for all patients
define radiologist_1 = assessments[:, :, 0] # Radiologist 1 across all views/patients
# View-specific analysis preserving batch dimension
define cc_consensus = & cc_assessments # Consensus across CC view features
define mlo_consensus = & mlo_assessments # Consensus across MLO view features
constraint cc_consensus & mlo_consensus >> high_confidence weight=0.9
# Feature subset analysis
define feature_subset = features[:, 2:5] # Specific feature range for all batches
define subset_consensus = & feature_subset
constraint subset_consensus >> specialized_finding weight=0.8
# WRONG - These would access batch items instead of features when using RuleMammoLoss or RuleBasedConstraintsLoss:
# define birads_4 = features[4] # Accesses batch item 4!
# define patient_subset = features[2:5] # Accesses batch items 2-4!
```
### 10. Ordinal Relationships
Model ordered classifications with comparison operators:
```
# BI-RADS ordering constraints
define birads_3_higher = birads_3 >= birads_2
define birads_4_higher = birads_4 >= birads_3
constraint birads_3_higher & birads_4_higher weight=0.8
```
## Error Handling
The logic language provides helpful error messages for common issues:
### Syntax Errors
```
define x = mass_L | # Error: Missing right operand
```
### Undefined Variables
```
define x = undefined_var # Error: Variable 'undefined_var' is not defined
```
### Type Mismatches
```
constraint exactly_one(5) # Error: Expected Truth object, got number
```
### Invalid Functions
```
define x = unknown_func() # Error: Unknown function 'unknown_func'
```
### Batch Dimension Errors
```
# Wrong indexing - accessing batch items instead of features
define birads_4 = features[4] # Error: May cause shape mismatch
# Correct indexing - preserving batch dimension
define birads_4 = features[:, 4] # ✅ Correct: Access feature 4 for all batches
```
## Advanced Features
### Custom Functions
Add domain-specific functions to the interpreter:
```python
def custom_risk_score(mass_prob, calc_prob, birads_prob):
    # Custom risk calculation (illustrative weighted combination)
    combined_risk = 0.5 * mass_prob + 0.3 * calc_prob + 0.2 * birads_prob
    return combined_risk

interpreter.add_builtin_function('risk_score', custom_risk_score)
```
**Note**: Custom functions must handle batch dimensions appropriately and return either a PyTorch tensor or a Truth object. See `soft_logic.py` for reference on Truth objects.
### Dynamic Rule Updates
Modify rules at runtime:
```python
loss_fn.update_rules(new_rules_string)
```
### Multiple Semantics
Choose different logical semantics (the default is "Gödel"):
- **Gödel**: min/max operations (sharp/tunable decision boundaries)
- **Łukasiewicz**: bounded sum operations (smoother but easy to saturate)
- **Product**: multiplication operations (independent probabilities)
```python
loss_fn = RuleMammoLoss(
feature_indices=indices,
rules=rules,
semantics="lukasiewicz" # or "godel", "product"
)
```
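The three semantics give different soft-AND values for the same inputs. Following the min / bounded-sum / product definitions above, a quick comparison:

```python
import torch

a, b = torch.tensor(0.7), torch.tensor(0.6)

godel = torch.minimum(a, b)                    # tensor(0.6000)  min operation
lukasiewicz = torch.clamp(a + b - 1, min=0.0)  # tensor(0.3000)  bounded sum (can saturate at 0)
product = a * b                                # tensor(0.4200)  independent probabilities
```

Note how quickly Łukasiewicz saturates: any pair with `a + b <= 1` yields exactly 0, which is why it is described as easy to saturate.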
## Best Practices
1. **Start Simple**: Begin with basic constraints and add complexity gradually
2. **Use Comments**: Document the medical reasoning behind each constraint
3. **Test Incrementally**: Add constraints one at a time and validate behavior
4. **Meaningful Names**: Use descriptive variable names that reflect medical concepts
5. **Balanced Weights**: Start with equal weights and adjust based on domain importance
6. **Appropriate Transforms**: Use "logbarrier" for strict constraints, "hinge" for softer ones
7. **⚠️ Mind the Batch Dimension**: Always use `[:, ...]` when indexing tensors from `RuleMammoLoss` or `RuleBasedConstraintsLoss`
8. **Validate Tensor Shapes**: Print tensor shapes during development to verify indexing
9. **Test with Different Batch Sizes**: Ensure your logic works with various batch sizes
10. **Leverage Built-in Functions**: Use provided functions like `sum`, `exactly_one`, etc., to make the code cleaner and more efficient
11. **Do Not Use Unbounded Variables**: The package is designed for values in $[0, 1]$; values outside this range can produce unexpected results and clipping issues.
12. **Cautious Use of Arithmetic**: Since the logic language primarily operates on $x \in [0, 1]$, be careful that arithmetic operations do not push values out of bounds. Keep intermediate results within $[0, 1]$ and prefer built-in functions for common patterns.
## Migration from Hard-coded Constraints
To convert existing hard-coded constraints to logic language:
1. **Identify logical patterns** in your constraint code
2. **Extract variable definitions** for reused expressions
3. **Convert constraints** to logic language syntax
4. **Test equivalence** with the original implementation
5. **Refine and optimize** weights and transforms
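As an illustrative before/after sketch of this migration (the function name and the Kleene-Dienes-style soft implication `max(b, 1 - a)` are assumptions for the example, not the library's internals):

```python
import torch

# Before: hard-coded soft implication "findings_L >> high_birads_L" in Python
def implication_penalty(findings_L, high_birads_L, weight=0.7):
    # Kleene-Dienes-style implication: a >> b becomes max(b, 1 - a)
    implication = torch.maximum(high_birads_L, 1 - findings_L)
    return weight * (1 - implication).mean()

# After: the same domain rule as a one-line logic-lang constraint
RULES = 'constraint findings_L >> high_birads_L weight=0.7'
```

The DSL version keeps the medical reasoning readable and editable without touching the training code.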
Raw data
{
"_id": null,
"home_page": null,
"name": "logic-lang",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.8",
"maintainer_email": null,
"keywords": "logic, constraints, dsl, medical-imaging, soft-logic, rule-based, soft-constraints",
"author": null,
"author_email": "Mahbod Issaiy <mahbodissaiy2@gmail.com>",
"download_url": "https://files.pythonhosted.org/packages/8a/c7/3be8063aa39100bc6fa5fa3be15ae416f7a47a5ea02141a55a2089060a96/logic_lang-0.2.4.tar.gz",
"platform": null,
"description": "# Logic Language Documentation\r\n\r\n\r\n\r\n## Overview\r\n\r\nThe Logic Language is a domain-specific language (DSL) designed for defining soft/fuzzy logic constraints in mammography classification. It allows you to replace hard-coded constraint logic with flexible, interpretable logic scripts that can be modified without changing Python code.\r\n\r\n## Installation\r\n\r\nInstall the package using pip:\r\n\r\n```bash\r\npip install logic-lang\r\n```\r\n\r\n### Requirements\r\n\r\n- Python 3.8 or higher\r\n- PyTorch 1.9.0 or higher\r\n- NumPy 1.20.0 or higher\r\n\r\n### Quick Start\r\n\r\n```python\r\nfrom logic_lang import RuleInterpreter\r\n\r\ninterpreter = RuleInterpreter()\r\nfeatures = {\"model_predictions\": torch.tensor([[0.8, 0.1, 0.1]])} \r\nscript = \"\"\"\r\nexpect model_predictions as predictions\r\nconstraint exactly_one(predictions) weight=1.0\r\n\"\"\"\r\nconstraint_set = interpreter.execute(script, features)\r\n```\r\n\r\n## Syntax Reference\r\n\r\n### Comments\r\n\r\nComments start with `#` and continue to the end of the line:\r\n```\r\n# This is a comment\r\ndefine findings_L = mass_L | mc_L # Inline comment\r\n```\r\n\r\n### Variable Definitions\r\n\r\nDefine new variables using logical combinations of existing features:\r\n```\r\ndefine variable_name = expression\r\n```\r\n\r\n### Constant Definitions\r\n\r\nDefine constants for reusable literal values:\r\n```\r\nconst constant_name = value\r\n```\r\n\r\n### Variable Expectations\r\n\r\nDeclare which variables (features) the script expects to be provided, with optional aliasing using the `as` keyword:\r\n```\r\nexpect variable_name\r\nexpect variable1, variable2, variable3\r\nexpect original_name as alias_name\r\nexpect var1 as alias1, var2, var3 as alias3\r\n```\r\n\r\nExamples:\r\n```\r\n# Declare expected variables at the beginning of the script\r\nexpect left_birads, right_birads, mass_L, mc_L\r\n\r\n# Or declare them individually\r\nexpect comp\r\nexpect 
risk_score\r\n\r\n# Use aliasing to rename variables for consistency\r\nexpect left_birads as birads_L, right_birads as birads_R\r\nexpect mass_left as mass_L, mass_right as mass_R\r\n\r\n# Mix aliased and non-aliased variables\r\nexpect predictions as preds, labels, confidence as conf\r\n\r\n# Define constants for thresholds\r\nconst high_threshold = 0.8\r\nconst low_threshold = 0.2\r\nconst birads_cutoff = 4\r\n\r\n# Basic logical operations with literals\r\ndefine findings_L = mass_L | mc_L\r\ndefine high_risk = risk_score > 0.7 # Using literal number\r\ndefine moderate_risk = risk_score > low_threshold # Using constant\r\n\r\n# Function calls with mixed literals and variables\r\ndefine high_birads = sum(birads_L, [4, 5, 6])\r\ndefine threshold_check = risk_score >= high_threshold\r\n```\r\n\r\n### Constraints\r\n\r\nDefine constraints that will be enforced during training:\r\n```\r\nconstraint expression [weight=value] [transform=\"type\"] [param=value ...]\r\n```\r\n\r\nExamples:\r\n```\r\n# Basic constraint\r\nconstraint exactly_one(birads_L)\r\n\r\n# Constraint with weight and transform\r\nconstraint findings_L >> high_birads weight=0.7 transform=\"logbarrier\"\r\n\r\n# Constraint with multiple parameters\r\nconstraint exactly_one(comp) weight=1.5 transform=\"hinge\" alpha=2.0\r\n```\r\n\r\n## Operators\r\n\r\n### Logical Operators (in order of precedence, lowest to highest)\r\n\r\n1. **Implication (`>>`)**: A >> B (if A then B)\r\n ```\r\n constraint findings_L >> high_birads_L\r\n ```\r\n\r\n2. **OR (`|`)**: A | B (A or B)\r\n ```\r\n define findings = mass_L | mc_L\r\n ```\r\n\r\n3. **XOR (`^`)**: A ^ B (A exclusive or B)\r\n ```\r\n define exclusive = mass_L ^ mc_L\r\n ```\r\n\r\n4. **Comparison Operators**: `>`, `<`, `==`, `>=`, `<=`\r\n ```\r\n define high_risk = risk_score > threshold_value\r\n define similar_scores = score_a == score_b\r\n define within_range = score >= min_val & score <= max_val\r\n ```\r\n\r\n5. 
**AND (`&`)**: A & B (A and B)\r\n ```\r\n define strict_findings = mass_L & high_confidence\r\n ```\r\n\r\n6. **AND_n (`& variable`)**: AND across all elements in a tensor\r\n ```\r\n # All radiologists must agree (consensus)\r\n define consensus = & radiologist_assessments\r\n \r\n # All imaging modalities must show findings\r\n define all_modalities_positive = & imaging_results\r\n ```\r\n\r\n7. **OR_n (`| variable`)**: OR across all elements in a tensor \r\n ```\r\n # Any radiologist found something\r\n define any_concern = | radiologist_assessments\r\n \r\n # Any imaging modality shows findings\r\n define any_positive = | imaging_results\r\n ```\r\n\r\n8. **NOT (`~`)**: ~A (not A)\r\n ```\r\n define no_findings = ~findings_L\r\n ```\r\n\r\n9. **Indexing (`variable[...]`)**: Access tensor elements using numpy/pytorch syntax\r\n ```\r\n # IMPORTANT: When indexing tensors from RuleMammoLoss, you MUST account for the batch dimension!\r\n # Tensors have shape (batch_size, ...), so the first index is always the batch dimension\r\n \r\n # Access specific features for all batch items (CORRECT)\r\n define birads_class_4 = features[:, 4] # All batches, class 4\r\n define high_birads = features[:, 4:7] # All batches, classes 4-6\r\n define view_data = assessments[:, 1, :] # All batches, view 1, all features\r\n \r\n # Multi-dimensional indexing with batch preservation\r\n define patient_features = batch_data[:, 0, 2] # All batches, patient 0, feature 2\r\n define cc_view = assessments[:, :, 0] # All batches, all views, radiologist 0\r\n \r\n # WRONG - These would try to access specific batch items instead of features:\r\n # define birads_class_4 = features[4] # Would access batch item 4!\r\n # define high_birads = features[4:7] # Would access batch items 4-6!\r\n ```\r\n\r\n## \u26a0\ufe0f Important Cautions\r\n\r\n### Batch Dimension Handling\r\n\r\nWhen using the `RuleMammoLoss` or `RuleBasedConstraintsLoss` with tensor indexing in your logic scripts, **you must 
explicitly account for the batch dimension**:\r\n\r\n```python\r\n# \u2705 CORRECT: Always preserve batch dimension with ':' \r\ndefine birads_4 = features[:, 4] # Access feature 4 for all batch items\r\ndefine classes_4to6 = features[:, 4:7] # Access features 4-6 for all batch items\r\ndefine view_cc = assessments[:, 0, :] # Access CC view for all batch items\r\n\r\n# \u274c WRONG: These access batch items, not features!\r\ndefine birads_4 = features[4] # Accesses batch item 4, not feature 4!\r\ndefine classes_4to6 = features[4:7] # Accesses batch items 4-6!\r\n```\r\n\r\n**Why this matters:**\r\n- `RuleMammoLoss`/`RuleBasedConstraintsLoss` pass tensors with shape `(batch_size, ...)` to the interpreter\r\n- The first dimension is always the batch dimension\r\n- Logic operations need to work across the entire batch\r\n- Incorrect indexing will cause shape mismatches and unexpected behavior\r\n\r\n### Tensor Shape Awareness\r\n\r\nAlways be aware of your tensor shapes when writing logic scripts:\r\n\r\n```python\r\n# If your features have shape (B, 7) for 7 BI-RADS classes:\r\ndefine high_birads = features[:, 4:] # \u2705 Classes 4,5,6 for all batches\r\n\r\n# If your assessments have shape (B, 2, 3) for 2 views, 3 radiologists:\r\ndefine cc_radiologist_1 = assessments[:, 0, 1] # \u2705 CC view, radiologist 1, all batches\r\ndefine mlo_consensus = assessments[:, 1, :] # \u2705 MLO view, all radiologists, all batches\r\n```\r\n\r\n### Parentheses\r\n\r\nUse parentheses to override operator precedence:\r\n```\r\ndefine complex = (mass_L | mc_L) & ~(birads_L >> findings_L)\r\n```\r\n\r\n### Negative Numbers\r\n\r\nLogic-lang supports negative numbers in all numeric contexts:\r\n\r\n```\r\n# Negative constants\r\nconst negative_threshold = -0.5\r\nconst offset = -10\r\n\r\n# Negative literals in expressions\r\ndefine below_zero = risk_score > -0.1\r\ndefine centered = features[:, 0] >= -1.0\r\n\r\n# Negative weights in constraints\r\nconstraint findings_L >> high_birads 
### Negative Numbers

Logic-lang supports negative numbers in all numeric contexts:

```
# Negative constants
const negative_threshold = -0.5
const offset = -10

# Negative literals in expressions
define below_zero = risk_score > -0.1
define centered = features[:, 0] >= -1.0

# Negative weights in constraints
constraint findings_L >> high_birads weight=-0.3

# Complex expressions with negative numbers
define adjusted_score = risk_score > (-threshold + 0.1)
define negative_range = score >= -5 & score <= -1
```

**Note:** Negative numbers work in:
- Constant definitions (`const neg = -5`)
- Literal values in expressions (`score > -0.5`)
- Constraint weights (`weight=-0.3`)
- Constraint parameters (`alpha=-2.0`)
- Complex arithmetic expressions (`value + (-10)`)

The unary minus operator has high precedence, so `-5 + 3` is parsed as `(-5) + 3 = -2`.

### Arithmetic Operations

Logic-lang supports basic arithmetic operations with proper precedence:

```
# Basic arithmetic in constants
const sum_result = 5 + 3    # Addition: 8
const diff_result = 10 - 3  # Subtraction: 7
const prod_result = 4 * 2   # Multiplication: 8
const div_result = 8 / 2    # Division: 4

# Complex expressions with parentheses
const complex = (5 + 3) * 2 - 1  # Result: 15

# Arithmetic with variables (tensors)
define sum_scores = score_a + score_b
define scaled_score = risk_score * 2.0
define normalized = (score - min_val) / (max_val - min_val)

# Mixed arithmetic and logical operations
define high_combined = (score_a + score_b) > threshold
define weighted_decision = prediction * weight > 0.5
```

**Operator Precedence (highest to lowest):**
1. Parentheses `()`
2. Unary operators `-`, `+`, `~`
3. Multiplication and division `*`, `/`
4. Addition and subtraction `+`, `-`
5. Comparisons `>`, `<`, `==`, `>=`, `<=`
6. Logical AND `&`
7. Logical XOR `^`
8. Logical OR `|`
9. Implication `>>`
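As a concrete instance of the arithmetic above, min-max normalization keeps derived scores inside [0, 1]. A NumPy sketch of the same elementwise computation the DSL's `normalized` definition performs:

```python
import numpy as np

score = np.array([0.2, 0.5, 0.9])
min_val, max_val = score.min(), score.max()

# Matches: define normalized = (score - min_val) / (max_val - min_val)
normalized = (score - min_val) / (max_val - min_val)
print(normalized.min(), normalized.max())  # 0.0 1.0
```

Normalizing like this before feeding values into logical operators is one way to honor the [0, 1] range the language expects (see Best Practices below).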
**Type Handling:**
- **Numbers + Numbers**: Returns a number (`5 + 3 = 8`)
- **Tensors + Tensors**: Returns a tensor (`tensor([[2]]) + tensor([[3]]) = tensor([[5]])`)
- **Numbers + Tensors**: Returns a tensor (broadcasting applies)
- **Truth + Truth**: Returns a Truth object with arithmetic on the underlying values

## Statement Separation

### Semicolons

You can use semicolons (`;`) to separate multiple statements on a single line, similar to Python:

```
# Multiple statements on one line
expect a, b; define c = a | b; constraint c

# Mix of semicolons and newlines
const threshold = 0.5; expect risk_score
define high_risk = risk_score > threshold
constraint high_risk weight=0.8

# Multiple constants and definitions
const low = 0.2; const high = 0.8; define range_check = value >= low & value <= high
```

### Line-based Separation

Statements can also be separated by newlines (the traditional approach):
```
expect findings_L, findings_R
define bilateral = findings_L & findings_R
constraint bilateral weight=0.6
```

### Trailing Semicolons

Trailing semicolons are optional and ignored:
```
expect variables;
define result = expression;
constraint result;
```

## Built-in Functions

### `sum(probabilities, indices)`

Sum probabilities for the specified class indices along the last dimension:
```
define high_birads_L = sum(birads_L, [4, 5, 6])
define very_high_birads = sum(birads_L, [5, 6])
```

### `exactly_one(probabilities)`

Create an exactly-one constraint for categorical probabilities along the last dimension:
```
constraint exactly_one(birads_L) weight=1.0
```

### `mutual_exclusion(...probabilities)`

Create a mutual exclusion constraint between multiple probabilities:
```
constraint mutual_exclusion(mass_L, mc_L) weight=0.5
```
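`sum(probabilities, indices)` amounts to an index-select plus a sum along the last dimension. A NumPy sketch of the equivalent computation (`sum_indices` is a hypothetical helper name, not the package's internal one):

```python
import numpy as np

def sum_indices(probs, indices):
    # Select the given class indices along the last dimension and sum them
    return probs[..., indices].sum(axis=-1)

# One case with 7 BI-RADS class probabilities, shape (1, 7)
birads_L = np.array([[0.05, 0.10, 0.15, 0.20, 0.25, 0.15, 0.10]])

high_birads_L = sum_indices(birads_L, [4, 5, 6])
print(high_birads_L)  # [0.5]
```

Because the reduction runs over the last dimension, the batch dimension is preserved automatically, which is why `sum` is safe to use without explicit `[:, ...]` indexing.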
### `at_least_k(probabilities, k)`

Create a constraint that at least k elements must be true along the last dimension:
```
define min_two_findings = at_least_k(findings_combined, 2)
constraint min_two_findings weight=0.6
```

**Caution:** `at_least_k` uses combinatorial logic and may be slow for large tensors or high k values.

### `at_most_k(probabilities, k)`

Create a constraint that at most k elements can be true along the last dimension:
```
define max_one_high_birads = at_most_k(high_birads_indicators, 1)
constraint max_one_high_birads weight=0.7
```

**Caution:** `at_most_k` uses combinatorial logic and may be slow for large tensors or high k values.

### `exactly_k(probabilities, k)`

Create a constraint that exactly k elements must be true along the last dimension:
```
define exactly_two_radiologists = exactly_k(radiologist_agreement, 2)
constraint exactly_two_radiologists weight=0.8
```

**Caution:** `exactly_k` uses combinatorial logic and may be slow for large tensors or high k values.

### `threshold_implication(antecedent, consequent, threshold)`

Create a threshold-based implication constraint:
```
define strong_implication = threshold_implication(high_risk_L, findings_L, 0.7)
constraint strong_implication weight=0.9
```

### `conditional_probability(condition, event, target_prob)`

Create a conditional probability constraint:
```
define conditional_findings = conditional_probability(high_birads_L, findings_L, 0.85)
constraint conditional_findings weight=0.8
```

### `iff(left, right)`

Create a logical biconditional (if and only if) constraint:
```
define balanced_assessment = iff(risk_L, risk_R)
constraint balanced_assessment weight=0.4
```

### `clamp(tensor, min_val, max_val)`

Clamp tensor values to the specified range:
```
define clamped_mass = clamp(mass_L, 0.1, 0.9)
```
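The combinatorial caution is concrete: if a k-of-n constraint is built by enumerating subsets (as the warning suggests), the work grows as C(n, k). A quick check with Python's standard library shows how fast that explodes:

```python
from math import comb

# Number of k-element subsets of n elements: C(n, k)
for n, k in [(5, 2), (10, 5), (20, 10)]:
    print(f"n={n}, k={k}: {comb(n, k)} subsets")
# n=20, k=10 already yields 184756 subsets
```

Keeping `n` (the size of the last dimension) small, or preferring `sum` plus a threshold comparison where the semantics allow it, avoids this blow-up.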
### `threshold(tensor, threshold)`

Apply a threshold to a tensor:
```
define binary_mass = threshold(mass_L, 0.5)
```

### `greater_than(left, right)`

Create a soft greater-than comparison between two tensors:
```
define high_confidence = greater_than(confidence, baseline)
```

### `less_than(left, right)`

Create a soft less-than comparison between two tensors:
```
define low_risk = less_than(risk_score, threshold_low)
```

### `equals(left, right)`

Create a soft equality comparison between two tensors:
```
define similar_scores = equals(score_a, score_b)
```

### `threshold_constraint(tensor, threshold, operator)`

Create a threshold constraint with the specified comparison operator:
```
define high_birads = threshold_constraint(birads_score, 0.7, ">")
define exact_match = threshold_constraint(prediction, 0.5, "==")
define within_bounds = threshold_constraint(value, 0.3, ">=")
```

## Data Types

### Numbers

Integer or floating-point numbers can be used directly in expressions:
```
define high_risk = risk_score > 0.8
define moderate = value >= 0.3 & value <= 0.7
constraint threshold_check weight=1.5  # Literal number as parameter
```

### Strings

Text values enclosed in quotes:
```
transform="logbarrier"
transform='hinge'
const model_type = "transformer"
```

### Lists

Arrays of values:
```
[1, 2, 3]
[4, 5, 6]
const important_classes = [4, 5, 6]  # Constants can store lists
```

### Mixed Type Expressions

The logic language automatically handles mixed types in expressions:
```
# Tensor compared with a literal number
define high_values = predictions > 0.5

# Tensor compared with a constant
const threshold = 0.7
define above_threshold = scores >= threshold

# Combining constants and variables
const low_cut = 0.2
const high_cut = 0.8
define in_range = (values >= low_cut) & (values <= high_cut)
```
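Soft comparisons return graded truth values rather than hard booleans. One common construction for this kind of operator is a sigmoid of the difference with a sharpness parameter; this is a plausible sketch, not necessarily the package's exact formula:

```python
import numpy as np

def soft_greater_than(left, right, temperature=10.0):
    # Approaches 1 when left >> right, 0 when left << right, 0.5 at equality;
    # higher temperature makes the transition sharper
    return 1.0 / (1.0 + np.exp(-temperature * (left - right)))

print(soft_greater_than(0.9, 0.1))  # close to 1
print(soft_greater_than(0.5, 0.5))  # 0.5
print(soft_greater_than(0.1, 0.9))  # close to 0
```

The key property, whatever the exact formula, is differentiability: a soft comparison provides a gradient for training, which a hard `>` cannot.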
## Constraint Parameters

### `weight` (float)

Relative importance of the constraint:
```
constraint exactly_one(birads_L) weight=2.0  # Higher weight = more important
```

### `transform` (string)

Loss transformation method:
- `"logbarrier"`: Logarithmic barrier (default; smooth, steep penalties)
- `"hinge"`: Hinge loss (softer penalties)
- `"linear"`: Linear loss (proportional penalties)

```
constraint findings >> high_birads transform="hinge"
```

### Custom Parameters

Additional parameters specific to constraint types:
```
constraint exactly_one(birads_L) weight=1.0 alpha=2.0 beta=0.5
```
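The transform names correspond to standard penalty shapes. A sketch of plausible forms (assumptions about the exact parameterization; the package may scale these differently), where `t` is a constraint's truth value in [0, 1] and higher truth should mean lower loss:

```python
import numpy as np

def logbarrier(t, eps=1e-6):
    # Steep penalty as truth approaches 0; zero penalty at full truth
    return -np.log(np.clip(t, eps, 1.0))

def hinge(t, margin=1.0):
    # Linear penalty below the margin, zero at or above it
    return np.maximum(0.0, margin - t)

def linear(t):
    # Penalty directly proportional to how far the constraint is from satisfied
    return 1.0 - t

for t in [0.1, 0.5, 0.9]:
    print(f"t={t}: logbarrier={logbarrier(t):.3f}, "
          f"hinge={hinge(t):.3f}, linear={linear(t):.3f}")
```

The qualitative difference matches the guidance above: the log barrier punishes strong violations much harder than the hinge or linear forms, which is why it suits strict constraints.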
## Complete Example

```
# Mammography Constraint Rules
# ============================

# Declare expected variables from model output
expect mass_L, mass_R, mc_L, mc_R
expect birads_L, birads_R, birads_score_L, birads_score_R
expect comp

# Define constants for reusable thresholds
const high_risk_threshold = 0.7
const low_risk_threshold = 0.3
const birads_high_cutoff = 4
const birads_very_high_cutoff = 5

# Feature definitions - combine findings per breast
define findings_L = mass_L | mc_L
define findings_R = mass_R | mc_R

# BI-RADS probability groups
define high_birads_L = sum(birads_L, [4, 5, 6])
define high_birads_R = sum(birads_R, [4, 5, 6])
define very_high_birads_L = sum(birads_L, [5, 6])
define very_high_birads_R = sum(birads_R, [5, 6])
define low_birads_L = sum(birads_L, [1, 2])
define low_birads_R = sum(birads_R, [1, 2])

# Threshold-based risk assessments using literals and constants
define high_risk_L = birads_score_L > high_risk_threshold
define high_risk_R = birads_score_R > high_risk_threshold
define very_low_risk_L = birads_score_L < low_risk_threshold
define very_low_risk_R = birads_score_R < low_risk_threshold
define balanced_assessment = equals(birads_score_L, birads_score_R)

# Range constraints using multiple comparisons with literals
define valid_risk_range_L = (birads_score_L >= 0.0) & (birads_score_L <= 1.0)
define valid_risk_range_R = (birads_score_R >= 0.0) & (birads_score_R <= 1.0)

# No findings (negation of findings)
define no_findings_L = ~findings_L
define no_findings_R = ~findings_R

# Categorical exclusivity constraints
constraint exactly_one(birads_L) weight=1.0 transform="logbarrier"
constraint exactly_one(birads_R) weight=1.0 transform="logbarrier"
constraint exactly_one(comp) weight=0.7 transform="logbarrier"

# Logical implication constraints using threshold variables
constraint high_risk_L >> findings_L weight=0.8 transform="logbarrier"
constraint high_risk_R >> findings_R weight=0.8 transform="logbarrier"

# Very high BI-RADS (5-6) -> findings
constraint very_high_birads_L >> findings_L weight=0.7 transform="logbarrier"
constraint very_high_birads_R >> findings_R weight=0.7 transform="logbarrier"

# Low BI-RADS -> no findings (gentle constraint)
constraint very_low_risk_L >> no_findings_L weight=0.3 transform="logbarrier"
constraint very_low_risk_R >> no_findings_R weight=0.3 transform="logbarrier"

# Range validation constraints
constraint valid_risk_range_L weight=2.0 transform="logbarrier"
constraint valid_risk_range_R weight=2.0 transform="logbarrier"

# Comparison-based constraint
constraint balanced_assessment weight=0.4 transform="hinge"
```

## Usage Patterns
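Because a rules script is just a string, it can also be assembled programmatically before being passed to the interpreter, for example generating the `const` lines from a Python dict of thresholds (plain string handling, no package API involved):

```python
# Threshold values kept in ordinary Python data
thresholds = {"high_risk_threshold": 0.7, "low_risk_threshold": 0.3}

# Generate const statements, then append the hand-written rules
lines = [f"const {name} = {value}" for name, value in thresholds.items()]
lines += [
    "expect birads_score_L, birads_score_R, mass_L, mc_L",
    "define findings_L = mass_L | mc_L",
    "define high_risk_L = birads_score_L > high_risk_threshold",
    "constraint high_risk_L >> findings_L weight=0.8",
]
rules = "\n".join(lines)
print(rules)
```

This keeps tunable numbers in one place (a config file, a sweep) while the logical structure stays readable in the script.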
### 1. Variable Expectations

Declare required variables at the beginning of scripts for better error handling:
```
# Declare all expected model outputs in one line
expect left_mass_prob, right_mass_prob, left_birads, right_birads, composition

# Now use these variables with confidence
define findings_L = left_mass_prob > 0.5
constraint exactly_one(left_birads)
```

### 2. Categorical Constraints

Ensure exactly one category is selected:
```
constraint exactly_one(birads_L) weight=1.0
constraint exactly_one(composition) weight=0.8
```

### 3. Implication Rules

Model domain knowledge as if-then relationships:
```
# If findings are present, then a high BI-RADS is likely
constraint findings_L >> high_birads_L weight=0.7

# If BI-RADS is very high, then findings must be present
constraint very_high_birads_L >> findings_L weight=0.8
```

### 4. Mutual Exclusion

Prevent conflicting classifications:
```
constraint mutual_exclusion(mass_L, calc_L) weight=0.5
```

### 5. Threshold Rules

Apply domain-specific thresholds:
```
define suspicious = threshold(combined_score, 0.7)
constraint suspicious >> high_birads weight=0.6
```

### 6. Comparison Constraints

Use soft comparison operators for ordinal and threshold relationships:
```
# Risk stratification with thresholds
define high_risk = risk_score > 0.8
define low_risk = risk_score < 0.2
constraint high_risk >> findings weight=0.7
```

### 7. Consensus and Agreement (AND_n)

Model situations where all elements must be true:
```
# All radiologists must agree for high confidence
define consensus = & radiologist_assessments
constraint consensus > 0.7 >> definitive_diagnosis weight=0.9

# All imaging modalities must show findings
define multi_modal_positive = & imaging_results
constraint multi_modal_positive >> high_confidence weight=0.8
```

### 8. Any Evidence Detection (OR_n)

Model situations where any element being true is significant:
```
# Any radiologist expressing concern triggers review
define any_concern = | radiologist_assessments
constraint any_concern > 0.5 >> requires_review weight=0.6

# Any modality showing findings suggests pathology
define any_positive = | imaging_modalities
constraint any_positive >> potential_pathology weight=0.7
```

### 9. Tensor Indexing and Slicing

Access specific elements, patients, or subsets of multi-dimensional data:
```
# REMEMBER: The first dimension is always the batch when using RuleMammoLoss or
# RuleBasedConstraintsLoss! Use [:, ...] to preserve the batch dimension

# Feature-wise access (CORRECT - preserves batch dimension)
define birads_4 = features[:, 4]        # Feature 4 for all batch items
define high_classes = features[:, 4:7]  # Features 4-6 for all batch items
define first_half = features[:, :3]     # Features 0-2 for all batch items

# Multi-dimensional indexing with batch preservation
define cc_assessments = assessments[:, 0, :]   # CC view for all patients
define mlo_assessments = assessments[:, 1, :]  # MLO view for all patients
define radiologist_1 = assessments[:, :, 0]    # Radiologist 1 across all views/patients

# View-specific analysis preserving the batch dimension
define cc_consensus = & cc_assessments    # Consensus across CC view features
define mlo_consensus = & mlo_assessments  # Consensus across MLO view features
constraint cc_consensus & mlo_consensus >> high_confidence weight=0.9

# Feature subset analysis
define feature_subset = features[:, 2:5]  # Specific feature range for all batches
define subset_consensus = & feature_subset
constraint subset_consensus >> specialized_finding weight=0.8

# WRONG - These would access batch items instead of features when using
# RuleMammoLoss or RuleBasedConstraintsLoss:
# define birads_4 = features[4]           # Accesses batch item 4!
# define patient_subset = features[2:5]   # Accesses batch items 2-4!
```
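Under Gödel semantics (the default; see Multiple Semantics below), the n-ary `&` reduction is a minimum and `|` is a maximum. A NumPy sketch of the consensus and any-evidence patterns, assuming the reduction runs over the last dimension:

```python
import numpy as np

# Three radiologists' graded assessments for a batch of 2 cases, shape (2, 3)
assessments = np.array([[0.9, 0.8, 0.7],
                        [0.9, 0.2, 0.8]])

consensus = assessments.min(axis=-1)    # AND_n under Gödel semantics
any_concern = assessments.max(axis=-1)  # OR_n under Gödel semantics
print(consensus)    # [0.7 0.2]
print(any_concern)  # [0.9 0.9]
```

Note how one dissenting low score (0.2) drags the consensus down for the second case while leaving the any-evidence value untouched, which is exactly the behavior the two patterns are meant to capture.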
### 10. Ordinal Relationships

Model ordered classifications with comparison operators:
```
# BI-RADS ordering constraints
define birads_3_higher = birads_3 >= birads_2
define birads_4_higher = birads_4 >= birads_3
constraint birads_3_higher & birads_4_higher weight=0.8
```

## Error Handling

The logic language provides helpful error messages for common issues:

### Syntax Errors

```
define x = mass_L |  # Error: Missing right operand
```

### Undefined Variables

```
define x = undefined_var  # Error: Variable 'undefined_var' is not defined
```

### Type Mismatches

```
constraint exactly_one(5)  # Error: Expected Truth object, got number
```

### Invalid Functions

```
define x = unknown_func()  # Error: Unknown function 'unknown_func'
```

### Batch Dimension Errors

```
# Wrong indexing - accessing batch items instead of features
define birads_4 = features[4]     # Error: May cause shape mismatch
# Correct indexing - preserving the batch dimension
define birads_4 = features[:, 4]  # ✅ Correct: Access feature 4 for all batches
```

## Advanced Features

### Custom Functions

Add domain-specific functions to the interpreter:
```python
def custom_risk_score(mass_prob, calc_prob, birads_prob):
    # Custom risk calculation
    return combined_risk

interpreter.add_builtin_function('risk_score', custom_risk_score)
```
**Note**: Custom functions must handle batch dimensions appropriately and return either a PyTorch tensor or a Truth object.
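A minimal runnable sketch of such a custom function, with an illustrative formula only (any combination that respects the batch dimension and stays in [0, 1] would do):

```python
import numpy as np

def custom_risk_score(mass_prob, calc_prob, birads_prob):
    # Illustrative: the strongest single signal drives the risk.
    # Elementwise max preserves the batch dimension and keeps values in [0, 1].
    return np.maximum.reduce([mass_prob, calc_prob, birads_prob])

# Registration (assuming an interpreter instance exists):
# interpreter.add_builtin_function("risk_score", custom_risk_score)

# The function must work batch-wise:
batch = custom_risk_score(np.array([0.2, 0.1]),
                          np.array([0.7, 0.3]),
                          np.array([0.4, 0.9]))
print(batch)  # [0.7 0.9]
```

After registration, scripts can call it like any built-in, e.g. `define risk = risk_score(mass_L, mc_L, high_birads_L)`.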
See `soft_logic.py` for reference on Truth objects.

### Dynamic Rule Updates

Modify rules at runtime:
```python
loss_fn.update_rules(new_rules_string)
```

### Multiple Semantics

Choose between different logical semantics (the default is "Gödel"):
- **Gödel**: min/max operations (sharp, tunable decision boundaries)
- **Łukasiewicz**: bounded-sum operations (smoother, but easy to saturate)
- **Product**: multiplication operations (independent probabilities)

```python
loss_fn = RuleMammoLoss(
    feature_indices=indices,
    rules=rules,
    semantics="lukasiewicz",  # or "godel", "product"
)
```

## Best Practices

1. **Start Simple**: Begin with basic constraints and add complexity gradually
2. **Use Comments**: Document the medical reasoning behind each constraint
3. **Test Incrementally**: Add constraints one at a time and validate behavior
4. **Meaningful Names**: Use descriptive variable names that reflect medical concepts
5. **Balanced Weights**: Start with equal weights and adjust based on domain importance
6. **Appropriate Transforms**: Use "logbarrier" for strict constraints, "hinge" for softer ones
7. **⚠️ Mind the Batch Dimension**: Always use `[:, ...]` when indexing tensors from `RuleMammoLoss` or `RuleBasedConstraintsLoss`
8. **Validate Tensor Shapes**: Print tensor shapes during development to verify indexing
9. **Test with Different Batch Sizes**: Ensure your logic works with various batch sizes
10. **Leverage Built-in Functions**: Use provided functions like `sum`, `exactly_one`, etc., to keep scripts cleaner and more efficient
11. **Avoid Unbounded Variables**: The package is not designed for values outside $[0, 1]$; you may get unexpected results and clipping issues
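The three semantics differ in how conjunction (and dually, disjunction) is computed. A side-by-side sketch using the textbook t-norm definitions (the package's implementation may differ in details):

```python
def godel_and(a, b):
    # Gödel t-norm: the weakest conjunct decides the result
    return min(a, b)

def lukasiewicz_and(a, b):
    # Łukasiewicz t-norm: bounded sum, saturates to 0 once a + b <= 1
    return max(0.0, a + b - 1.0)

def product_and(a, b):
    # Product t-norm: behaves like independent probabilities
    return a * b

a, b = 0.7, 0.6
print(godel_and(a, b))        # 0.6
print(lukasiewicz_and(a, b))  # ≈ 0.3
print(product_and(a, b))      # ≈ 0.42
```

The saturation behavior visible in `lukasiewicz_and` is exactly the "easy to saturate" caveat above: once truth values are low enough, the conjunction (and hence its gradient) collapses to zero.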
12. **Use Arithmetic Cautiously**: Since the logic language primarily operates on values $x \in [0, 1]$, take care that arithmetic operations do not push values out of bounds. Keep intermediate results within $[0, 1]$ and use built-in functions for common patterns.

## Migration from Hard-coded Constraints

To convert existing hard-coded constraints to logic language:

1. **Identify logical patterns** in your constraint code
2. **Extract variable definitions** for reused expressions
3. **Convert constraints** to logic language syntax
4. **Test equivalence** with the original implementation
5. **Refine and optimize** weights and transforms
"bugtrack_url": null,
"license": null,
"summary": "A domain-specific language for defining soft logic constraints in medical/general domains",
"version": "0.2.4",
"project_urls": {
"Bug Tracker": "https://github.com/mahbodez/logic-lang-package/issues",
"Documentation": "https://github.com/mahbodez/logic-lang-package#readme",
"Homepage": "https://github.com/mahbodez/logic-lang-package",
"Repository": "https://github.com/mahbodez/logic-lang-package.git"
},
"split_keywords": [
"logic",
" constraints",
" dsl",
" medical-imaging",
" soft-logic",
" rule-based",
" soft-constraints"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "0e1961d425af44305aa9c69d2852dace831550925d2e5a915a08bcb6bedd69f4",
"md5": "ad232cf2c9d7c19e8b91cb6ec125cbb7",
"sha256": "1b7c59379a4e7172b6f7563bc901226f95f8f30e934f8a61a4d6b94471bb1936"
},
"downloads": -1,
"filename": "logic_lang-0.2.4-py3-none-any.whl",
"has_sig": false,
"md5_digest": "ad232cf2c9d7c19e8b91cb6ec125cbb7",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.8",
"size": 43690,
"upload_time": "2025-09-08T05:27:44",
"upload_time_iso_8601": "2025-09-08T05:27:44.716454Z",
"url": "https://files.pythonhosted.org/packages/0e/19/61d425af44305aa9c69d2852dace831550925d2e5a915a08bcb6bedd69f4/logic_lang-0.2.4-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "8ac73be8063aa39100bc6fa5fa3be15ae416f7a47a5ea02141a55a2089060a96",
"md5": "515b435ae677b1fca12bd3d111d07ddc",
"sha256": "5ae35472ffe32ccf4fa11d17f2a172610701c091dfb3fa2f8e69f3c7ada504de"
},
"downloads": -1,
"filename": "logic_lang-0.2.4.tar.gz",
"has_sig": false,
"md5_digest": "515b435ae677b1fca12bd3d111d07ddc",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.8",
"size": 64485,
"upload_time": "2025-09-08T05:27:48",
"upload_time_iso_8601": "2025-09-08T05:27:48.027948Z",
"url": "https://files.pythonhosted.org/packages/8a/c7/3be8063aa39100bc6fa5fa3be15ae416f7a47a5ea02141a55a2089060a96/logic_lang-0.2.4.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-09-08 05:27:48",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "mahbodez",
"github_project": "logic-lang-package",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "logic-lang"
}