# video-comprehension-score

- **Name:** video-comprehension-score
- **Version:** 1.0.0
- **Summary:** Video Comprehension Score (VCS) - A comprehensive metric for evaluating narrative similarity between reference and generated text
- **Upload time:** 2025-07-25 04:00:55
- **Author:** Harsh Dubey <had7143@gmail.com>
- **Maintainer:** Chulwoo Pack <chulwoo.pack@sdstate.edu>
- **Requires Python:** >=3.8
- **License:** MIT
- **Keywords:** text-similarity, narrative-analysis, nlp, video-comprehension, text-evaluation, semantic-similarity, alignment-metrics
            <div align="center">
  <a href="https://github.com/hdubey-debug/vcs">
    <img src=".github/assets/vcs.gif" alt="VCS Process Flow" width="100%"/>
  </a>
  <p align="center">
    <em>A Comprehensive Python Library for Narrative Similarity Evaluation Between Long, Detailed Descriptions</em>
    <br />
  </p>
</div>

<div align="center">

[![PyPI version](https://img.shields.io/pypi/v/video-comprehension-score?color=teal&style=for-the-badge)](https://badge.fury.io/py/video-comprehension-score)
[![Python 3.10+](https://img.shields.io/badge/python-3.10+-teal?style=for-the-badge&logo=python&logoColor=white)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-teal?style=for-the-badge)](https://opensource.org/licenses/MIT)
[![Documentation](https://img.shields.io/badge/docs-github.io-teal?style=for-the-badge&logo=gitbook&logoColor=white)](https://multimodal-intelligence-lab.github.io/Video-Comprehension-Score/)

</div>

<p align="center">
  <a href="https://arxiv.org/abs/placeholder-link">📄 Research Paper</a>
  ·
  <a href="https://colab.research.google.com/drive/1l6GXWNBGFM1UwGohnIu1b071bn8ekJIf?usp=sharing">📓 Interactive Notebook</a>
</p>

---

## 🤔 What is VCS?

Recent advances in Large Video Language Models (LVLMs) have significantly enhanced automated video understanding, enabling detailed, long-form narratives of complex video content. However, accurately evaluating whether these models genuinely comprehend the video's narrative (its events, entities, interactions, and chronological coherence) remains challenging.

**Why Existing Metrics Fall Short:**

- **N-gram Metrics (e.g., BLEU, ROUGE, CIDEr)**: Primarily measure lexical overlap, penalizing valid linguistic variations and inadequately evaluating narrative chronology.

- **Embedding-based Metrics (e.g., BERTScore, SBERT)**: Improve semantic sensitivity but struggle with extended context, detailed content alignment, and narrative sequencing.

- **LLM-based Evaluations**: Often inconsistent, lacking clear criteria for narrative structure and chronology assessments.

Moreover, traditional benchmarks largely rely on question-answering tasks, which only test isolated events or entities rather than holistic video comprehension. A model answering specific questions correctly does not necessarily demonstrate understanding of the overall narrative or the intricate interplay of events.

**Introducing VCS (Video Comprehension Score):**

VCS is a Python library specifically designed to overcome these challenges by evaluating narrative comprehension through direct comparison of extensive, detailed video descriptions generated by LVLMs against human-written references. Unlike traditional metrics, VCS assesses whether models capture the overall narrative structure, event sequencing, and thematic coherence, not just lexical or isolated semantic matches.

**Core Components of VCS:**

- 🌍 **Global Alignment Score (GAS)**: Captures overall thematic alignment, tolerating stylistic variations without penalizing valid linguistic differences.

- 🎯 **Local Alignment Score (LAS)**: Checks detailed semantic correspondence at the chunk level, allowing minor descriptive variations while penalizing significant inaccuracies or omissions.

- 📖 **Narrative Alignment Score (NAS)**: Evaluates chronological consistency, balancing the need for both strict event sequencing and permissible narrative flexibility.

Initially developed for evaluating video comprehension by comparing generated and human-written video narratives, VCS is versatile enough for broader applications, including document-level narrative comparisons, analysis of extensive narrative content, and various other narrative similarity tasks.
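When `return_all_metrics=True`, each component is reported alongside the combined score. Here is a minimal, self-contained sketch; the segmenter and embedder below are deliberately trivial stand-ins (see Getting Started for realistic choices):

```python
# Sketch only: the segmenter and embedder are toy stand-ins, not recommendations.
from typing import List

import torch
from vcs import compute_vcs_score

def segmenter(text: str) -> List[str]:
    # Naive period-based sentence splitting
    return [s.strip() for s in text.split(".") if s.strip()]

def embedder(texts: List[str]) -> torch.Tensor:
    # Stand-in embedding: hashed bag-of-words; use a real model in practice
    out = torch.zeros(len(texts), 64)
    for i, t in enumerate(texts):
        for w in t.lower().split():
            out[i, hash(w) % 64] += 1.0
    return out

result = compute_vcs_score(
    reference_text="A fox jumps. The day is sunny.",
    generated_text="The fox jumped. It was sunny.",
    segmenter_fn=segmenter,
    embedding_fn_las=embedder,
    embedding_fn_gas=embedder,
    return_all_metrics=True,
)
# GAS, LAS, and NAS arrive as top-level keys alongside the combined VCS score
print({k: round(result[k], 4) for k in ("GAS", "LAS", "NAS", "VCS")})
```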

## 🚀 **Key Applications**

<div align="center">

**🎯 Transform Your Narrative Analysis Across Every Domain**

</div>

<table align="center" width="100%" style="border-collapse: collapse;">
<tr>
<td width="33%" align="center" style="padding: 15px;">
<div style="background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(102, 126, 234, 0.3);">
<h3 style="margin: 0 0 10px 0; font-size: 1.2em;">🎬 Video Intelligence</h3>
<p style="margin: 0; font-size: 0.9em; opacity: 0.9;">Evaluate video language models' narrative comprehension</p>
</div>
</td>
<td width="33%" align="center" style="padding: 15px;">
<div style="background: linear-gradient(135deg, #f093fb 0%, #f5576c 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(240, 147, 251, 0.3);">
<h3 style="margin: 0 0 10px 0; font-size: 1.2em;">📄 Document Similarity</h3>
<p style="margin: 0; font-size: 0.9em; opacity: 0.9;">Compare semantic alignment between long-form documents</p>
</div>
</td>
<td width="33%" align="center" style="padding: 15px;">
<div style="background: linear-gradient(135deg, #4facfe 0%, #00f2fe 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(79, 172, 254, 0.3);">
<h3 style="margin: 0 0 10px 0; font-size: 1.2em;">📖 Story Analysis</h3>
<p style="margin: 0; font-size: 0.9em; opacity: 0.9;">Measure narrative similarity between different stories</p>
</div>
</td>
</tr>
<tr>
<td width="33%" align="center" style="padding: 15px;">
<div style="background: linear-gradient(135deg, #fa709a 0%, #fee140 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(250, 112, 154, 0.3);">
<h3 style="margin: 0 0 10px 0; font-size: 1.2em;">🎓 Academic Research</h3>
<p style="margin: 0; font-size: 0.9em; opacity: 0.9;">Detect conceptual plagiarism and idea overlap</p>
</div>
</td>
<td width="33%" align="center" style="padding: 15px;">
<div style="background: linear-gradient(135deg, #a8edea 0%, #fed6e3 100%); border-radius: 15px; padding: 20px; color: #2d3748; box-shadow: 0 8px 25px rgba(168, 237, 234, 0.3);">
<h3 style="margin: 0 0 10px 0; font-size: 1.2em;">📝 Paragraph Analysis</h3>
<p style="margin: 0; font-size: 0.9em; opacity: 0.8;">Evaluate narrative coherence within text sections</p>
</div>
</td>
<td width="33%" align="center" style="padding: 15px;">
<div style="background: linear-gradient(135deg, #ffecd2 0%, #fcb69f 100%); border-radius: 15px; padding: 20px; color: #2d3748; box-shadow: 0 8px 25px rgba(255, 236, 210, 0.3);">
<h3 style="margin: 0 0 10px 0; font-size: 1.2em;">🎯 Short Caption Evaluation</h3>
<p style="margin: 0; font-size: 0.9em; opacity: 0.8;">Evaluate short captions and brief descriptions</p>
</div>
</td>
</tr>
</table>


---

## 🌟 Key Features

Explore the comprehensive capabilities that make VCS a powerful narrative evaluation toolkit. **To understand these features in detail, read our [research paper](https://arxiv.org/abs/placeholder-link), then visit our [interactive playground](https://multimodal-intelligence-lab.github.io/Video-Comprehension-Score/) to see them in action.**

<table width="100%" align="center" style="border: none; border-collapse: collapse;">
  <tr style="background-color: transparent;">
    <td style="padding: 10px; border: none; vertical-align: top;">
      <details style="border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);">
        <summary style="cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;">🧮 Comprehensive Metric Suite</summary>
        <p style="padding-top: 10px;">Computes VCS along with detailed breakdowns: GAS (global thematic similarity), LAS with precision/recall components, and NAS with distance-based and line-based sub-metrics. Access all internal calculations including penalty systems, mapping windows, and alignment paths.</p>
      </details>
    </td>
    <td style="padding: 10px; border: none; vertical-align: top;">
      <details style="border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);">
        <summary style="cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;">📊 Advanced Visualization Engine</summary>
        <p style="padding-top: 10px;">11 specialized visualization functions including similarity heatmaps, alignment analysis, best-match visualizations, narrative flow diagrams, and precision/recall breakdowns. Each metric component can be visualized with publication-quality plots.</p>
      </details>
    </td>
  </tr>
  <tr style="background-color: transparent;">
    <td style="padding: 10px; border: none; vertical-align: top;">
      <details style="border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);">
        <summary style="cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;">📋 Professional PDF Reports</summary>
        <p style="padding-top: 10px;">Generate comprehensive multi-page PDF reports with all metrics, visualizations, and analysis details. Supports both complete reports and customizable selective reports. Professional formatting suitable for research publications.</p>
      </details>
    </td>
    <td style="padding: 10px; border: none; vertical-align: top;">
      <details style="border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);">
        <summary style="cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;">⚙️ Flexible Configuration System</summary>
        <p style="padding-top: 10px;">Fine-tune evaluation with configurable parameters: chunk sizes, similarity thresholds, context windows, and Local Chronology Tolerance (LCT). Supports custom segmentation and embedding functions for domain-specific applications.</p>
      </details>
    </td>
  </tr>
</table>

---

## ⚡ Getting Started

Welcome to VCS Metrics! This guide will walk you through everything you need to start analyzing narrative similarity between texts. We'll cover installation, setup, and your first VCS analysis step by step.


---

### 📦 Step 1: Installation

Choose the installation method that fits your needs:

<table align="center" width="100%">
<tr>
<td width="50%" align="center">

### 🎯 **For Most Users** 
*Recommended if you just want to use VCS*

<details>
<summary><b>🖱️ Click to expand installation steps</b></summary>

<br>

**Terminal Installation:**
```bash
pip install video-comprehension-score
```

**Jupyter/Colab Installation:**
```bash
!pip install video-comprehension-score
```

<div align="center">

✅ **Ready in 30 seconds**  
🔥 **Zero configuration needed**  
⚡ **Instant access to all features**

</div>

</details>

</td>
<td width="50%" align="center">

### 🛠️ **For Developers**
*If you want to contribute or modify VCS*

<details>
<summary><b>🖱️ Click to expand development setup</b></summary>

<br>

**Terminal Installation:**
```bash
git clone https://github.com/hdubey-debug/vcs.git
cd vcs
pip install -e ".[dev]"
pre-commit install
```

**Jupyter/Colab Installation:**
```bash
!git clone https://github.com/hdubey-debug/vcs.git
%cd vcs
!pip install -e ".[dev]"
!pre-commit install
```

<div align="center">

🔧 **Latest features first**  
🧪 **Testing capabilities**  
🤝 **Contribution ready**

</div>

</details>

</td>
</tr>
</table>

---

### 🛠️ System Requirements

Before installing VCS, make sure your system meets these requirements:

<table align="center" width="100%">
<tr>
<td width="50%" align="center">

### 🐍 **Python**

<div style="background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 20px; border-radius: 12px; border: 2px solid #3b82f6;">

<img src="https://img.shields.io/badge/Python-3.10+-306998?style=for-the-badge&logo=python&logoColor=white" alt="Python"/>

**Required: Python 3.10 or higher**

VCS Metrics uses modern Python features and requires Python 3.10+. We recommend Python 3.11+ for optimal performance.

</div>

</td>
<td width="50%" align="center">

### 🔥 **PyTorch**

<div style="background: linear-gradient(145deg, #fed7d7, #fca5a5); padding: 20px; border-radius: 12px; border: 2px solid #dc2626;">

<img src="https://img.shields.io/badge/PyTorch-1.9+-ee4c2c?style=for-the-badge&logo=pytorch&logoColor=white" alt="PyTorch"/>

**Required: PyTorch 1.9.0+**

VCS needs PyTorch but doesn't install it automatically to avoid conflicts. Get it from the [official PyTorch website](https://pytorch.org/get-started/locally/).

**💡 Pro Tip:** In Google Colab, PyTorch is pre-installed!

</div>

</td>
</tr>
</table>
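If you are installing PyTorch yourself, the CPU-only build is a single command; for GPU builds, copy the exact command from the PyTorch selector instead (the line below is the generic CPU case, not a VCS-specific requirement):

```bash
# Generic CPU-only PyTorch install; use pytorch.org/get-started for CUDA builds
pip install torch
```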

<div align="center">
<table style="border: 2px solid #10b981; border-radius: 12px; background: linear-gradient(145deg, #d1fae5, #a7f3d0); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**💡 Note:** VCS automatically installs dependencies: numpy≥1.20.0, matplotlib≥3.5.0, seaborn≥0.11.0

</td>
</tr>
</table>
</div>

---

### 🔧 Step 2: Prepare Your Functions

Now that VCS is installed, you need to define two functions before you can use the VCS API. Here's how VCS works:

#### 📋 VCS API Overview

```python
from vcs import compute_vcs_score

result = compute_vcs_score(
    reference_text="Your reference text here",
    generated_text="Your generated text here", 
    segmenter_fn=your_segmenter_function,        # ← You provide this
    embedding_fn_las=your_embedding_function,    # ← You provide this
    embedding_fn_gas=your_embedding_function,    # ← You provide this
    return_all_metrics=True
)

print(f"VCS Score: {result['VCS']:.4f}")
```

As you can see, VCS requires two custom functions from you. Let's understand what each should do:


<table align="center" width="100%">
<tr>
<td width="50%" align="center">

### 🔪 **Segmenter Function**

<div style="background: linear-gradient(145deg, #fef3c7, #fde68a); padding: 20px; border-radius: 12px; border: 2px solid #f59e0b;">

**What it does:** Splits text into meaningful segments (sentences, paragraphs, etc.)

**Required signature:**
```python
def your_segmenter(text: str) -> List[str]:
    # Your implementation here
    return list_of_text_segments
```

**Arguments:** `text` (str) - Input text to be segmented  
**Returns:** `List[str]` - List of text segments  
**You can use:** Any library or model (NLTK, spaCy, custom logic, etc.)

</div>

</td>
<td width="50%" align="center">

### 🧠 **Embedding Function**

<div style="background: linear-gradient(145deg, #f3e8ff, #e9d5ff); padding: 20px; border-radius: 12px; border: 2px solid #7c3aed;">

**What it does:** Converts text segments into numerical vectors (embeddings)

**Required signature:**
```python
def your_embedder(texts: List[str]) -> torch.Tensor:
    # Your implementation here  
    return tensor_of_embeddings
```

**Arguments:** `texts` (List[str]) - List of text segments to embed  
**Returns:** `torch.Tensor` - Tensor of shape `(len(texts), embedding_dim)`  
**You can use:** Any embedding model (sentence-transformers, OpenAI, etc.)

</div>

</td>
</tr>
</table>
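As a concrete sketch of both signatures, here is one possible pairing: NLTK for sentence segmentation and sentence-transformers for embeddings. Both library and model choices are illustrative, not requirements:

```python
# Illustrative implementations of the two required functions.
# Requires: pip install nltk sentence-transformers
from typing import List

import nltk
import torch
from sentence_transformers import SentenceTransformer

nltk.download("punkt", quiet=True)  # tokenizer data ("punkt_tab" on newer NLTK)
_model = SentenceTransformer("all-MiniLM-L6-v2")  # small, CPU-friendly model

def nltk_segmenter(text: str) -> List[str]:
    # str -> List[str], as the segmenter signature requires
    return nltk.tokenize.sent_tokenize(text)

def st_embedder(texts: List[str]) -> torch.Tensor:
    # List[str] -> tensor of shape (len(texts), embedding_dim)
    return torch.tensor(_model.encode(texts), dtype=torch.float32)
```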

#### 🌟 Author Recommendations (2025)

For best results, we recommend these state-of-the-art models:

<div align="center">
<table style="border: 2px solid #dc2626; border-radius: 12px; background: linear-gradient(145deg, #fecaca, #fca5a5); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

⚠️ **Note:** These recommendations are current as of 2025. Always research the latest SOTA options.

</td>
</tr>
</table>
</div>

<table align="center" width="100%">
<tr>
<td width="50%" align="center">

### 🔪 **Segmentation Champion**

<div align="center">
<img src="https://img.shields.io/badge/Segment_Any_Text-SAT-ff6b6b?style=for-the-badge&logo=artificial-intelligence&logoColor=white" alt="SAT"/>
</div>

<div style="background: linear-gradient(145deg, #fff5f5, #fed7d7); padding: 20px; border-radius: 12px; border: 2px solid #e53e3e;">

**🏆 Recommended: Segment Any Text (SAT)**

✨ **Why we recommend SAT:**
- 🎯 State-of-the-art segmentation accuracy  
- ⚡ Intelligent boundary detection  
- 🧠 Context-aware text splitting  
- 🔬 Research-grade performance  

🔗 **Repository:** [github.com/segment-any-text/wtpsplit](https://github.com/segment-any-text/wtpsplit)

</div>

</td>
<td width="50%" align="center">

### 🧠 **Embedding Powerhouse**

<div align="center">
<img src="https://img.shields.io/badge/NVIDIA-NV--Embed--v2-76b900?style=for-the-badge&logo=nvidia&logoColor=white" alt="NV-Embed"/>
</div>

<div style="background: linear-gradient(145deg, #f0fff4, #c6f6d5); padding: 20px; border-radius: 12px; border: 2px solid #38a169;">

**🥇 Recommended: nv-embed-v2**

🌟 **Why we recommend nv-embed-v2:**
- 📊 Top performer on [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard)  
- 🚀 Superior semantic understanding  
- 💪 Robust multilingual support  
- ⚡ Excellent for VCS analysis  

🔗 **Model:** [nvidia/NV-Embed-v2](https://huggingface.co/nvidia/NV-Embed-v2)

</div>

</td>
</tr>
</table>
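If you adopt SAT, wiring it into the segmenter slot takes only a thin wrapper. The sketch below assumes wtpsplit's `SaT` class, its small `sat-3l-sm` checkpoint, and a `split` method that returns sentence strings; verify all three against the wtpsplit README before relying on it:

```python
# Sketch: SAT-backed segmenter via wtpsplit (API assumed; check its README).
from typing import List

from wtpsplit import SaT

_sat = SaT("sat-3l-sm")  # small SAT checkpoint; larger variants exist

def sat_segmenter(text: str) -> List[str]:
    # SaT.split(text) is expected to return a list of sentence strings
    return [s.strip() for s in _sat.split(text) if s.strip()]
```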

<div align="center">
<table style="border: 2px solid #3182ce; border-radius: 12px; background: linear-gradient(145deg, #bee3f8, #90cdf4); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**💡 Alternative Options:** NLTK, spaCy, sentence-transformers, or build your own custom functions!

</td>
</tr>
</table>
</div>

---

### 💻 Step 3: Run Your First VCS Analysis

Now let's see VCS in action with a complete working example:

<div align="center">
<table style="border: 2px solid #059669; border-radius: 12px; background: linear-gradient(145deg, #d1fae5, #a7f3d0); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**⚡ Performance Notes**  
*SOTA models require a GPU. For CPU testing, this example uses lightweight alternatives.*

</td>
</tr>
</table>
</div>

<details>
<summary><h3><b>🚀 Quick Example</b> - Click to expand complete tutorial</h3></summary>

<div style="background: linear-gradient(145deg, #1f2937, #111827); padding: 25px; border-radius: 15px; border: 2px solid #6366f1;">

### 🎯 **Complete Working Example**
*Copy, paste, and run this code to see VCS in action*

```python
# Fix import path issue if running from vcs/ root directory
import sys
import os
if os.path.basename(os.getcwd()) == 'vcs' and os.path.exists('src/vcs'):
    sys.path.insert(0, 'src')
    print("๐Ÿ”ง Fixed import path for development directory")

# Test the installation
try:
    import vcs
    print("✅ VCS package imported successfully!")

    # Test main function availability
    if hasattr(vcs, 'compute_vcs_score'):
        print("✅ Main function 'compute_vcs_score' is available!")
    else:
        print("⚠️ Main function not found - there might be an installation issue")

    # Try to get version
    try:
        print(f"📦 Version: {vcs.__version__}")
    except AttributeError:
        print("📦 Version: Unable to determine (this is normal for development installs)")

except ImportError as e:
    print(f"❌ Import failed: {e}")
    print("💡 Make sure you:")
    print("   1. Installed VCS correctly: pip install -e .[dev]")
    print("   2. Restarted your notebook kernel")
    print("   3. You're NOT in the root vcs/ directory (this causes import conflicts)")

# Import required libraries
import torch
from typing import List

# Define lightweight segmenter function
def simple_segmenter(text: str) -> List[str]:
    """
    Simple sentence segmenter using period splitting.
    
    Args:
        text: Input text to segment
        
    Returns:
        List of text segments
    """
    # Split by periods and clean up
    segments = [s.strip() for s in text.split('.') if s.strip()]
    return segments

# Define lightweight embedding function using sentence-transformers
def lightweight_embedding_function(texts: List[str]) -> torch.Tensor:
    """
    Lightweight embedding function using sentence-transformers.
    
    Args:
        texts: List of text segments to embed
        
    Returns:
        PyTorch tensor of shape (len(texts), embedding_dim)
    """
    try:
        from sentence_transformers import SentenceTransformer
        
        # Use a lightweight model (only downloads ~80MB)
        model = SentenceTransformer('all-MiniLM-L6-v2')
        
        # Generate embeddings
        embeddings = model.encode(texts)
        return torch.tensor(embeddings, dtype=torch.float32)
        
    except ImportError:
        print("โš ๏ธ sentence-transformers not found. Installing...")
        import subprocess
        import sys
        subprocess.check_call([sys.executable, "-m", "pip", "install", "sentence-transformers"])
        
        # Try again after installation
        from sentence_transformers import SentenceTransformer
        model = SentenceTransformer('all-MiniLM-L6-v2')
        embeddings = model.encode(texts)
        return torch.tensor(embeddings, dtype=torch.float32)

# Example texts
reference_text = """
The quick brown fox jumps over the lazy dog.
It was a beautiful sunny day in the forest.
The fox was looking for food for its family.
"""

generated_text = """
A brown fox jumped over a sleeping dog.
The weather was nice and sunny in the woods.
The fox needed to find food for its cubs.
"""

# Compute VCS score
print("๐Ÿง  Computing VCS score...")
try:
    result = vcs.compute_vcs_score(
        reference_text=reference_text,
        generated_text=generated_text,
        segmenter_fn=simple_segmenter,
        embedding_fn_las=lightweight_embedding_function,
        embedding_fn_gas=lightweight_embedding_function,
        return_all_metrics=True,
        return_internals=True
    )
    
    print("๐ŸŽฏ VCS Results:")
    print(f"VCS Score: {result['VCS']:.4f}")
    print(f"GAS Score: {result['GAS']:.4f}")
    print(f"LAS Score: {result['LAS']:.4f}")
    print(f"NAS Score: {result['NAS']:.4f}")
    print("โœ… VCS is working correctly!")
    
    # Generate visualization (optional)
    if 'internals' in result:
        try:
            fig = vcs.visualize_metrics_summary(result['internals'])
            print("๐Ÿ“Š Visualization generated successfully!")
            # fig.show()  # Uncomment to display
        except Exception as viz_error:
            print(f"โš ๏ธ Visualization failed (this is normal in some environments): {viz_error}")
    
except Exception as e:
    print(f"โŒ Error running VCS: {e}")
    print("๐Ÿ’ก Make sure PyTorch is installed and try restarting your kernel")
```

<div align="center">
<table style="border: 2px solid #3b82f6; border-radius: 12px; background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**📏 Scale Note:** This example uses small texts for illustration; VCS excels with long-form content!  
**⚠️ Import Tip:** Running from the `vcs/` root? The example includes an automatic path fix.

</td>
</tr>
</table>
</div>

</div>

</details>

---

## ⚙️ Advanced Configuration

Once you're comfortable with the basics, you can fine-tune VCS behavior for your specific use case:

<table align="center" width="100%">
<tr>
<td width="33%" align="center">

### 🎯 **Core Parameters**

<div style="background: linear-gradient(145deg, #ede9fe, #ddd6fe); padding: 20px; border-radius: 12px; border: 2px solid #7c3aed;">

**🎛️ Essential Controls:**

| Parameter | Default | Purpose |
|:----------|:-------:|:--------|
| `chunk_size` | 1 | Segment grouping |
| `context_cutoff_value` | 0.6 | Similarity threshold |
| `context_window_control` | 4.0 | Context window size |
| `lct` | 0 | Narrative reordering tolerance |

</div>

</td>
<td width="33%" align="center">

### 📊 **Return All Metrics**

<div style="background: linear-gradient(145deg, #fef3c7, #fde68a); padding: 20px; border-radius: 12px; border: 2px solid #f59e0b;">

**🎛️ Control Parameter:**

| Parameter | Default | Purpose |
|:----------|:-------:|:--------|
| `return_all_metrics` | False | Return detailed metric breakdown |

**When set to `True`, you get:**
- Individual GAS, LAS, NAS scores
- LAS precision and recall components
- Distance-based and line-based NAS sub-metrics
- Complete metric breakdown for analysis

</div>

</td>
<td width="33%" align="center">

### 🔍 **Return Internals**

<div style="background: linear-gradient(145deg, #e0f2fe, #b3e5fc); padding: 20px; border-radius: 12px; border: 2px solid #0288d1;">

**🎛️ Control Parameter:**

| Parameter | Default | Purpose |
|:----------|:-------:|:--------|
| `return_internals` | False | Return internal computation data |

**When set to `True`, you get:**
- Similarity matrices and alignment paths
- Mapping windows and penalty calculations
- Text chunks and segmentation details
- All data needed for visualization

</div>

</td>
</tr>
</table>

<table align="center" width="100%" style="margin-top: 20px;">
<tr>
<td width="100%">
<div align="center">
<h3>🚀 Example Configuration</h3>
</div>
<div style="background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 20px; border-radius: 12px; border: 2px solid #059669; text-align: left;">

```python
# 🎯 Comprehensive configuration with all features enabled
result = compute_vcs_score(
    reference_text=ref_text,
    generated_text=gen_text,
    segmenter_fn=segmenter,
    embedding_fn_las=embedder,
    embedding_fn_gas=embedder,
    chunk_size=2,                  # Group segments
    context_cutoff_value=0.7,      # Higher threshold
    context_window_control=3.0,    # Tighter windows
    lct=1,                         # Some reordering OK
    return_all_metrics=True,       # Get detailed breakdown
    return_internals=True          # Get visualization data
)
```

</div>

</td>
</tr>
</table>

<div align="center">
<table style="border: 2px solid #3b82f6; border-radius: 12px; background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**📚 For complete API documentation and visualization guides, visit our [API Documentation](https://multimodal-intelligence-lab.github.io/Video-Comprehension-Score/)**

</td>
</tr>
</table>
</div>

---

## ❓ Frequently Asked Questions

<details>
<summary><strong>🤔 How does VCS differ from BLEU/ROUGE?</strong></summary>
<p>Unlike BLEU and ROUGE which rely on hard n-gram matching, VCS utilizes latent space matching by comparing embeddings at both global and local chunk levels. VCS also evaluates the chronological order of content chunks and combines these three dimensions to generate a comprehensive final score that better captures semantic similarity and narrative structure.</p>
</details>

<details>
<summary><strong>⚡ What's the minimum text length for VCS?</strong></summary>
<p>VCS works with any text length, but it's optimized for longer texts (100+ words) where narrative structure is important. For very short texts, simpler metrics might be more appropriate.</p>
</details>

<details>
<summary><strong>📏 What's the maximum text length for VCS?</strong></summary>
<p>There is no upper limit on text length for VCS. The framework is designed to handle texts of any size, from short paragraphs to extensive documents, making it suitable for large-scale narrative evaluation tasks.</p>
</details>

<details>
<summary><strong>🧠 Which embedding models work best?</strong></summary>
<p>We recommend checking the <a href="https://huggingface.co/spaces/mteb/leaderboard">MTEB leaderboard</a> for the latest SOTA models. As of 2025, nv-embed-v2 and similar transformer-based models provide excellent results.</p>
</details>

<details>
<summary><strong>🎯 How do I control the granularity of comparison?</strong></summary>
<p>Use the <code>chunk_size</code> parameter to control the granularity of text comparison. A smaller chunk size provides more fine-grained analysis, while a larger chunk size offers broader, more general comparisons. The default value is 1 for maximum granularity (see the sketch after the next question).</p>
</details>

<details>
<summary><strong>⏱️ How do I control the strictness of chronological matching?</strong></summary>
<p>Use the <code>lct</code> (Local Chronology Tolerance) parameter to control chronological matching strictness. A higher LCT value means more lenient chronological ordering, allowing for greater flexibility in narrative sequence evaluation. The default value is 0 for strict chronological matching; a short sketch follows below.</p>
</details>
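Both knobs are ordinary keyword arguments to `compute_vcs_score`. A sketch of a coarser, more chronology-tolerant run (the helper functions are assumed to be the ones defined in the Getting Started example):

```python
# Coarser granularity plus lenient chronology; `simple_segmenter` and
# `lightweight_embedding_function` are the Getting Started helpers.
result = vcs.compute_vcs_score(
    reference_text=reference_text,
    generated_text=generated_text,
    segmenter_fn=simple_segmenter,
    embedding_fn_las=lightweight_embedding_function,
    embedding_fn_gas=lightweight_embedding_function,
    chunk_size=2,  # compare pairs of segments instead of single segments
    lct=1,         # tolerate mild reordering of narrative events
)
```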

<details>
<summary><strong>🔗 Can I use different embedding functions for GAS and LAS?</strong></summary>
<p>Yes, you can specify different embedding functions for Global Alignment Score (GAS) and Local Alignment Score (LAS) using the <code>embedding_fn_gas</code> and <code>embedding_fn_las</code> parameters respectively. This allows you to optimize each component with models best suited for their specific evaluation tasks, as in the sketch below.</p>
</details>
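For instance, you might pair a heavyweight model for the single global comparison with a faster one for the many chunk-level comparisons. `large_embedder` and `fast_embedder` here are hypothetical wrappers you would define yourself, following the signature from Step 2:

```python
# Hypothetical embedders: `large_embedder` (e.g. an NV-Embed-v2 wrapper) for
# GAS, `fast_embedder` (e.g. a MiniLM wrapper) for LAS. Both must have the
# List[str] -> torch.Tensor signature shown in Step 2.
result = vcs.compute_vcs_score(
    reference_text=reference_text,
    generated_text=generated_text,
    segmenter_fn=simple_segmenter,
    embedding_fn_gas=large_embedder,  # one global comparison: quality matters
    embedding_fn_las=fast_embedder,   # many local comparisons: speed matters
)
```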

---

## 🏗️ Project Structure

```
vcs/
├── 📁 src/vcs/                  # Main package source code
│   ├── 📄 __init__.py           # Package initialization
│   ├── 📄 scorer.py             # Main VCS API entry point
│   ├── 📄 _config.py            # Configuration settings
│   ├── 📁 _metrics/             # Core VCS metrics implementations
│   │   ├── 📁 _gas/             # Global Alignment Score
│   │   ├── 📁 _las/             # Local Alignment Score
│   │   ├── 📁 _nas/             # Narrative Alignment Score with components
│   │   │   └── 📁 _nas_components/  # Distance NAS, Line NAS, Regularize NAS
│   │   └── 📁 _vcs/             # Combined VCS computation
│   ├── 📁 _visualize_vcs/       # Comprehensive visualization suite
│   │   ├── 📁 _similarity_matrix/  # Similarity matrix visualizations
│   │   ├── 📁 _best_match/      # Best match analysis plots
│   │   ├── 📁 _distance_nas/    # Distance-based NAS visualizations
│   │   ├── 📁 _line_nas/        # Line-based NAS visualizations
│   │   ├── 📁 _mapping_windows/ # Context window visualizations
│   │   ├── 📁 _metrics_summary/ # Overall metrics summary plots
│   │   ├── 📁 _pdf_report/      # PDF report generation
│   │   ├── 📁 _text_chunks/     # Text chunk visualizations
│   │   ├── 📁 _window_regularizer/ # Window regularizer plots
│   │   ├── 📁 _las/             # LAS-specific visualizations
│   │   └── 📁 _config/          # Visualization configuration
│   ├── 📁 _segmenting/          # Text segmentation utilities
│   ├── 📁 _matching/            # Optimal text matching algorithms
│   ├── 📁 _mapping_windows/     # Context window management
│   └── 📁 _utils/               # Helper utilities
├── 📁 docs/                     # Documentation and interactive demos
│   ├── 📄 index.html            # Main documentation website
│   ├── 📁 pages/                # Documentation pages
│   │   ├── 📄 api.html          # API reference
│   │   ├── 📄 playground.html   # Interactive playground
│   │   └── 📄 example.html      # Usage examples
│   ├── 📁 widgets/              # Interactive visualization widgets
│   ├── 📁 sphinx/               # Sphinx documentation source
│   └── 📁 assets/               # Documentation assets (CSS, JS, videos)
├── 📁 .github/                  # GitHub configuration
│   ├── 📁 assets/               # README assets (images, gifs)
│   ├── 📁 scripts/              # GitHub automation scripts
│   └── 📁 workflows/            # CI/CD automation pipelines
│       ├── 📄 test.yml          # Continuous testing
│       ├── 📄 publish.yml       # Package publishing
│       └── 📄 deploy-docs.yml   # Documentation deployment
├── 📄 pyproject.toml           # Package configuration & dependencies
├── 📄 CONTRIBUTING.md          # Development contribution guide
├── 📄 DEPLOYMENT.md            # Release and deployment guide
├── 📄 CHANGELOG.md             # Version history and changes
├── 📄 MANIFEST.in              # Package manifest
├── 📄 tag_version.py           # Version tagging script
├── 📄 LICENSE                  # MIT license
└── 📄 README.md                # This documentation
```

---

## 🚀 Development & Contributing

We welcome contributions to VCS Metrics! Whether you're fixing bugs, adding features, or improving documentation, here's how to get started.

### 🛠️ Quick Development Setup

<details>
<summary><b>🖱️ Click to expand development setup</b></summary>

<br>

```bash
# 1. Clone and setup
git clone https://github.com/hdubey-debug/vcs.git
cd vcs
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# 2. Install development dependencies
pip install -e .[dev]

# 3. Create your feature branch
git checkout -b feature/your-feature-name

# 4. Make your changes
# Edit files in src/vcs/
# Add tests if needed
# Update docs if necessary

# 5. Run quality checks
black src/ && isort src/ && flake8 src/ && mypy src/

# 6. Commit with semantic format
git commit -m "minor: add new awesome feature"

# 7. Push and create PR
git push origin feature/your-feature-name
```

</details>

### 📋 Contribution Workflow

<table align="center" width="100%">
<tr>
<td width="50%" align="center">

### 🔄 **Development Process**

<div style="background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 20px; border-radius: 12px; border: 2px solid #3b82f6;">

**1. Fork & Clone**  
**2. Create Feature Branch**  
**3. Make Changes**  
**4. Write Tests**  
**5. Submit PR**  
**6. Code Review**  
**7. Merge to Main**  

✅ **Automated testing on every PR**  
✅ **Fast feedback in ~2-3 minutes**

</div>

</td>
<td width="50%" align="center">

### 📦 **Release Process**

<div style="background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 20px; border-radius: 12px; border: 2px solid #059669;">

**1. Semantic Commit Messages**  
**2. GitHub Release Creation**  
**3. Automated Version Calculation**  
**4. Package Building**  
**5. TestPyPI Publishing**  
**6. Production Release**  

🚀 **Industry-standard CI/CD pipeline**  
⚡ **Zero manual version management**

</div>

</td>
</tr>
</table>

### 💡 Semantic Commit Format

We use semantic commits for automatic version bumping:

<div align="center">
<table style="border: 2px solid #7c3aed; border-radius: 12px; background: linear-gradient(145deg, #f3e8ff, #e9d5ff); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

| **Commit Type** | **Version Bump** | **Example** |
|:---|:---:|:---|
| `minor: description` | New features | `1.0.4 → 1.1.0` |
| `major: description` | Breaking changes | `1.0.4 → 2.0.0` |
| `anything else` | Bug fixes (default) | `1.0.4 → 1.0.5` |

</td>
</tr>
</table>
</div>

### 🔧 Automated Testing & CI/CD

Our comprehensive CI/CD pipeline ensures code quality and reliability on every commit:

<div align="center">
<table style="border: 2px solid #059669; border-radius: 12px; background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 20px; margin: 20px 0;">
<tr>
<td align="center">

### 🚀 **What Gets Tested**

**✅ Matrix Testing** - Python 3.11 & 3.12 compatibility  
**✅ Package Validation** - Import testing & API availability  
**✅ Integration Testing** - Full getting-started example  
**✅ Code Quality** - Flake8 linting & complexity checks  
**✅ Build Testing** - Package build verification  

**🔄 Triggers:** Every push and pull request to `main`

</td>
</tr>
</table>
</div>

<div align="center">
<table style="border: 2px solid #059669; border-radius: 12px; background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

[![Tests](https://img.shields.io/github/actions/workflow/status/hdubey-debug/vcs/test.yml?branch=main&label=Tests&logo=github-actions&logoColor=white&style=for-the-badge)](https://github.com/hdubey-debug/vcs/actions/workflows/test.yml)
[![Build](https://img.shields.io/github/actions/workflow/status/hdubey-debug/vcs/publish.yml?label=Build&logo=github-actions&logoColor=white&style=for-the-badge)](https://github.com/hdubey-debug/vcs/actions/workflows/publish.yml)

**✅ Automated testing ensures every change is production-ready**

</td>
</tr>
</table>
</div>

### 📖 Detailed Guides

For comprehensive information about contributing and development:

<div align="center">

[![Contributing Guide](https://img.shields.io/badge/📖_Full_Contributing_Guide-2563eb?style=for-the-badge&logo=gitbook&logoColor=white)](./CONTRIBUTING.md)
[![Deployment Guide](https://img.shields.io/badge/🚀_Deployment_Guide-059669?style=for-the-badge&logo=rocket&logoColor=white)](./DEPLOYMENT.md)

</div>

### 🤝 Getting Help

<table align="center" width="100%">
<tr>
<td width="33%" align="center">

**🐛 Bug Reports**  
[Create GitHub Issue](https://github.com/hdubey-debug/vcs/issues)

</td>
<td width="33%" align="center">

**💬 Questions**  
[GitHub Discussions](https://github.com/hdubey-debug/vcs/discussions)

</td>
<td width="33%" align="center">

**💡 Feature Requests**  
[Feature Request Issue](https://github.com/hdubey-debug/vcs/issues/new)

</td>
</tr>
</table>

---

## 📚 Citation

If you use VCS Metrics in your research, please cite:

```bibtex
@software{vcs_metrics_2024,
  title = {VCS Metrics: Video Comprehension Score for Text Similarity Evaluation},
  author = {Dubey, Harsh and Ali, Mukhtiar and Mishra, Sugam and Pack, Chulwoo},
  year = {2024},
  institution = {South Dakota State University},
  url = {https://github.com/hdubey-debug/vcs},
  note = {Python package for narrative similarity evaluation}
}
```

---

## 🤖 CLIP-CC Ecosystem Integration

VCS is designed to work seamlessly with [CLIP-CC Dataset](https://github.com/hdubey-debug/CLIP-CC) for comprehensive video understanding evaluation.

<div align="center">
<table style="border: 2px solid #7c3aed; border-radius: 12px; background: linear-gradient(145deg, #f3e8ff, #e9d5ff); padding: 20px; margin: 20px 0;">
<tr>
<td align="center">

[![CLIP-CC Dataset](https://img.shields.io/badge/🤖_Companion_Dataset-CLIP--CC-9333ea?style=for-the-badge&logo=python&logoColor=white)](https://github.com/hdubey-debug/CLIP-CC)

**🔄 Perfect Integration: VCS + CLIP-CC**
- 🎥 **CLIP-CC provides the data** → Rich video dataset with human summaries
- 🔍 **VCS provides the evaluation** → Advanced narrative comprehension metrics  
- 🏆 **Together: Complete research pipeline** → From data loading to evaluation

</td>
</tr>
</table>
</div>

---

## 🏆 Meet Our Contributors

<div align="center">

### 🌟 **The VCS Team - Building the Future of Text Similarity**

</div>

<table>
<tr>
<td align="center">

<a href="https://github.com/hdubey-debug">
  <img src="https://github.com/hdubey-debug.png" width="100" height="100" style="border-radius: 50%;"/>
</a>

**Harsh Dubey**  
*Lead Developer & Research Scientist*  
*South Dakota State University*

| Commits | Lines | Files |
|:---:|:---:|:---:|
| **2** | **49K** | **171** |

**📋 Key Work:**
• VCS Algorithm Architecture  
• Visualization Engine  
• LAS, GAS, and NAS Metrics  

[![GitHub](https://img.shields.io/badge/-GitHub-14b8a6?style=flat&logo=github)](https://github.com/hdubey-debug)

</td>
</tr>
</table>

<div align="center">

### 🤖 **Automated Contributors**

| **Contributor** | **Role** | **Contributions** | **Badge** |
|:---:|:---:|:---:|:---:|
| 🤖 **GitHub Actions** | CI/CD Automation | Clean history setup | [![Bot](https://img.shields.io/badge/Bot-Automated_Testing-6c5ce7?style=flat&logo=github-actions&logoColor=white)](#) |

### 📊 **Contribution Analytics**

[![Contributors](https://img.shields.io/github/contributors/hdubey-debug/vcs?style=for-the-badge&color=14b8a6&labelColor=0f172a)](https://github.com/hdubey-debug/vcs/graphs/contributors)
[![Commit Activity](https://img.shields.io/github/commit-activity/m/hdubey-debug/vcs?style=for-the-badge&color=ff6b6b&labelColor=0f172a)](https://github.com/hdubey-debug/vcs/pulse)
[![Last Commit](https://img.shields.io/github/last-commit/hdubey-debug/vcs?style=for-the-badge&color=4ecdc4&labelColor=0f172a)](https://github.com/hdubey-debug/vcs/commits)
[![Code Frequency](https://img.shields.io/github/languages/count/hdubey-debug/vcs?style=for-the-badge&color=f9ca24&labelColor=0f172a)](https://github.com/hdubey-debug/vcs)

</div>

---

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

---

### 🌟 **Made with ❤️ by the VCS Team**

**Authors**: Harsh Dubey, Mukhtiar Ali, Sugam Mishra, and Chulwoo Pack  
**Institution**: South Dakota State University  
**Year**: 2024

[⭐ Star this repo](https://github.com/hdubey-debug/vcs) • [🐛 Report Bug](https://github.com/hdubey-debug/vcs/issues) • [💡 Request Feature](https://github.com/hdubey-debug/vcs/issues) • [💬 Community Q&A](https://github.com/hdubey-debug/vcs/discussions)

</div>

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "video-comprehension-score",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.8",
    "maintainer_email": "Chulwoo Pack <chulwoo.pack@sdstate.edu>",
    "keywords": "text-similarity, narrative-analysis, nlp, video-comprehension, text-evaluation, semantic-similarity, alignment-metrics",
    "author": null,
    "author_email": "Harsh Dubey <had7143@gmail.com>",
    "download_url": "https://files.pythonhosted.org/packages/42/3c/e44e8b66aef053b5e36b86dee3bd1574a39b17ab57915dfc9757ff22a1e5/video_comprehension_score-1.0.0.tar.gz",
    "platform": null,
    "description": "<div align=\"center\">\n  <a href=\"https://github.com/hdubey-debug/vcs\">\n    <img src=\".github/assets/vcs.gif\" alt=\"VCS Process Flow\" width=\"100%\"/>\n  </a>\n  <p align=\"center\">\n    <em>A Comprehensive Python Library for Narrative Similarity Evaluation between two very long descriptions </em>\n    <br />\n  </p>\n</div>\n\n<div align=\"center\">\n\n[![PyPI version](https://img.shields.io/pypi/v/video-comprehension-score?color=teal&style=for-the-badge)](https://badge.fury.io/py/video-comprehension-score)\n[![Python 3.10+](https://img.shields.io/badge/python-3.10+-teal?style=for-the-badge&logo=python&logoColor=white)](https://www.python.org/downloads/)\n[![License: MIT](https://img.shields.io/badge/License-MIT-teal?style=for-the-badge)](https://opensource.org/licenses/MIT)\n[![Documentation](https://img.shields.io/badge/docs-github.io-teal?style=for-the-badge&logo=gitbook&logoColor=white)](https://multimodal-intelligence-lab.github.io/Video-Comprehension-Score/)\n\n</div>\n\n<p align=\"center\">\n  <a href=\"https://arxiv.org/abs/placeholder-link\">\ud83d\udcc4 Research Paper</a>\n  \u00b7\n  <a href=\"https://colab.research.google.com/drive/1l6GXWNBGFM1UwGohnIu1b071bn8ekJIf?usp=sharing\">\ud83d\udcd3 Interactive Notebook</a>\n</p>\n\n---\n\n## \ud83e\udd14 What is VCS?\n\nRecent advances in Large Video Language Models (LVLMs) have significantly enhanced automated video understanding, enabling detailed, long-form narratives of complex video content. However, accurately evaluating whether these models genuinely comprehend the video's narrative\u2014its events, entities, interactions, and chronological coherence\u2014remains challenging.\n\n**Why Existing Metrics Fall Short:**\n\n- **N-gram Metrics (e.g., BLEU, ROUGE, CIDEr)**: Primarily measure lexical overlap, penalizing valid linguistic variations and inadequately evaluating narrative chronology.\n\n- **Embedding-based Metrics (e.g., BERTScore, SBERT)**: Improve semantic sensitivity but struggle with extended context, detailed content alignment, and narrative sequencing.\n\n- **LLM-based Evaluations**: Often inconsistent, lacking clear criteria for narrative structure and chronology assessments.\n\nMoreover, traditional benchmarks largely rely on question-answering tasks, which only test isolated events or entities rather than holistic video comprehension. A model answering specific questions correctly does not necessarily demonstrate understanding of the overall narrative or the intricate interplay of events.\n\n**Introducing VCS (Video Comprehension Score):**\n\nVCS is a Python library specifically designed to overcome these challenges by evaluating narrative comprehension through direct comparison of extensive, detailed video descriptions generated by LVLMs against human-written references. 
Unlike traditional metrics, VCS assesses whether models capture the overall narrative structure, event sequencing, and thematic coherence, not just lexical or isolated semantic matches.\n\n**Core Components of VCS:**\n\n- \ud83c\udf0d **Global Alignment Score (GAS)**: Captures overall thematic alignment, tolerating stylistic variations without penalizing valid linguistic differences.\n\n- \ud83c\udfaf **Local Alignment Score (LAS)**: Checks detailed semantic correspondence at a chunk-level, allowing minor descriptive variations while penalizing significant inaccuracies or omissions.\n\n- \ud83d\udcd6 **Narrative Alignment Score (NAS)**: Evaluates chronological consistency, balancing the need for both strict event sequencing and permissible narrative flexibility.\n\nInitially developed for evaluating video comprehension by comparing generated and human-written video narratives, VCS is versatile enough for broader applications, including document-level narrative comparisons, analysis of extensive narrative content, and various other narrative similarity tasks.\n\n## \ud83d\ude80 **Key Applications**\n\n<div align=\"center\">\n\n**\ud83c\udfaf Transform Your Narrative Analysis Across Every Domain**\n\n</div>\n\n<table align=\"center\" width=\"100%\" style=\"border-collapse: collapse;\">\n<tr>\n<td width=\"33%\" align=\"center\" style=\"padding: 15px;\">\n<div style=\"background: linear-gradient(135deg, #667eea 0%, #764ba2 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(102, 126, 234, 0.3);\">\n<h3 style=\"margin: 0 0 10px 0; font-size: 1.2em;\">\ud83c\udfac Video Intelligence</h3>\n<p style=\"margin: 0; font-size: 0.9em; opacity: 0.9;\">Evaluate video language models' narrative comprehension</p>\n</div>\n</td>\n<td width=\"33%\" align=\"center\" style=\"padding: 15px;\">\n<div style=\"background: linear-gradient(135deg, #f093fb 0%, #f5576c 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(240, 147, 251, 0.3);\">\n<h3 style=\"margin: 0 0 10px 0; font-size: 1.2em;\">\ud83d\udcc4 Document Similarity</h3>\n<p style=\"margin: 0; font-size: 0.9em; opacity: 0.9;\">Compare semantic alignment between long-form documents</p>\n</div>\n</td>\n<td width=\"33%\" align=\"center\" style=\"padding: 15px;\">\n<div style=\"background: linear-gradient(135deg, #4facfe 0%, #00f2fe 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(79, 172, 254, 0.3);\">\n<h3 style=\"margin: 0 0 10px 0; font-size: 1.2em;\">\ud83d\udcd6 Story Analysis</h3>\n<p style=\"margin: 0; font-size: 0.9em; opacity: 0.9;\">Measure narrative similarity between different stories</p>\n</div>\n</td>\n</tr>\n<tr>\n<td width=\"33%\" align=\"center\" style=\"padding: 15px;\">\n<div style=\"background: linear-gradient(135deg, #fa709a 0%, #fee140 100%); border-radius: 15px; padding: 20px; color: white; box-shadow: 0 8px 25px rgba(250, 112, 154, 0.3);\">\n<h3 style=\"margin: 0 0 10px 0; font-size: 1.2em;\">\ud83c\udf93 Academic Research</h3>\n<p style=\"margin: 0; font-size: 0.9em; opacity: 0.9;\">Detect conceptual plagiarism and idea overlap</p>\n</div>\n</td>\n<td width=\"33%\" align=\"center\" style=\"padding: 15px;\">\n<div style=\"background: linear-gradient(135deg, #a8edea 0%, #fed6e3 100%); border-radius: 15px; padding: 20px; color: #2d3748; box-shadow: 0 8px 25px rgba(168, 237, 234, 0.3);\">\n<h3 style=\"margin: 0 0 10px 0; font-size: 1.2em;\">\ud83d\udcdd Paragraph Analysis</h3>\n<p style=\"margin: 0; font-size: 0.9em; opacity: 
0.8;\">Evaluate narrative coherence within text sections</p>\n</div>\n</td>\n<td width=\"33%\" align=\"center\" style=\"padding: 15px;\">\n<div style=\"background: linear-gradient(135deg, #ffecd2 0%, #fcb69f 100%); border-radius: 15px; padding: 20px; color: #2d3748; box-shadow: 0 8px 25px rgba(255, 236, 210, 0.3);\">\n<h3 style=\"margin: 0 0 10px 0; font-size: 1.2em;\">\ud83c\udfaf Short Caption Evaluation</h3>\n<p style=\"margin: 0; font-size: 0.9em; opacity: 0.8;\">Evaluate short captions and brief descriptions</p>\n</div>\n</td>\n</tr>\n</table>\n\n\n---\n\n## \ud83c\udf1f Key Features\n\nExplore the comprehensive capabilities that make VCS a powerful narrative evaluation toolkit. **To understand these features in detail, read our [research paper](https://arxiv.org/abs/placeholder-link), then visit our [interactive playground](https://multimodal-intelligence-lab.github.io/Video-Comprehension-Score/) to see them in action.**\n\n<table width=\"100%\" align=\"center\" style=\"border: none; border-collapse: collapse;\">\n  <tr style=\"background-color: transparent;\">\n    <td style=\"padding: 10px; border: none; vertical-align: top;\">\n      <details style=\"border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);\">\n        <summary style=\"cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;\">\ud83e\uddee Comprehensive Metric Suite</summary>\n        <p style=\"padding-top: 10px;\">Computes VCS along with detailed breakdowns: GAS (global thematic similarity), LAS with precision/recall components, and NAS with distance-based and line-based sub-metrics. Access all internal calculations including penalty systems, mapping windows, and alignment paths.</p>\n      </details>\n    </td>\n    <td style=\"padding: 10px; border: none; vertical-align: top;\">\n      <details style=\"border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);\">\n        <summary style=\"cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;\">\ud83d\udcca Advanced Visualization Engine</summary>\n        <p style=\"padding-top: 10px;\">11 specialized visualization functions including similarity heatmaps, alignment analysis, best-match visualizations, narrative flow diagrams, and precision/recall breakdowns. Each metric component can be visualized with publication-quality plots.</p>\n      </details>\n    </td>\n  </tr>\n  <tr style=\"background-color: transparent;\">\n    <td style=\"padding: 10px; border: none; vertical-align: top;\">\n      <details style=\"border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);\">\n        <summary style=\"cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;\">\ud83d\udccb Professional PDF Reports</summary>\n        <p style=\"padding-top: 10px;\">Generate comprehensive multi-page PDF reports with all metrics, visualizations, and analysis details. Supports both complete reports and customizable selective reports. 
Professional formatting suitable for research publications.</p>\n      </details>\n    </td>\n    <td style=\"padding: 10px; border: none; vertical-align: top;\">\n      <details style=\"border: 1px solid #14b8a6; border-radius: 12px; padding: 20px; background: linear-gradient(145deg, #1f2937, #111827); color: #e5e7eb; box-shadow: 0 4px 6px rgba(0, 0, 0, 0.1);\">\n        <summary style=\"cursor: pointer; font-weight: bold; font-size: 1.2em; color: #6ee7b7;\">\u2699\ufe0f Flexible Configuration System</summary>\n        <p style=\"padding-top: 10px;\">Fine-tune evaluation with configurable parameters: chunk sizes, similarity thresholds, context windows, and Local Chronology Tolerance (LCT). Supports custom segmentation and embedding functions for domain-specific applications.</p>\n      </details>\n    </td>\n  </tr>\n</table>\n\n---\n\n## \u26a1 Getting Started\n\nWelcome to VCS Metrics! This guide will walk you through everything you need to start analyzing narrative similarity between texts. We'll cover installation, setup, and your first VCS analysis step by step.\n\n\n---\n\n### \ud83d\udce6 Step 1: Installation\n\nChoose the installation method that fits your needs:\n\n<table align=\"center\" width=\"100%\">\n<tr>\n<td width=\"50%\" align=\"center\">\n\n### \ud83c\udfaf **For Most Users** \n*Recommended if you just want to use VCS*\n\n<details>\n<summary><b>\ud83d\uddb1\ufe0f Click to expand installation steps</b></summary>\n\n<br>\n\n**Terminal Installation:**\n```bash\npip install video-comprehension-score\n```\n\n**Jupyter/Colab Installation:**\n```bash\n!pip install video-comprehension-score\n```\n\n<div align=\"center\">\n\n\u2705 **Ready in 30 seconds**  \n\ud83d\udd25 **Zero configuration needed**  \n\u26a1 **Instant access to all features**\n\n</div>\n\n</details>\n\n</td>\n<td width=\"50%\" align=\"center\">\n\n### \ud83d\udee0\ufe0f **For Developers**\n*If you want to contribute or modify VCS*\n\n<details>\n<summary><b>\ud83d\uddb1\ufe0f Click to expand development setup</b></summary>\n\n<br>\n\n**Terminal Installation:**\n```bash\ngit clone https://github.com/hdubey-debug/vcs.git\ncd vcs\npip install -e \".[dev]\"\npre-commit install\n```\n\n**Jupyter/Colab Installation:**\n```bash\n!git clone https://github.com/hdubey-debug/vcs.git\n%cd vcs\n!pip install -e \".[dev]\"\n!pre-commit install\n```\n\n<div align=\"center\">\n\n\ud83d\udd27 **Latest features first**  \n\ud83e\uddea **Testing capabilities**  \n\ud83e\udd1d **Contribution ready**\n\n</div>\n\n</details>\n\n</td>\n</tr>\n</table>\n\n---\n\n### \ud83d\udee0\ufe0f System Requirements\n\nBefore installing VCS, make sure your system meets these requirements:\n\n<table align=\"center\" width=\"100%\">\n<tr>\n<td width=\"50%\" align=\"center\">\n\n### \ud83d\udc0d **Python**\n\n<div style=\"background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 20px; border-radius: 12px; border: 2px solid #3b82f6;\">\n\n<img src=\"https://img.shields.io/badge/Python-3.10+-306998?style=for-the-badge&logo=python&logoColor=white\" alt=\"Python\"/>\n\n**Required: Python 3.10 or higher**\n\nVCS Metrics uses modern Python features and requires Python 3.10+. 
We recommend Python 3.11+ for optimal performance.\n\n</div>\n\n</td>\n<td width=\"50%\" align=\"center\">\n\n### \ud83d\udd25 **PyTorch**\n\n<div style=\"background: linear-gradient(145deg, #fed7d7, #fca5a5); padding: 20px; border-radius: 12px; border: 2px solid #dc2626;\">\n\n<img src=\"https://img.shields.io/badge/PyTorch-1.9+-ee4c2c?style=for-the-badge&logo=pytorch&logoColor=white\" alt=\"PyTorch\"/>\n\n**Required: PyTorch 1.9.0+**\n\nVCS needs PyTorch but doesn't install it automatically to avoid conflicts. Get it from the [official PyTorch website](https://pytorch.org/get-started/locally/).\n\n**\ud83d\udca1 Pro Tip:** In Google Colab, PyTorch is pre-installed!\n\n</div>\n\n</td>\n</tr>\n</table>\n\n<div align=\"center\">\n<table style=\"border: 2px solid #10b981; border-radius: 12px; background: linear-gradient(145deg, #d1fae5, #a7f3d0); padding: 15px; margin: 20px 0;\">\n<tr>\n<td align=\"center\">\n\n**\ud83d\udca1 Note:** VCS automatically installs dependencies: numpy\u22651.20.0, matplotlib\u22653.5.0, seaborn\u22650.11.0\n\n</td>\n</tr>\n</table>\n</div>\n\n---\n\n### \ud83d\udd27 Step 2: Prepare Your Functions\n\nNow that VCS is installed, you need to define two functions before you can use the VCS API. Here's how VCS works:\n\n#### \ud83d\udccb VCS API Overview\n\n```python\nfrom vcs import compute_vcs_score\n\nresult = compute_vcs_score(\n    reference_text=\"Your reference text here\",\n    generated_text=\"Your generated text here\", \n    segmenter_fn=your_segmenter_function,        # \u2190 You provide this\n    embedding_fn_las=your_embedding_function,    # \u2190 You provide this  \n    embedding_fn_gas=your_embedding_function,    # \u2190 You provide this\n    return_all_metrics=True\n)\n\nprint(f\"VCS Score: {result['VCS']:.4f}\")\n```\n\nAs you can see, VCS requires two custom functions from you. 
#### 🌟 Author Recommendations (2025)

For best results, we recommend these state-of-the-art models:

<div align="center">
<table style="border: 2px solid #dc2626; border-radius: 12px; background: linear-gradient(145deg, #fecaca, #fca5a5); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

⚠️ **Note:** These recommendations are current as of 2025. Always research the latest SOTA options.

</td>
</tr>
</table>
</div>

<table align="center" width="100%">
<tr>
<td width="50%" align="center">

### 🔪 **Segmentation Champion**

<div align="center">
<img src="https://img.shields.io/badge/Segment_Any_Text-SAT-ff6b6b?style=for-the-badge&logo=artificial-intelligence&logoColor=white" alt="SAT"/>
</div>

<div style="background: linear-gradient(145deg, #fff5f5, #fed7d7); padding: 20px; border-radius: 12px; border: 2px solid #e53e3e;">

**🏆 Recommended: Segment Any Text (SAT)**

✨ **Why we recommend SAT:**
- 🎯 State-of-the-art segmentation accuracy
- ⚡ Intelligent boundary detection
- 🧠 Context-aware text splitting
- 🔬 Research-grade performance

🔗 **Repository:** [github.com/segment-any-text/wtpsplit](https://github.com/segment-any-text/wtpsplit)

</div>

</td>
<td width="50%" align="center">

### 🧠 **Embedding Powerhouse**

<div align="center">
<img src="https://img.shields.io/badge/NVIDIA-NV--Embed--v2-76b900?style=for-the-badge&logo=nvidia&logoColor=white" alt="NV-Embed"/>
</div>

<div style="background: linear-gradient(145deg, #f0fff4, #c6f6d5); padding: 20px; border-radius: 12px; border: 2px solid #38a169;">

**🥇 Recommended: nv-embed-v2**

🌟 **Why we recommend nv-embed-v2:**
- 📊 Top performer on [MTEB Leaderboard](https://huggingface.co/spaces/mteb/leaderboard)
- 🚀 Superior semantic understanding
- 💪 Robust multilingual support
- ⚡ Excellent for VCS analysis

🔗 **Model:** [nvidia/NV-Embed-v2](https://huggingface.co/nvidia/NV-Embed-v2)

</div>

</td>
</tr>
</table>

<div align="center">
<table style="border: 2px solid #3182ce; border-radius: 12px; background: linear-gradient(145deg, #bee3f8, #90cdf4); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**💡 Alternative Options:** NLTK, spaCy, sentence-transformers, or build your own custom functions!

</td>
</tr>
</table>
</div>
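If you want to wire up the recommended models, a sketch along these lines should work, assuming wtpsplit's `SaT` interface and the sentence-transformers loader for NV-Embed-v2. Model names and required settings change, so check both projects' documentation (NV-Embed-v2's model card in particular notes task instructions and sequence-length settings):

```python
from typing import List

import torch
from sentence_transformers import SentenceTransformer
from wtpsplit import SaT

sat = SaT("sat-3l-sm")  # SAT segmentation model; see the wtpsplit repo for current variants
nv_embed = SentenceTransformer("nvidia/NV-Embed-v2", trust_remote_code=True)

def sat_segmenter(text: str) -> List[str]:
    # SaT.split returns a list of segments for a single input string
    return [s.strip() for s in sat.split(text) if s.strip()]

def nv_embedder(texts: List[str]) -> torch.Tensor:
    # Consult the NV-Embed-v2 model card for recommended encode settings
    return torch.tensor(nv_embed.encode(texts), dtype=torch.float32)
```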
---

### 💻 Step 3: Run Your First VCS Analysis

Now let's see VCS in action with a complete working example:

<div align="center">
<table style="border: 2px solid #059669; border-radius: 12px; background: linear-gradient(145deg, #d1fae5, #a7f3d0); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**⚡ Performance Notes**  
*SOTA models require a GPU. For CPU testing, the example below uses lightweight alternatives.*

</td>
</tr>
</table>
</div>
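One pragmatic pattern is to pick the embedding model based on the available hardware before running the example below (model names are illustrative):

```python
import torch
from sentence_transformers import SentenceTransformer

if torch.cuda.is_available():
    # Heavier SOTA model when a GPU is present (check its model card for setup details)
    model = SentenceTransformer("nvidia/NV-Embed-v2", trust_remote_code=True, device="cuda")
else:
    # Lightweight fallback that runs comfortably on CPU
    model = SentenceTransformer("all-MiniLM-L6-v2")
```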
<details>
<summary><h3><b>🚀 Quick Example</b> - Click to expand complete tutorial</h3></summary>

<div style="background: linear-gradient(145deg, #1f2937, #111827); padding: 25px; border-radius: 15px; border: 2px solid #6366f1;">

### 🎯 **Complete Working Example**
*Copy, paste, and run this code to see VCS in action*

```python
# Fix import path issue if running from vcs/ root directory
import sys
import os
if os.path.basename(os.getcwd()) == 'vcs' and os.path.exists('src/vcs'):
    sys.path.insert(0, 'src')
    print("🔧 Fixed import path for development directory")

# Test the installation
try:
    import vcs
    print("✅ VCS package imported successfully!")

    # Test main function availability
    if hasattr(vcs, 'compute_vcs_score'):
        print("✅ Main function 'compute_vcs_score' is available!")
    else:
        print("⚠️ Main function not found - there might be an installation issue")

    # Try to get version
    try:
        print(f"📦 Version: {vcs.__version__}")
    except AttributeError:
        print("📦 Version: Unable to determine (this is normal for development installs)")

except ImportError as e:
    print(f"❌ Import failed: {e}")
    print("💡 Make sure you:")
    print("   1. Installed VCS correctly: pip install -e .[dev]")
    print("   2. Restarted your notebook kernel")
    print("   3. You're NOT in the root vcs/ directory (this causes import conflicts)")

# Import required libraries
import torch
from typing import List

# Define lightweight segmenter function
def simple_segmenter(text: str) -> List[str]:
    """
    Simple sentence segmenter using period splitting.

    Args:
        text: Input text to segment

    Returns:
        List of text segments
    """
    # Split by periods and clean up
    segments = [s.strip() for s in text.split('.') if s.strip()]
    return segments

# Define lightweight embedding function using sentence-transformers
def lightweight_embedding_function(texts: List[str]) -> torch.Tensor:
    """
    Lightweight embedding function using sentence-transformers.

    Args:
        texts: List of text segments to embed

    Returns:
        PyTorch tensor of shape (len(texts), embedding_dim)
    """
    try:
        from sentence_transformers import SentenceTransformer

        # Use a lightweight model (only downloads ~80MB).
        # For simplicity the model is reloaded on every call;
        # cache it at module level for real workloads.
        model = SentenceTransformer('all-MiniLM-L6-v2')

        # Generate embeddings
        embeddings = model.encode(texts)
        return torch.tensor(embeddings, dtype=torch.float32)

    except ImportError:
        print("⚠️ sentence-transformers not found. Installing...")
        import subprocess
        import sys
        subprocess.check_call([sys.executable, "-m", "pip", "install", "sentence-transformers"])

        # Try again after installation
        from sentence_transformers import SentenceTransformer
        model = SentenceTransformer('all-MiniLM-L6-v2')
        embeddings = model.encode(texts)
        return torch.tensor(embeddings, dtype=torch.float32)

# Example texts
reference_text = """
The quick brown fox jumps over the lazy dog.
It was a beautiful sunny day in the forest.
The fox was looking for food for its family.
"""

generated_text = """
A brown fox jumped over a sleeping dog.
The weather was nice and sunny in the woods.
The fox needed to find food for its cubs.
"""

# Compute VCS score
print("🧠 Computing VCS score...")
try:
    result = vcs.compute_vcs_score(
        reference_text=reference_text,
        generated_text=generated_text,
        segmenter_fn=simple_segmenter,
        embedding_fn_las=lightweight_embedding_function,
        embedding_fn_gas=lightweight_embedding_function,
        return_all_metrics=True,
        return_internals=True
    )

    print("🎯 VCS Results:")
    print(f"VCS Score: {result['VCS']:.4f}")
    print(f"GAS Score: {result['GAS']:.4f}")
    print(f"LAS Score: {result['LAS']:.4f}")
    print(f"NAS Score: {result['NAS']:.4f}")
    print("✅ VCS is working correctly!")

    # Generate visualization (optional)
    if 'internals' in result:
        try:
            fig = vcs.visualize_metrics_summary(result['internals'])
            print("📊 Visualization generated successfully!")
            # fig.show()  # Uncomment to display
        except Exception as viz_error:
            print(f"⚠️ Visualization failed (this is normal in some environments): {viz_error}")

except Exception as e:
    print(f"❌ Error running VCS: {e}")
    print("💡 Make sure PyTorch is installed and try restarting your kernel")
```
<div align="center">
<table style="border: 2px solid #3b82f6; border-radius: 12px; background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**📝 Scale Note:** This example uses small text for illustration - VCS excels with long-form content!  
**⚠️ Import Tip:** Running from `vcs/` root? The example includes an automatic path fix.

</td>
</tr>
</table>
</div>

</div>

</details>
---

## ⚙️ Advanced Configuration

Once you're comfortable with the basics, you can fine-tune VCS behavior for your specific use case:

<table align="center" width="100%">
<tr>
<td width="33%" align="center">

### 🎯 **Core Parameters**

<div style="background: linear-gradient(145deg, #ede9fe, #ddd6fe); padding: 20px; border-radius: 12px; border: 2px solid #7c3aed;">

**🎛️ Essential Controls:**

| Parameter | Default | Purpose |
|:----------|:-------:|:--------|
| `chunk_size` | 1 | Segment grouping |
| `context_cutoff_value` | 0.6 | Similarity threshold |
| `context_window_control` | 4.0 | Context window size |
| `lct` | 0 | Narrative reordering tolerance |

</div>

</td>
<td width="33%" align="center">

### 📊 **Return All Metrics**

<div style="background: linear-gradient(145deg, #fef3c7, #fde68a); padding: 20px; border-radius: 12px; border: 2px solid #f59e0b;">

**🎛️ Control Parameter:**

| Parameter | Default | Purpose |
|:----------|:-------:|:--------|
| `return_all_metrics` | False | Return detailed metric breakdown |

**When set to `True`, you get:**
- Individual GAS, LAS, NAS scores
- LAS precision and recall components
- Distance-based and line-based NAS sub-metrics
- Complete metric breakdown for analysis

</div>

</td>
<td width="33%" align="center">

### 🔍 **Return Internals**

<div style="background: linear-gradient(145deg, #e0f2fe, #b3e5fc); padding: 20px; border-radius: 12px; border: 2px solid #0288d1;">

**🎛️ Control Parameter:**

| Parameter | Default | Purpose |
|:----------|:-------:|:--------|
| `return_internals` | False | Return internal computation data |

**When set to `True`, you get:**
- Similarity matrices and alignment paths
- Mapping windows and penalty calculations
- Text chunks and segmentation details
- All data needed for visualization

</div>

</td>
</tr>
</table>

<table align="center" width="100%" style="margin-top: 20px;">
<tr>
<td width="100%">
<div align="center">
<h3>🚀 Example Configuration</h3>
</div>
<div style="background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 20px; border-radius: 12px; border: 2px solid #059669; text-align: left;">

```python
# 🎯 Comprehensive configuration with all features enabled
result = compute_vcs_score(
    reference_text=ref_text,
    generated_text=gen_text,
    segmenter_fn=segmenter,
    embedding_fn_las=embedder,
    embedding_fn_gas=embedder,
    chunk_size=2,                  # Group segments
    context_cutoff_value=0.7,      # Higher threshold
    context_window_control=3.0,    # Tighter windows
    lct=1,                         # Some reordering OK
    return_all_metrics=True,       # Get detailed breakdown
    return_internals=True          # Get visualization data
)
```

</div>

</td>
</tr>
</table>

<div align="center">
<table style="border: 2px solid #3b82f6; border-radius: 12px; background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

**📚 For complete API documentation and visualization guides, visit our [API Documentation](https://multimodal-intelligence-lab.github.io/Video-Comprehension-Score/)**

</td>
</tr>
</table>
</div>
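Since the exact keys in the detailed breakdown can vary between releases, one safe way to inspect everything that `return_all_metrics=True` produces is to iterate the result dictionary instead of hard-coding key names:

```python
# 'VCS', 'GAS', 'LAS' and 'NAS' are the documented top-level scores;
# any other numeric entries are discovered generically here.
for name, value in sorted(result.items()):
    if isinstance(value, (int, float)):
        print(f"{name:<30} {value:.4f}")
```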
---

## ❓ Frequently Asked Questions

<details>
<summary><strong>🤔 How does VCS differ from BLEU/ROUGE?</strong></summary>
<p>Unlike BLEU and ROUGE, which rely on hard n-gram matching, VCS matches in latent space by comparing embeddings at both the global and local chunk levels. VCS also evaluates the chronological order of content chunks and combines these three dimensions into a final score that better captures semantic similarity and narrative structure.</p>
</details>

<details>
<summary><strong>⚡ What's the minimum text length for VCS?</strong></summary>
<p>VCS works with any text length, but it's optimized for longer texts (100+ words) where narrative structure is important. For very short texts, simpler metrics might be more appropriate.</p>
</details>

<details>
<summary><strong>📏 What's the maximum text length for VCS?</strong></summary>
<p>There is no upper limit on text length for VCS. The framework is designed to handle texts of any size, from short paragraphs to extensive documents, making it suitable for large-scale narrative evaluation tasks.</p>
</details>

<details>
<summary><strong>🧠 Which embedding models work best?</strong></summary>
<p>We recommend checking the <a href="https://huggingface.co/spaces/mteb/leaderboard">MTEB leaderboard</a> for the latest SOTA models. As of 2025, nv-embed-v2 and similar transformer-based models provide excellent results.</p>
</details>

<details>
<summary><strong>🎯 How do I control the granularity of comparison?</strong></summary>
<p>Use the <code>chunk_size</code> parameter to control the granularity of text comparison. A smaller chunk size provides more fine-grained analysis, while a larger chunk size offers broader, more general comparisons. The default value is 1 for maximum granularity.</p>
</details>

<details>
<summary><strong>⏱️ How do I control the strictness of chronological matching?</strong></summary>
<p>Use the <code>lct</code> (Local Chronology Tolerance) parameter to control chronological matching strictness. A higher LCT value means more lenient chronological ordering, allowing for greater flexibility in narrative sequence evaluation. The default value is 0 for strict chronological matching.</p>
</details>

<details>
<summary><strong>🔗 Can I use different embedding functions for GAS and LAS?</strong></summary>
<p>Yes, you can specify different embedding functions for Global Alignment Score (GAS) and Local Alignment Score (LAS) using the <code>embedding_fn_gas</code> and <code>embedding_fn_las</code> parameters respectively. This allows you to optimize each component with models best suited for their specific evaluation tasks.</p>
</details>
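As a concrete illustration of that last point, here is a hypothetical setup that pairs a broader semantic model for GAS with a faster sentence-level model for LAS. Both model choices are examples, not recommendations:

```python
from typing import List

import torch
from sentence_transformers import SentenceTransformer
import vcs

_gas_model = SentenceTransformer("all-mpnet-base-v2")  # broader semantic model for GAS
_las_model = SentenceTransformer("all-MiniLM-L6-v2")   # fast sentence-level model for LAS

def gas_embedder(texts: List[str]) -> torch.Tensor:
    return torch.tensor(_gas_model.encode(texts), dtype=torch.float32)

def las_embedder(texts: List[str]) -> torch.Tensor:
    return torch.tensor(_las_model.encode(texts), dtype=torch.float32)

def simple_segmenter(text: str) -> List[str]:
    return [s.strip() for s in text.split(".") if s.strip()]

result = vcs.compute_vcs_score(
    reference_text="The fox jumps over the dog. Then it starts to rain.",
    generated_text="A fox jumped over a dog. Rain began to fall afterwards.",
    segmenter_fn=simple_segmenter,
    embedding_fn_gas=gas_embedder,
    embedding_fn_las=las_embedder,
)
print(f"VCS: {result['VCS']:.4f}")
```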
---

## 🏗️ Project Structure

```
vcs/
├── 📁 src/vcs/                  # Main package source code
│   ├── 📄 __init__.py           # Package initialization
│   ├── 📄 scorer.py             # Main VCS API entry point
│   ├── 📄 _config.py            # Configuration settings
│   ├── 📁 _metrics/             # Core VCS metrics implementations
│   │   ├── 📁 _gas/             # Global Alignment Score
│   │   ├── 📁 _las/             # Local Alignment Score
│   │   ├── 📁 _nas/             # Narrative Alignment Score with components
│   │   │   └── 📁 _nas_components/  # Distance NAS, Line NAS, Regularize NAS
│   │   └── 📁 _vcs/             # Combined VCS computation
│   ├── 📁 _visualize_vcs/       # Comprehensive visualization suite
│   │   ├── 📁 _similarity_matrix/  # Similarity matrix visualizations
│   │   ├── 📁 _best_match/      # Best match analysis plots
│   │   ├── 📁 _distance_nas/    # Distance-based NAS visualizations
│   │   ├── 📁 _line_nas/        # Line-based NAS visualizations
│   │   ├── 📁 _mapping_windows/ # Context window visualizations
│   │   ├── 📁 _metrics_summary/ # Overall metrics summary plots
│   │   ├── 📁 _pdf_report/      # PDF report generation
│   │   ├── 📁 _text_chunks/     # Text chunk visualizations
│   │   ├── 📁 _window_regularizer/ # Window regularizer plots
│   │   ├── 📁 _las/             # LAS-specific visualizations
│   │   └── 📁 _config/          # Visualization configuration
│   ├── 📁 _segmenting/          # Text segmentation utilities
│   ├── 📁 _matching/            # Optimal text matching algorithms
│   ├── 📁 _mapping_windows/     # Context window management
│   └── 📁 _utils/               # Helper utilities
├── 📁 docs/                     # Documentation and interactive demos
│   ├── 📄 index.html            # Main documentation website
│   ├── 📁 pages/                # Documentation pages
│   │   ├── 📄 api.html          # API reference
│   │   ├── 📄 playground.html   # Interactive playground
│   │   └── 📄 example.html      # Usage examples
│   ├── 📁 widgets/              # Interactive visualization widgets
│   ├── 📁 sphinx/               # Sphinx documentation source
│   └── 📁 assets/               # Documentation assets (CSS, JS, videos)
├── 📁 .github/                  # GitHub configuration
│   ├── 📁 assets/               # README assets (images, gifs)
│   ├── 📁 scripts/              # GitHub automation scripts
│   └── 📁 workflows/            # CI/CD automation pipelines
│       ├── 📄 test.yml          # Continuous testing
│       ├── 📄 publish.yml       # Package publishing
│       └── 📄 deploy-docs.yml   # Documentation deployment
├── 📄 pyproject.toml            # Package configuration & dependencies
├── 📄 CONTRIBUTING.md           # Development contribution guide
├── 📄 DEPLOYMENT.md             # Release and deployment guide
├── 📄 CHANGELOG.md              # Version history and changes
├── 📄 MANIFEST.in               # Package manifest
├── 📄 tag_version.py            # Version tagging script
├── 📄 LICENSE                   # MIT license
└── 📄 README.md                 # This documentation
```
---

## 🚀 Development & Contributing

We welcome contributions to VCS Metrics! Whether you're fixing bugs, adding features, or improving documentation, here's how to get started.

### 🛠️ Quick Development Setup

<details>
<summary><b>🖱️ Click to expand development setup</b></summary>

<br>

```bash
# 1. Clone and setup
git clone https://github.com/hdubey-debug/vcs.git
cd vcs
python -m venv venv
source venv/bin/activate  # Windows: venv\Scripts\activate

# 2. Install development dependencies
pip install -e ".[dev]"

# 3. Create your feature branch
git checkout -b feature/your-feature-name

# 4. Make your changes
# Edit files in src/vcs/
# Add tests if needed
# Update docs if necessary

# 5. Run quality checks
black src/ && isort src/ && flake8 src/ && mypy src/

# 6. Commit with semantic format
git commit -m "minor: add new awesome feature"

# 7. Push and create PR
git push origin feature/your-feature-name
```

</details>

### 📋 Contribution Workflow

<table align="center" width="100%">
<tr>
<td width="50%" align="center">

### 🔄 **Development Process**

<div style="background: linear-gradient(145deg, #dbeafe, #bfdbfe); padding: 20px; border-radius: 12px; border: 2px solid #3b82f6;">

**1. Fork & Clone**  
**2. Create Feature Branch**  
**3. Make Changes**  
**4. Write Tests**  
**5. Submit PR**  
**6. Code Review**  
**7. Merge to Main**  

✅ **Automated testing on every PR**  
✅ **Fast feedback in ~2-3 minutes**

</div>

</td>
<td width="50%" align="center">

### 📦 **Release Process**

<div style="background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 20px; border-radius: 12px; border: 2px solid #059669;">

**1. Semantic Commit Messages**  
**2. GitHub Release Creation**  
**3. Automated Version Calculation**  
**4. Package Building**  
**5. TestPyPI Publishing**  
**6. Production Release**  

🚀 **Industry-standard CI/CD pipeline**  
⚡ **Zero manual version management**

</div>

</td>
</tr>
</table>
### 💡 Semantic Commit Format

We use semantic commits for automatic version bumping:

<div align="center">
<table style="border: 2px solid #7c3aed; border-radius: 12px; background: linear-gradient(145deg, #f3e8ff, #e9d5ff); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

| **Commit Type** | **Use For** | **Version Bump** |
|:---|:---:|:---|
| `minor: description` | New features | `1.0.4 → 1.1.0` |
| `major: description` | Breaking changes | `1.0.4 → 2.0.0` |
| `anything else` | Bug fixes (default) | `1.0.4 → 1.0.5` |

</td>
</tr>
</table>
</div>

### 🔧 Automated Testing & CI/CD

Our comprehensive CI/CD pipeline ensures code quality and reliability on every commit:

<div align="center">
<table style="border: 2px solid #059669; border-radius: 12px; background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 20px; margin: 20px 0;">
<tr>
<td align="center">

### 🚀 **What Gets Tested**

**✅ Matrix Testing** - Python 3.11 & 3.12 compatibility  
**✅ Package Validation** - Import testing & API availability  
**✅ Integration Testing** - Full getting-started example  
**✅ Code Quality** - Flake8 linting & complexity checks  
**✅ Build Testing** - Package build verification  

**🔄 Triggers:** Every push and pull request to `main`

</td>
</tr>
</table>
</div>

<div align="center">
<table style="border: 2px solid #059669; border-radius: 12px; background: linear-gradient(145deg, #ecfdf5, #d1fae5); padding: 15px; margin: 20px 0;">
<tr>
<td align="center">

[![Tests](https://img.shields.io/github/actions/workflow/status/hdubey-debug/vcs/test.yml?branch=main&label=Tests&logo=github-actions&logoColor=white&style=for-the-badge)](https://github.com/hdubey-debug/vcs/actions/workflows/test.yml)
[![Build](https://img.shields.io/github/actions/workflow/status/hdubey-debug/vcs/publish.yml?label=Build&logo=github-actions&logoColor=white&style=for-the-badge)](https://github.com/hdubey-debug/vcs/actions/workflows/publish.yml)

**✅ Automated testing ensures every change is production-ready**

</td>
</tr>
</table>
</div>

### 📖 Detailed Guides

For comprehensive information about contributing and development:

<div align="center">

[![Contributing Guide](https://img.shields.io/badge/📖_Full_Contributing_Guide-2563eb?style=for-the-badge&logo=gitbook&logoColor=white)](./CONTRIBUTING.md)
[![Deployment Guide](https://img.shields.io/badge/🚀_Deployment_Guide-059669?style=for-the-badge&logo=rocket&logoColor=white)](./DEPLOYMENT.md)

</div>

### 🤝 Getting Help

<table align="center" width="100%">
<tr>
<td width="33%" align="center">

**🐛 Bug Reports**  
[Create GitHub Issue](https://github.com/hdubey-debug/vcs/issues)

</td>
<td width="33%" align="center">

**💬 Questions**  
[GitHub Discussions](https://github.com/hdubey-debug/vcs/discussions)

</td>
<td width="33%" align="center">

**💡 Feature Requests**  
[Feature Request Issue](https://github.com/hdubey-debug/vcs/issues/new)

</td>
</tr>
</table>

---

## 📚 Citation

If you use VCS Metrics in your research, please cite:

```bibtex
@software{vcs_metrics_2024,
  title = {VCS Metrics: Video Comprehension Score for Text Similarity Evaluation},
  author = {Dubey, Harsh and Ali, Mukhtiar and Mishra, Sugam and Pack, Chulwoo},
  year = {2024},
  institution = {South Dakota State University},
  url = {https://github.com/hdubey-debug/vcs},
  note = {Python package for narrative similarity evaluation}
}
```
---

## 🤖 CLIP-CC Ecosystem Integration

VCS is designed to work seamlessly with the [CLIP-CC Dataset](https://github.com/hdubey-debug/CLIP-CC) for comprehensive video understanding evaluation.

<div align="center">
<table style="border: 2px solid #7c3aed; border-radius: 12px; background: linear-gradient(145deg, #f3e8ff, #e9d5ff); padding: 20px; margin: 20px 0;">
<tr>
<td align="center">

[![CLIP-CC Dataset](https://img.shields.io/badge/🤖_Companion_Dataset-CLIP--CC-9333ea?style=for-the-badge&logo=python&logoColor=white)](https://github.com/hdubey-debug/CLIP-CC)

**🔄 Perfect Integration: VCS + CLIP-CC**
- 🎥 **CLIP-CC provides the data** → Rich video dataset with human summaries
- 🔍 **VCS provides the evaluation** → Advanced narrative comprehension metrics
- 🏆 **Together: Complete research pipeline** → From data loading to evaluation

</td>
</tr>
</table>
</div>

---

## 🏆 Meet Our Contributors

<div align="center">

### 🌟 **The VCS Team - Building the Future of Text Similarity**

</div>

<table>
<tr>
<td align="center">

<a href="https://github.com/hdubey-debug">
  <img src="https://github.com/hdubey-debug.png" width="100" height="100" style="border-radius: 50%;"/>
</a>

**Harsh Dubey**  
*Lead Developer & Research Scientist*  
*South Dakota State University*

| Commits | Lines | Files |
|:---:|:---:|:---:|
| **2** | **49K** | **171** |

**📋 Key Work:**
• VCS Algorithm Architecture  
• Visualization Engine  
• LAS, GAS, and NAS Metrics  

[![GitHub](https://img.shields.io/badge/-GitHub-14b8a6?style=flat&logo=github)](https://github.com/hdubey-debug)

</td>
</tr>
</table>

<div align="center">

### 🤖 **Automated Contributors**

| **Contributor** | **Role** | **Contributions** | **Badge** |
|:---:|:---:|:---:|:---:|
| 🤖 **GitHub Actions** | CI/CD Automation | Clean history setup | [![Bot](https://img.shields.io/badge/Bot-Automated_Testing-6c5ce7?style=flat&logo=github-actions&logoColor=white)](#) |

### 📊 **Contribution Analytics**

[![Contributors](https://img.shields.io/github/contributors/hdubey-debug/vcs?style=for-the-badge&color=14b8a6&labelColor=0f172a)](https://github.com/hdubey-debug/vcs/graphs/contributors)
[![Commit Activity](https://img.shields.io/github/commit-activity/m/hdubey-debug/vcs?style=for-the-badge&color=ff6b6b&labelColor=0f172a)](https://github.com/hdubey-debug/vcs/pulse)
[![Last Commit](https://img.shields.io/github/last-commit/hdubey-debug/vcs?style=for-the-badge&color=4ecdc4&labelColor=0f172a)](https://github.com/hdubey-debug/vcs/commits)
[![Code Frequency](https://img.shields.io/github/languages/count/hdubey-debug/vcs?style=for-the-badge&color=f9ca24&labelColor=0f172a)](https://github.com/hdubey-debug/vcs)

</div>

---

## 📄 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

---

### 🌟 **Made with ❤️ by the VCS Team**

**Authors**: Harsh Dubey, Mukhtiar Ali, Sugam Mishra, and Chulwoo Pack  
**Institution**: South Dakota State University  
**Year**: 2024

[⭐ Star this repo](https://github.com/hdubey-debug/vcs) • [🐛 Report Bug](https://github.com/hdubey-debug/vcs/issues) • [💡 Request Feature](https://github.com/hdubey-debug/vcs/issues) • [💬 Community Q&A](https://github.com/hdubey-debug/vcs/discussions)

</div>
    "bugtrack_url": null,
    "license": "MIT",
    "summary": "Video Comprehension Score (VCS) - A comprehensive metric for evaluating narrative similarity between reference and generated text",
    "version": "1.0.0",
    "project_urls": {
        "Bug Tracker": "https://github.com/Multimodal-Intelligence-Lab/Video-Comprehension-Score/issues",
        "Changelog": "https://github.com/Multimodal-Intelligence-Lab/Video-Comprehension-Score/blob/main/CHANGELOG.md",
        "Documentation": "https://multimodal-intelligence-lab.github.io/Video-Comprehension-Score/",
        "Homepage": "https://github.com/Multimodal-Intelligence-Lab/Video-Comprehension-Score",
        "Repository": "https://github.com/Multimodal-Intelligence-Lab/Video-Comprehension-Score.git"
    },
    "split_keywords": [
        "text-similarity",
        " narrative-analysis",
        " nlp",
        " video-comprehension",
        " text-evaluation",
        " semantic-similarity",
        " alignment-metrics"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "1e3e2b36e491b33edcedf25be60c4fe384ea18d52b4355d4b0b9f82b52b81813",
                "md5": "690ffca23da2f95ec719ca39a2d672c4",
                "sha256": "bac13a30e56e20e499938619de08b566c85a475f581903fb96d93f342f59a31c"
            },
            "downloads": -1,
            "filename": "video_comprehension_score-1.0.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "690ffca23da2f95ec719ca39a2d672c4",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.8",
            "size": 103585,
            "upload_time": "2025-07-25T04:00:53",
            "upload_time_iso_8601": "2025-07-25T04:00:53.628905Z",
            "url": "https://files.pythonhosted.org/packages/1e/3e/2b36e491b33edcedf25be60c4fe384ea18d52b4355d4b0b9f82b52b81813/video_comprehension_score-1.0.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "423ce44e8b66aef053b5e36b86dee3bd1574a39b17ab57915dfc9757ff22a1e5",
                "md5": "4d37ac11db5c4d399904fadbd5ac5580",
                "sha256": "cdb6bef762d034418e2fa7a9a27e889a6a34716c3a49a98eccffd4c1d18cf7aa"
            },
            "downloads": -1,
            "filename": "video_comprehension_score-1.0.0.tar.gz",
            "has_sig": false,
            "md5_digest": "4d37ac11db5c4d399904fadbd5ac5580",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.8",
            "size": 94045,
            "upload_time": "2025-07-25T04:00:55",
            "upload_time_iso_8601": "2025-07-25T04:00:55.400986Z",
            "url": "https://files.pythonhosted.org/packages/42/3c/e44e8b66aef053b5e36b86dee3bd1574a39b17ab57915dfc9757ff22a1e5/video_comprehension_score-1.0.0.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-25 04:00:55",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Multimodal-Intelligence-Lab",
    "github_project": "Video-Comprehension-Score",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "video-comprehension-score"
}
        