dexter-cqa

- Name: dexter-cqa
- Version: 1.0.9
- Homepage: https://github.com/VenkteshV/BCQA
- Summary: A Benchmark for Complex Heterogeneous Question Answering
- Author: Venktesh V, Deepali Prabhu
- Requires Python: >=3.8
- License: Apache License 2.0
- Upload time: 2024-06-18 17:31:06
- Keywords: information retrieval, transformer networks, complex question answering, BERT, PyTorch, question answering, IR, NLP, deep learning

<p align="center">
  <img src="dexter.png" />
</p>

<p align="center">
  <img src="bcqa_neurips.001.jpeg" />
</p>


# DEXTER (Benchmarking Complex QA)

Answering complex questions is a difficult task that requires knowledge retrieval.
To address this, we propose an easy-to-use, extensible benchmark composed of diverse complex QA tasks, together with a toolkit for evaluating the zero-shot retrieval capabilities of state-of-the-art dense and sparse retrieval models in an open-domain setting. Additionally, since context-based reasoning is key to complex QA, we extend the toolkit with various LLM engines. Together, these components allow users to evaluate every stage of the Retrieval Augmented Generation (RAG) pipeline.

For the retrieval components we draw inspiration from BEIR (https://github.com/beir-cellar/beir) and reuse parts of its implementation, modified to suit our setup. We thank the authors for open-sourcing their code.

# Colab notebook
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1UOZ_JuDcWGKvwcPs4ygCEoGCUUgC1PUs?usp=sharing)

# Setup (from source)
1) Clone the repo <br />
2) Create a conda environment: `conda create -n bcqa` <br />
3) Install in editable mode: `pip install -e .` <br />

# From pip
```
pip install dexter-cqa
```
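To quickly verify the installation, you can import the classes used in the evaluation example later in this README:
```python
# Sanity check: these imports are the same ones used in the evaluation example below
from dexter.config.constants import Split
from dexter.data.loaders.RetrieverDataset import RetrieverDataset

print("dexter-cqa is installed; available splits include", Split.DEV)
```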
# Datasets

|  Dataset Name  |  Dataset alias |                  Homepage                 |                Characteristics               | #Questions | Corpus Size |
|:--------------:|:--------------:|:-----------------------------------------:|:--------------------------------------------:|:----------:|:-----------:|
| MusiqueQA      | musiqueqa (2-hop only)      | [Link](https://github.com/StonyBrookNLP/musique)  | Connected multi-hop reasoning   | 16.8k       | 570k       |
| WikiMultiHopQA | wikimultihopqa | [Link](https://github.com/Alab-NII/2wikimultihop) | Comparative multi-hop reasoning              | 190k       | 570k        |
| StrategyQA     | strategyqa     | [Link](https://allenai.org/data/strategyqa)       | Multi-hop reasoning, Implicit Reasoning      | 2.7k       | 26.6M       |
| AmbigQA        | ambignq        | [Link](https://nlp.cs.washington.edu/ambigqa/)    | Ambiguous Questions                          | 12k        | 24.3M       |
| OTT-QA         | ottqa          | [Link](https://ott-qa.github.io/)                 | Table and Text multi-hop reasoning           | 2.1k       | 6.5M        |
| TAT-QA         | tatqa          | [Link](https://nextplusplus.github.io/TAT-QA/)    | Financial Table and Text multi-hop reasoning | 2.9k       | 7k          |
| FinQA          | finqa          | [Link](https://github.com/czyssrs/FinQA)          | Financial Table and Text multi-hop reasoning | 8k         | 24.8k       |

## Important!!
All datasets can be found in one place at [Datasets](https://gitlab.tudelft.nl/venkteshviswan/bcqa_data)
# Retrievers
|    Name    | Paradigm | More |
|:----------:|:--------:|:----:|
| BM25       | Lexical  | [Link](https://www.staff.city.ac.uk/~sbrp622/papers/foundations_bm25_review.pdf) |
| SPLADE     | Sparse   | [Link](https://github.com/naver/splade) |
| DPR        | Dense    | [Link](https://github.com/facebookresearch/DPR) |
| ANCE       | Dense    | [Link](https://github.com/microsoft/ANCE) |
| tas-b      | Dense    | [Link](https://github.com/sebastian-hofstaetter/tas-balanced-dense-retrieval) |
| MPNet      | Dense    | [Link](https://github.com/microsoft/MPNet) |
| Contriever | Dense    | [Link](https://github.com/facebookresearch/contriever) |
| ColBERTv2  | Late-Interaction    | [Link](https://github.com/stanford-futuredata/ColBERT) |

# Retrieving over large corpus collections
Since some of the datasets have corpus collections with millions of documents, we also support chunking the corpus during retrieval. To avoid holding all document scores in memory (inspired by https://github.com/beir-cellar/beir/pull/117), we compute scores chunk by chunk and keep only the top-k documents and their scores in a heap using heapq, as sketched below.
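The following is a minimal, self-contained sketch of this idea (not the toolkit's actual implementation); the document IDs, scores, and chunk contents are illustrative:
```python
import heapq
from typing import Dict, Iterable, List, Tuple

def top_k_over_chunks(score_chunks: Iterable[List[Tuple[str, float]]], k: int) -> Dict[str, float]:
    """Keep only the k best (doc_id, score) pairs while iterating over score chunks."""
    heap: List[Tuple[float, str]] = []  # min-heap keyed by score
    for chunk in score_chunks:
        for doc_id, score in chunk:
            if len(heap) < k:
                heapq.heappush(heap, (score, doc_id))
            elif score > heap[0][0]:
                # Replace the current worst retained document
                heapq.heapreplace(heap, (score, doc_id))
    return {doc_id: score for score, doc_id in sorted(heap, reverse=True)}

# Illustrative usage: two score "chunks" for a single query
chunks = [
    [("d1", 0.2), ("d2", 0.9), ("d3", 0.5)],
    [("d4", 0.7), ("d5", 0.1)],
]
print(top_k_over_chunks(chunks, k=3))  # {'d2': 0.9, 'd4': 0.7, 'd3': 0.5}
```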

# LLM Engines

The toolkit includes an LLM engine orchestrator (see the `llms` module under Project Structure) that supports inference with Llama2, Mistral, OpenAI models and Flan-T5, with more models to come.

# Project Structure
- data
    - datastructures: Basic data classes for questions, answers, and other objects needed in the pipeline.
    - dataloaders: Loaders that take raw json/zip data and convert it to the format needed in the pipeline.
- retriever: Retrievers that consume the data loaders and perform retrieval to produce results.
    - dense: Dense retrievers such as ColBERTv2, ANCE, Contriever, MPNet, DPR and tas-b.
    - lexical: Lexical retrievers such as BM25.
    - sparse: Sparse retrievers such as SPLADE.
- llms: LLM engine orchestrator and inference implementations for Llama2, Mistral, OpenAI models and Flan-T5 (more models to come soon).
- config: Configuration files with constants and initialization.
- tests: Test cases for the above components.
- utils: Pipeline utilities such as retrieval accuracy calculation and answer matching.

# Running Evaluation
Below is an example script demonstrating how to load a dataset from our benchmark (ambignq here), feed it to one of our retrievers (Contriever here), and evaluate the retrieval quality against the relevance labels provided by the dataset.
```python
from dexter.config.constants import Split
from dexter.data.loaders.RetrieverDataset import RetrieverDataset
# The two import paths below are assumed from the package layout; adjust them
# if they differ in your installed version.
from dexter.data.datastructures.hyperparameters.dense import DenseHyperParams
from dexter.retriever.dense.Contriever import Contriever
from dexter.utils.metrics.SimilarityMatch import CosineSimilarity
from dexter.utils.metrics.retrieval.RetrievalMetrics import RetrievalMetrics

if __name__ == "__main__":
    # Ensure that the paths to the raw data files are set in config.ini under [Data-Path]:
    # ambignq = '<path to the data file>'
    # ambignq-corpus = '<path to the corpus file>'

    # The split can be one of Split.DEV, Split.TEST or Split.TRAIN.
    # With tokenizer=None only the raw data is loaded into our standard data classes;
    # if a tokenizer is set, the data is also tokenized and stored in the loader.
    loader = RetrieverDataset("ambignq", "ambignq-corpus",
                              "config.ini", Split.DEV, tokenizer=None)

    # Initialize the retriever configuration
    config_instance = DenseHyperParams(query_encoder_path="facebook/contriever",
                                       document_encoder_path="facebook/contriever",
                                       batch_size=32, show_progress_bar=True)

    # The data loader provides the queries, the relevance labels (qrels) and the corpus.
    queries, qrels, corpus = loader.qrels()

    # Perform retrieval
    contrvr_search = Contriever(config_instance)
    similarity_measure = CosineSimilarity()
    response = contrvr_search.retrieve(corpus, queries, 100, similarity_measure,
                                       chunk=True, chunksize=400000)

    # Evaluate retrieval metrics
    metrics = RetrievalMetrics(k_values=[1, 10, 100])
    print(metrics.evaluate_retrieval(qrels=qrels, results=response))
```
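Since the retrieval metrics follow BEIR-style evaluation (see the BEIR note above), the final call is expected to report standard ranking metrics such as NDCG@k, MAP@k, Recall@k and Precision@k for the configured `k_values`; the exact metric names may vary across versions.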
# Running Evaluation for Results in Paper
All evaluation scripts, organized by dataset, can be found in the evaluation folder.
## Example: TAT-QA (when building from source)
```
curl https://gitlab.tudelft.nl/venkteshviswan/bcqa_data/-/raw/main/tatqa.zip -o tatqa.zip
```
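After downloading, extract the archive; the target directory below is only an example and should match the paths you set in evaluation/config.ini:
```
unzip tatqa.zip -d data/tatqa
```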
In evaluation/config.ini, configure the corresponding paths to the downloaded files, and add the project root directory to the PYTHONPATH variable:
```
export PYTHONPATH=/path

export OPENAI_KEY=<your openai key>

export huggingface_token=<your huggingface token to access llama2>
```
## To reproduce DPR results, run
```
python3 evaluation/tatqa/run_dpr_inference.py
```

## To reproduce ColBERT results, run
```
python3 evaluation/tatqa/test_tctcolbert_inference.py
```
Other retrievers can be run similarly using the other scripts in the folder.

# To Reproduce LLM Results
```
export OPENAI_KEY="<your key here>"
```
To run an OpenAI model using ColBERT-retrieved documents, run:
```
python3 evaluation/tatqa/llms/run_rag_few_shot_cot.py
```
The above experiment produces the FEW-SHOT-COT numbers for gpt-3.5-turbo, which can be checked against Table 3 in the paper.
# Building your own custom dataset

You can quickly build your own dataset in three steps:

### 1) Loading the question, answer and evidence records

The base data loader by default takes a JSON file of the format:

```
[{"id": "..", "question": "..", "answer": ".."}]
```
Each of the train, test, and validation splits should be in its own JSON file under your data directory (see the example after this list):
- /dir_path/train.json
- /dir_path/test.json
- /dir_path/validation.json
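
For instance, a minimal /dir_path/train.json could look like this (IDs and values are illustrative):
```
[
  {"id": "q1", "question": "Who founded company X?", "answer": "..."},
  {"id": "q2", "question": "In which year was company X founded?", "answer": "..."}
]
```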
  
If you want to create a custom loader:
within the directory data/dataloaders, create your dataloader by extending BaseDataLoader:
```python

class MyDataLoader(BaseDataLoader):
    def load_raw_dataset(self, split):
        dataset = self.load_json(split)

        # Transform the elements in the JSON into
        # List[Sample(idx: str, question: Question, answer: Answer, evidence: Evidence)].
        # If needed, you can also extend the Question, Answer and Evidence dataclasses
        # to form your own types.
        records = ...  # your transformation code here
        self.raw_data = records

    def load_tokenized(self):
        """If required, override this method to implement custom tokenization for your dataset."""

```
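As a rough illustration (the exact Sample, Question and Answer constructors may differ in your installed version), load_raw_dataset could be filled in like this:
```python
# Illustrative only: field names follow the JSON format shown above, and the
# Sample/Question/Answer constructors are assumed from the type hint; adapt
# them to the actual dataclasses shipped with dexter.
def load_raw_dataset(self, split):
    dataset = self.load_json(split)
    records = []
    for item in dataset:
        records.append(
            Sample(
                idx=item["id"],
                question=Question(item["question"]),
                answer=Answer(item["answer"]),
                evidence=None,  # set this if your data provides evidence
            )
        )
    self.raw_data = records
```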

Under config.ini:
```
my-dataset = 'dir_path'
```
### 2) Loading the corpus
To load your own corpus, provide a JSON file in the standard format:
```
{"idx": {"text": "...", "title": "..", "type": "table/text"}}
```
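For example, a two-document corpus file might look like this (IDs, titles and text are illustrative):
```
{
  "0": {"text": "Passage text about the company ...", "title": "Company overview", "type": "text"},
  "1": {"text": "Year | Revenue | Profit ...", "title": "Financial results table", "type": "table"}
}
```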

Under config.ini add:
```
my-dataset-corpus = '< path to the json file of above format >'
```
### 3) Add your dataset alias to constants

Within config.constants:
```python
class Dataset:
    AMBIGQA = "ambignq"
    WIKIMULTIHOPQA = "wikimultihopqa"
    ...
    MY_DATASET = "my-dataset"
```

and within data/loader/DataLoaderFactory.py:

```python
   def create_dataloader(
...
        if Dataset.AMBIGQA in dataloader_name:
            loader = AmbigQADataLoader
        elif Dataset.FINQA in dataloader_name:
            loader = FinQADataLoader
        ..
        elif Dataset.MY_DATASET in dataloader_name:
            loader = MyDataLoader
    
```


Your dataset is now ready to be loaded and used.

a) You can load the dataloader as:
```python
loader_factory = DataLoaderFactory()
loader = loader_factory.create_dataloader("my-dataset", config_path="config.ini", split=Split.DEV, batch_size=10)
```

b) You can load the corpus as:
```python
loader = PassageDataLoader(dataset="my-dataset-corpus",subset_ids=None,config_path="config.ini",tokenizer=None)
```

c) You can load RetrieverDataset as:
```python
loader = RetrieverDataset("my-dataset","my-dataset-corpus",
                               "config.ini", Split.DEV,tokenizer=None)
```


# Building your own retrievers

To build your own retriever, extend the base class in bcqa/retriever/BaseRetriever.py and use it in your evaluation script, for example as sketched below.
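A minimal sketch of what such a retriever might look like (not the library's actual interface); the import path and the `retrieve` signature are assumptions based on the evaluation example above, so adapt them to the real BaseRetriever interface:
```python
# Hypothetical sketch: the BaseRetriever import path and the retrieve() signature
# are inferred from the usage earlier in this README; adjust them to the real interface.
from dexter.retriever.BaseRetriever import BaseRetriever


class MyRetriever(BaseRetriever):
    def __init__(self, config):
        self.config = config  # e.g. encoder paths, batch size

    def retrieve(self, corpus, queries, top_k, similarity_measure, chunk=False, chunksize=None):
        # Encode the queries and corpus, score pairs with similarity_measure,
        # and return a mapping such as {query_id: {doc_id: score}} truncated to top_k.
        results = {}
        # ... your encoding and scoring logic here ...
        return results
```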

# Citing & Authors
Thanks to the following collaborators: <br />
<b> Venktesh Viswanathan </b> <br />
<b> Deepali Prabhu </b> <br />
<b> Avishek Anand </b> <br />







             

            
