Name | atlas-rag
Version | 0.0.4.post1
home_page | None
Summary | AutoSchemaKG, a framework for fully autonomous knowledge graph construction that eliminates the need for predefined schemas.
upload_time | 2025-08-03 13:57:55
maintainer | None
docs_url | None
author | None
requires_python | >=3.9
license | None
keywords | retrieval-augmented-generation, knowledge-graph, nlp
requirements | No requirements were recorded.
# AutoSchemaKG: A Knowledge Graph Construction Framework with Schema Generation and Knowledge Graph Completion
This repository contains the implementation of AutoSchemaKG, a novel framework for automatic knowledge graph construction that combines triple extraction with schema generation via conceptualization. The framework addresses the challenges of constructing high-quality knowledge graphs from unstructured text.
This project is accompanied by the following paper and data:
* **Paper:** [Read the paper](https://arxiv.org/abs/2505.23628)
* **Full Data:** [Download the dataset](https://hkustconnect-my.sharepoint.com/:f:/g/personal/jbai_connect_ust_hk/EgJCqoU91KpAlSSOi6dzgccB6SCL4YBpsCyEtGiRBV4WNg) (OneDrive)
* **Neo4j CSV Dumps:** [Download the dataset](https://huggingface.co/datasets/AlexFanWei/AutoSchemaKG) (Hugging Face dataset)
### Update
- (05/07) Updated with batch generation and refactored the codebase; added PDF-to-Markdown-to-JSON instructions. [See PDF support](#pdf-support)
- (24/06) Added ToG and Chinese KG construction (see example_scripts for KG construction in different languages); separated the NV-embed-v2 transformers dependency.
## AutoSchemaKG Overview
AutoSchemaKG introduces a two-stage approach:
1. **Knowledge Graph Triple Extraction**: Extracts triples comprising entities and events from text using LLMs
2. **Schema Induction**: Automatically generates a schema for the knowledge graph via conceptualization, creating semantic bridges between seemingly disparate information to enable zero-shot inference across domains (illustrated below)
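As a rough, purely illustrative sketch (the names and shapes here are hypothetical, not atlas-rag's actual output format), stage 1 yields triples over concrete entities and events, and stage 2 lifts those nodes to shared concepts:

```python
# Hypothetical illustration of the two stages; not the library's real schema.
stage1_triples = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),            # entity-entity triple
    ("Marie Curie won the Nobel Prize", "occurred in", "1903"),  # event triple
]
stage2_concepts = {  # conceptualization: specific nodes -> general concepts
    "Marie Curie": ["physicist", "scientist", "person"],
    "Nobel Prize in Physics": ["award", "honor"],
}
# Shared concepts (e.g. "physicist") bridge otherwise disconnected entities,
# which is what enables zero-shot inference across domains.
```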
The framework achieves state-of-the-art performance on multiple benchmarks and demonstrates strong generalization capabilities across different domains.
## ATLAS Knowledge Graphs
ATLAS (Automated Triple Linking And Schema induction) is a family of knowledge graphs created through the AutoSchemaKG framework, which enables fully autonomous knowledge graph construction without predefined schemas. Here's a summary of what ATLAS is and how it works:
### Key Features of ATLAS Knowledge Graphs
- **Scale**: Consists of 900+ million nodes connected by 5.9 billion edges
- **Autonomous Construction**: Built without predefined schemas or manual intervention
- **Three Variants**: ATLAS-Wiki (from Wikipedia), ATLAS-Pes2o (from academic papers), and ATLAS-CC (from Common Crawl)
## Project Structure
```
AutoSchemaKG/
├── atlas_rag/ # Main package directory
│ ├── kg_construction/ # Knowledge graph construction modules
│ ├── llm_generator/ # Components for large language model generation
│ ├── retriever/ # Retrieval components for RAG
│ ├── utils/ # Utility functions for various tasks
│ └── vectorstore/ # Components for managing vector storage and embeddings
├── example_data/ # Sample data for testing and examples
├── example_scripts/ # Example scripts for usage demonstrations
├── log/ # Log files for tracking processes
├── neo4j_api_host/ # API hosting for Neo4j
├── neo4j_scripts/ # Scripts for managing Neo4j databases
├── tests/ # Unit tests for the project
├── .gitignore # Git ignore file
├── README.md # Main documentation for the project
├── atlas_billion_kg_usage.ipynb # Example for hosting and RAG with ATLAS
├── atlas_full_pipeline.ipynb # Full pipeline for constructing knowledge graphs
└── atlas_multihopqa.ipynb # Example for benchmarking multi-hop QA datasets
```
The project is organized into several key components:
- `atlas_rag/`: Core package containing the main functionality
- Evaluation directories for different aspects of the system
- Database-related scripts and API hosting
- Example notebooks demonstrating usage
- Import and distribution directories for data management
## Install atlas-rag through pip
```bash
pip install atlas-rag
```
To support NV-embed-v2, install the `nvembed` extra, which constrains the transformers package to >=4.42.4,<=4.47.1:
```bash
pip install 'atlas-rag[nvembed]'
```
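As a quick sanity check that the install succeeded, the package should import cleanly:

```bash
python -c "import atlas_rag; print('atlas-rag imported OK')"
```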
### KG Construction with ATLAS
```python
from atlas_rag.kg_construction.triple_extraction import KnowledgeGraphExtractor
from atlas_rag.kg_construction.triple_config import ProcessingConfig
from atlas_rag.llm_generator import LLMGenerator
from openai import OpenAI
from transformers import pipeline

# Option 1: use an OpenAI-compatible API endpoint
# client = OpenAI(api_key='<your_api_key>', base_url="<your_api_base_url>")
# model_name = "meta-llama/llama-3.1-8b-instruct"

# Option 2: run the model locally with a transformers pipeline
model_name = "meta-llama/Llama-3.1-8B-Instruct"
client = pipeline(
    "text-generation",
    model=model_name,
    device_map="auto",
)

keyword = 'Dulce'
filename_pattern = keyword  # pattern used to select input files (assumed here; set to match your data files)
output_directory = f'import/{keyword}'

triple_generator = LLMGenerator(client, model_name=model_name)
kg_extraction_config = ProcessingConfig(
    model_path=model_name,
    data_directory="example_data",
    filename_pattern=filename_pattern,
    batch_size_triple=3,    # batch size for triple extraction
    batch_size_concept=16,  # batch size for concept generation
    output_directory=output_directory,
    max_new_tokens=2048,
    max_workers=3,
    remove_doc_spaces=True,  # remove duplicated spaces in the document text
)
kg_extractor = KnowledgeGraphExtractor(model=triple_generator, config=kg_extraction_config)

# Construct the entity & event graph (involves LLM generation)
kg_extractor.run_extraction()
# Convert triple JSON files to CSV
kg_extractor.convert_json_to_csv()
# Generate concepts (involves LLM generation)
kg_extractor.generate_concept_csv(batch_size=64)
# Create the concept CSV
kg_extractor.create_concept_csv()
# Convert CSV to GraphML for NetworkX
kg_extractor.convert_to_graphml()
```
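Once `convert_to_graphml()` finishes, the graph can be inspected with NetworkX. The path below is a guess at where the output lands; adjust it to the actual file written under your `output_directory`:

```python
import networkx as nx

# Hypothetical output path; check your output_directory for the real filename.
G = nx.read_graphml("import/Dulce/kg_graphml/graph.graphml")
print(f"nodes: {G.number_of_nodes()}, edges: {G.number_of_edges()}")
```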
## Large Knowledge Graph Hosting and Retrieval Augmented Generation
This repository provides support for hosting and implementing Retrieval Augmented Generation (RAG) over our constructed knowledge graphs: `ATLAS-wiki`, `ATLAS-pes2o`, and `ATLAS-cc`. For detailed instructions on hosting and running these knowledge graphs, please refer to the `atlas_billion_kg_usage.ipynb` notebook.
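For a locally hosted graph, a minimal connectivity check with the official Neo4j Python driver might look like this (the URI and credentials are placeholders for your own deployment; the notebook covers the full setup):

```python
from neo4j import GraphDatabase

# Placeholder URI and credentials for your own Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "<password>"))
with driver.session() as session:
    record = session.run("MATCH (n) RETURN count(n) AS n").single()
    print(f"nodes in graph: {record['n']}")
driver.close()
```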
## Building New Knowledge Graphs and Implementing RAG
The `atlas_full_pipeline.ipynb` notebook demonstrates how to:
- Build new knowledge graphs using AutoSchemaKG
- Implement Retrieval Augmented Generation on your custom knowledge graphs
## Multi-hop Question Answering Evaluation
To replicate our multi-hop question answering evaluation results on benchmark datasets:
- `MuSiQue`
- `HotpotQA`
- `2WikiMultiHopQA`
Please follow the instructions in the `atlas_multihopqa.ipynb` notebook, which contains all necessary code and configuration details.
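For reference, these benchmarks are conventionally scored with exact match (EM) and token-level F1. A standard scorer is sketched below; the notebook's answer normalization may differ in details:

```python
import re
import string
from collections import Counter

def normalize(s: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    s = "".join(ch for ch in s.lower() if ch not in string.punctuation)
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def f1_score(pred: str, gold: str) -> float:
    p_toks, g_toks = normalize(pred).split(), normalize(gold).split()
    overlap = sum((Counter(p_toks) & Counter(g_toks)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(p_toks), overlap / len(g_toks)
    return 2 * precision * recall / (precision + recall)

print(f1_score("the Eiffel Tower", "Eiffel Tower in Paris"))  # ~0.667
```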
## General Evaluation
The framework includes comprehensive evaluation metrics across three dimensions:
- Knowledge Graph Quality (`EvaluateKGC`)
- Factual Consistency on FELM (`EvaluateFactuality`)
- General Performance on MMLU (`EValuateGeneralTask`)
Detailed evaluation procedures can be found in the respective evaluation directories.
## PDF Support
Creator: [swgj](https://github.com/Swgj)
Due to marker-pdf's version requirements, we suggest creating a new conda environment for the PDF-to-Markdown transformation.
Clone the PDF transform repo and set up the environment:
```bash
git clone https://github.com/Swgj/pdf_process
cd pdf_process
conda create --name pdf-marker pip python=3.10
conda activate pdf-marker
pip install 'marker-pdf[full]'
pip install google-genai
```
Modify the `config.yaml` file:
```yaml
processing_config:
  llm_service: "marker.services.azure_openai.AzureOpenAIService" # use the Azure OpenAI service; comment this line out to use the default Gemini service
  other_config:
    use_llm: true
    extract_images: false # false: skip image extraction and let the LLM describe images in text; true: extract images without generating descriptions
    page_range: null # null processes all pages; otherwise use a List[int] such as [9, 10, 11, 12]
    max_concurrency: 2 # maximum number of concurrent processes
    # Azure OpenAI API configuration
    azure_endpoint: <your endpoint>
    azure_api_version: "2024-10-21"
    deployment_name: "gpt-4o"

# API configuration
api:
  # api_key_env: "GEMINI_API_KEY" # uncomment this line to use a Gemini API key
  api_key_env: "AZURE_API_KEY"

# Input path configuration - can be a file or a folder path
input:
  # Supports relative and absolute paths
  path: "test_data" # can be a single file path or a folder path
  # path: "data/Apple_Environmental_Progress_Report_2024.pdf" # example of a single file

  # If the path is a folder, you can set file-filtering conditions
  file_filters:
    extensions: [".pdf"] # only process PDF files
    recursive: true # whether to recursively process subfolders
    exclude_patterns: # exclude files matching these patterns
      - "*temp*"
      - "*~*"

# Output configuration
output:
  base_dir: "md_output" # output directory
  create_subdirs: true # whether to create a subdirectory for each input file
  format: "md" # output format (md, txt)

# Logging configuration
logging:
  level: "INFO" # DEBUG, INFO, WARNING, ERROR
  show_progress: true
```
Run the conversion:
```bash
bash run.sh
```
Cheers! You now have a Markdown version of your PDF file. Change back to your parent directory and run the command below to obtain a JSON file for further Atlas-RAG KG construction (point `--input` at the directory containing your generated Markdown, e.g. the `md_output` directory from the previous step):
```bash
python -m atlas_rag.kg_construction.utils.md_processing.markdown_to_json --input example_data/md_data --output example_data
```
## Citation
If you use this code in your research, please cite our paper:
```
@misc{bai2025autoschemakgautonomousknowledgegraph,
title={AutoSchemaKG: Autonomous Knowledge Graph Construction through Dynamic Schema Induction from Web-Scale Corpora},
author={Jiaxin Bai and Wei Fan and Qi Hu and Qing Zong and Chunyang Li and Hong Ting Tsang and Hongyu Luo and Yauwai Yim and Haoyu Huang and Xiao Zhou and Feng Qin and Tianshi Zheng and Xi Peng and Xin Yao and Huiwen Yang and Leijie Wu and Yi Ji and Gong Zhang and Renhai Chen and Yangqiu Song},
year={2025},
eprint={2505.23628},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.23628},
}
```
## Contact
Jiaxin Bai: jbai@connect.ust.hk
Dennis Hong Ting Tsang: httsangaj@connect.ust.hk
Haoyu Huang: haoyuhuang@link.cuhk.edu.hk
Raw data
{
    "_id": null,
    "home_page": null,
    "name": "atlas-rag",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.9",
    "maintainer_email": null,
    "keywords": "retrieval-augmented-generation, knowledge-graph, nlp",
    "author": null,
    "author_email": "Jiaxin Bai <jbai@connect.ust.hk>, Hong Ting TSANG <httsangaj@connect.ust.hk>, Haoyu Huang <haoyuhuang@link.cuhk.edu.hk>",
    "download_url": "https://files.pythonhosted.org/packages/07/b2/439245b4b9df262802d7345fc3c7376071707550614fc1c9830da1ee23a7/atlas_rag-0.0.4.post1.tar.gz",
    "platform": null,
"description": "# AutoSchemaKG: A Knowledge Graph Construction Framework with Schema Generation and Knowledge Graph Completion\n\nThis repository contains the implementation of AutoSchemaKG, a novel framework for automatic knowledge graph construction that combines schema generation via conceptualization. The framework is designed to address the challenges of constructing high-quality knowledge graphs from unstructured text.\n\nThis project uses the following paper and data:\n\n* **Paper:** [Read the paper](https://arxiv.org/abs/2505.23628)\n* **Full Data:** [Download the dataset](https://hkustconnect-my.sharepoint.com/:f:/g/personal/jbai_connect_ust_hk/EgJCqoU91KpAlSSOi6dzgccB6SCL4YBpsCyEtGiRBV4WNg) (onedrive)\n* **Neo4j CSV Dumps:** [Download the dataset](https://huggingface.co/datasets/AlexFanWei/AutoSchemaKG) (huggingface dataset)\n\n### Update\n- (05/07) Update with batch generation and refactor the codebase. Add PDF-md-json instruction. [See PDF support](#pdf-support)\n- (24/06) Add: ToG, Chinese KG construction (refer to example_scripts for KG construction with different language). Separate NV-embed-v2 transformers dependency.\n\n## AutoSchemaKG Overview\n\nAutoSchemaKG introduces a two-stage approach:\n1. **Knowledge Graph Triple Extraction**: Extract triples comprising entities and events from text by using LLMs \n2. **Schema Induction**: Automatically generate schema for the knowledge graph by using conceptualization and create semantic bridges between seemingly disparate information to enable zero-shot inferencing across domains\n\n\nThe framework achieves state-of-the-art performance on multiple benchmarks and demonstrates strong generalization capabilities across different domains.\n\n## ATLAS Knowledge Graphs\n\nATLAS (Automated Triple Linking And Schema induction) is a family of knowledge graphs created through the AutoSchemaKG framework, which enables fully autonomous knowledge graph construction without predefined schemas. 
Here's a summary of what ATLAS is and how it works:\n\n### Key Features of ATLAS Knowledge Graphs\n\n- **Scale**: Consists of 900+ million nodes connected by 5.9 billion edges\n- **Autonomous Construction**: Built without predefined schemas or manual intervention\n- **Three Variants**: ATLAS-Wiki (from Wikipedia), ATLAS-Pes2o (from academic papers), and ATLAS-CC (from Common Crawl)\n\n\n\n\n## Project Structure\n\n```\nAutoSchemaKG/\n\u251c\u2500\u2500 atlas_rag/ # Main package directory\n\u2502 \u251c\u2500\u2500 kg_construction/ # Knowledge graph construction modules\n\u2502 \u251c\u2500\u2500 llm_generator/ # Components for large language model generation\n\u2502 \u251c\u2500\u2500 retriever/ # Retrieval components for RAG\n\u2502 \u251c\u2500\u2500 utils/ # Utility functions for various tasks\n\u2502 \u2514\u2500\u2500 vectorstore/ # Components for managing vector storage and embeddings\n\u251c\u2500\u2500 example_data/ # Sample data for testing and examples\n\u251c\u2500\u2500 example_scripts/ # Example scripts for usage demonstrations\n\u251c\u2500\u2500 log/ # Log files for tracking processes\n\u251c\u2500\u2500 neo4j_api_host/ # API hosting for Neo4j\n\u251c\u2500\u2500 neo4j_scripts/ # Scripts for managing Neo4j databases\n\u251c\u2500\u2500 tests/ # Unit tests for the project\n\u251c\u2500\u2500 .gitignore # Git ignore file\n\u251c\u2500\u2500 README.md # Main documentation for the project\n\u251c\u2500\u2500 atlas_billion_kg_usage.ipynb # Example for hosting and RAG with ATLAS\n\u251c\u2500\u2500 atlas_full_pipeline.ipynb # Full pipeline for constructing knowledge graphs\n\u2514\u2500\u2500 atlas_multihopqa.ipynb # Example for benchmarking multi-hop QA datasets\n```\n\nThe project is organized into several key components:\n- `atlas_rag/`: Core package containing the main functionality\n- Evaluation directories for different aspects of the system\n- Database-related scripts and API hosting\n- Example notebooks demonstrating usage\n- Import and distribution directories for data management\n\n## Install atlas-rag through pip\n```bash\npip install atlas-rag\n```\nTo support NV-embed-v2, install the transformers package with the version constraint >=4.42.4,<=4.47.1 by running:\n```bash\npip install atlas-rag[nvembed]\n```\n\n### KG Construction with ATLAS\n\n```python\nfrom atlas_rag.kg_construction.triple_extraction import KnowledgeGraphExtractor\nfrom atlas_rag.kg_construction.triple_config import ProcessingConfig\nfrom atlas_rag.llm_generator import LLMGenerator\nfrom openai import OpenAI\nfrom transformers import pipeline\n# client = OpenAI(api_key='<your_api_key>',base_url=\"<your_api_base_url>\") \n# model_name = \"meta-llama/llama-3.1-8b-instruct\"\n\nmodel_name = \"meta-llama/Llama-3.1-8B-Instruct\"\nclient = pipeline(\n \"text-generation\",\n model=model_name,\n device_map=\"auto\",\n)\nkeyword = 'Dulce'\noutput_directory = f'import/{keyword}'\ntriple_generator = LLMGenerator(client, model_name=model_name)\nkg_extraction_config = ProcessingConfig(\n model_path=model_name,\n data_directory=\"example_data\",\n filename_pattern=filename_pattern,\n batch_size_triple=3, # batch size for triple extraction\n batch_size_concept=16, # batch size for concept generation\n output_directory=f\"{output_directory}\",\n max_new_tokens=2048,\n max_workers=3,\n remove_doc_spaces=True, # For removing duplicated spaces in the document text\n)\nkg_extractor = KnowledgeGraphExtractor(model=triple_generator, config=kg_extraction_config)\n\n# Construct entity&event 
graph\nkg_extractor.run_extraction() # Involved LLM Generation\n# Convert Triples Json to CSV\nkg_extractor.convert_json_to_csv()\n# Concept Generation\nkg_extractor.generate_concept_csv(batch_size=64) # Involved LLM Generation\n# Create Concept CSV\nkg_extractor.create_concept_csv()\n# Convert csv to graphml for networkx\nkg_extractor.convert_to_graphml()\n```\n\n## Large Knowledge Graph Hosting and Retrieval Augmented Generation\n\nThis repository provides support for hosting and implementing Retrieval Augmented Generation (RAG) over our constructed knowledge graphs: `ATLAS-wiki`, `ATLAS-pes2o`, and `ATLAS-cc`. For detailed instructions on hosting and running these knowledge graphs, please refer to the `atlas_billion_kg_usage.ipynb` notebook. \n\n## Building New Knowledge Graphs and Implementing RAG\n\nThe `atlas_full_pipeline.ipynb` notebook demonstrates how to:\n- Build new knowledge graphs using AutoschemaKG\n- Implement Retrieval Augmented Generation on your custom knowledge graphs\n\n\n## Multi-hop Question Answering Evaluation\n\nTo replicate our multi-hop question answering evaluation results on benchmark datasets:\n- `MuSiQue`\n- `HotpotQA` \n- `2WikiMultiHopQA`\n\nPlease follow the instructions in the `atlas_multihopqa.ipynb` notebook, which contains all necessary code and configuration details.\n\n## General Evaluation\n\n\nThe framework includes comprehensive evaluation metrics across three dimensions:\n- Knowledge Graph Quality (`EvaluateKGC`)\n- Factual Consistency on FELM (`EvaluateFactuality`)\n- General Performance on MMLU (`EValuateGeneralTask`)\n\nDetailed evaluation procedures can be found in the respective evaluation directories.\n\n## PDF Support\nCreator: [swgj](https://github.com/Swgj)\n\nDue to the version requirement of marker-pdf, we suggest you to create a new conda environment for PDF-to-Markdown Transformation.\n\nGit clone PDF transform repo.\n``` bash\ngit clone https://github.com/Swgj/pdf_process\ncd pdf_process\nconda create --name pdf-marker pip python=3.10\nconda activate pdf-marker\npip install 'marker-pdf[full]'\npip install google-genai\n```\nModify the config.yaml file.\n``` yaml\nprocessing_config:\n llm_service: \"marker.services.azure_openai.AzureOpenAIService\" # to use Azure OpenAI Service. 
To use default Gemini server, you can comment this line\n other_config:\n use_llm: true\n extract_images: false # false means not to extract images and use LLM for text description; true means extract images but not generate descriptions\n page_range: null # null means process all pages, or use List[int] format like [9, 10, 11, 12]\n max_concurrency: 2 # maximum number of concurrent processes\n #Azure OpenAI API configuration\n azure_endpoint: <your endpoint>\n azure_api_version: \"2024-10-21\"\n deployment_name: \"gpt-4o\"\n\n# API configuration\napi:\n # api_key_env: \"GEMINI_API_KEY\" # Uncomment this line for Gemini API key\n api_key_env: \"AZURE_API_KEY\"\n\n# Input path configuration - can be a file or folder path\ninput:\n # Supports relative and absolute paths\n path: \"test_data\" # Can be a single file path or folder path\n # path: \"data/Apple_Environmental_Progress_Report_2024.pdf\" # Example of a single file\n \n # If it's a folder, you can set file filtering conditions\n file_filters:\n extensions: [\".pdf\"] # Only process PDF files\n recursive: true # Whether to recursively process subfolders\n exclude_patterns: # Exclude files that match these patterns\n - \"*temp*\"\n - \"*~*\"\n\n# Output configuration\noutput:\n base_dir: \"md_output\" # Output directory\n create_subdirs: true # Whether to create a subdirectory for each input file\n format: \"md\" # Output format (md, txt)\n \n# Logging configuration\nlogging:\n level: \"INFO\" # DEBUG, INFO, WARNING, ERROR\n show_progress: true\n```\n\nRun\n```bash\nbash run.sh\n```\nCheers! You have a Markdown version of your PDF file. You can now change directories back to your parent directory and run the command below to obtain your JSON file for further Atlas-RAG KG construction.\n```\npython -m atlas_rag.kg_construction.utils.md_processing.markdown_to_json --input example_data/md_data --output example_data\n```\n\n## Citation\n\nIf you use this code in your research, please cite our paper:\n\n```\n@misc{bai2025autoschemakgautonomousknowledgegraph,\n title={AutoSchemaKG: Autonomous Knowledge Graph Construction through Dynamic Schema Induction from Web-Scale Corpora}, \n author={Jiaxin Bai and Wei Fan and Qi Hu and Qing Zong and Chunyang Li and Hong Ting Tsang and Hongyu Luo and Yauwai Yim and Haoyu Huang and Xiao Zhou and Feng Qin and Tianshi Zheng and Xi Peng and Xin Yao and Huiwen Yang and Leijie Wu and Yi Ji and Gong Zhang and Renhai Chen and Yangqiu Song},\n year={2025},\n eprint={2505.23628},\n archivePrefix={arXiv},\n primaryClass={cs.CL},\n url={https://arxiv.org/abs/2505.23628}, \n}\n```\n\n\n\n\n\n## Contact\n\nJiaxin Bai: jbai@connect.ust.hk \n\nDennis Hong Ting TSANG : httsangaj@connect.ust.hk\n\nHaoyu Huang: haoyuhuang@link.cuhk.edu.hk\n\n",
"bugtrack_url": null,
"license": null,
"summary": "AutoSchemaKG, a framework for fully autonomous knowledge graph construction that eliminates the need for predefined schemas.",
"version": "0.0.4.post1",
"project_urls": null,
"split_keywords": [
"retrieval-augmented-generation",
" knowledge-graph",
" nlp"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "7ec6f0cb0e483ca2e07c1efa56e322759a4665997214d2f297f047f862d4851a",
"md5": "740e864c41ef3572161f1526f3bcc624",
"sha256": "26ef87849b3b94e1634338fefca55efd3197a2e86fb7c5db9a8b35c04ba59c33"
},
"downloads": -1,
"filename": "atlas_rag-0.0.4.post1-py3-none-any.whl",
"has_sig": false,
"md5_digest": "740e864c41ef3572161f1526f3bcc624",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 92320,
"upload_time": "2025-08-03T13:57:16",
"upload_time_iso_8601": "2025-08-03T13:57:16.642747Z",
"url": "https://files.pythonhosted.org/packages/7e/c6/f0cb0e483ca2e07c1efa56e322759a4665997214d2f297f047f862d4851a/atlas_rag-0.0.4.post1-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "07b2439245b4b9df262802d7345fc3c7376071707550614fc1c9830da1ee23a7",
"md5": "a490419dc5c473e3b57d0c47704c230e",
"sha256": "e3afceaee59f9da56c07f8b4b476ef4438f399d3477501994bba6e5255619199"
},
"downloads": -1,
"filename": "atlas_rag-0.0.4.post1.tar.gz",
"has_sig": false,
"md5_digest": "a490419dc5c473e3b57d0c47704c230e",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 27524093,
"upload_time": "2025-08-03T13:57:55",
"upload_time_iso_8601": "2025-08-03T13:57:55.484706Z",
"url": "https://files.pythonhosted.org/packages/07/b2/439245b4b9df262802d7345fc3c7376071707550614fc1c9830da1ee23a7/atlas_rag-0.0.4.post1.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-08-03 13:57:55",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "atlas-rag"
}