# kfe
Cross-platform File Explorer and Search Engine for Multimedia.
# Features
- Full privacy. Data never leaves your machine.
- Text query-based search that accounts for:
  - visual aspects of images and videos based on [CLIP](https://huggingface.co/docs/transformers/en/model_doc/clip) embeddings,
  - transcriptions that are automatically generated for audio and video files,
  - text that is automatically extracted from images,
  - descriptions that you can manually write using the GUI.
- Similarity search capabilities:
  - find images or videos similar to one of the images from your directory,
  - find images similar to any image pasted from the clipboard,
  - find files with semantically similar metadata (descriptions, transcriptions, or text extracted from images).
- Browser GUI that lets you easily use all of those search options, browse files, and edit file metadata.
- Standalone program that depends only on ffmpeg. The project includes all the necessary database and search features.
- Works offline, with or without a GPU.
- Works on Mac, Linux and Windows.
- Supports English and Polish.
## Intended use cases
The application was designed for directories containing up to 10k images, short (<5 minutes) videos, or audio files. File names are assumed to be non-descriptive. Examples of such directories are:
- phone gallery or audio recordings copied to PC,
- data dumps from messaging apps like Messenger (Messenger's built-in search works only for text messages, but it lets you download all media, which can then be searched with this app),
- saved memes.
# YouTube Demo
<div align="center">
<a href="https://www.youtube.com/watch?v=LSe0QB6dzEY">
<img src="https://img.youtube.com/vi/LSe0QB6dzEY/0.jpg" alt="Project Demo" />
</a>
<p><a href="https://www.youtube.com/watch?v=LSe0QB6dzEY">Watch the demo on YouTube</a></p>
</div>
# Installation
1. Make sure that you have `python>=3.10` and `ffmpeg` with `ffprobe` installed:
   - For ffmpeg installation, see: https://ffmpeg.org/download.html.
   - To verify the installation, open a terminal and run `ffmpeg -version` and `ffprobe -version`; both should print version information.
2. Install the project:
```sh
pip install kfe
```
# Running
1. In a console, run:
```sh
kfe
```
If you get an error that the default port `8000` is taken, you can change it with `kfe --port <other port>`. For more options run `kfe --help`.
2. Open `http://localhost:8000` in the browser.
3. Follow the instructions in the GUI. Analyzing a directory can take some time, but subsequent searches will be fast. All analysis results are stored on your disk and won't need to be recomputed. Adding the first directory might be especially slow since all AI models must be downloaded; once they are downloaded, the application works offline.
If you see CUDA out-of-memory errors, you can still run the application on the CPU with `kfe --cpu`. The transcription model is the most resource-demanding; see the next section for instructions on how to change it.
If you are on Linux and want to run the application on system startup, you can clone the project and run `./systemd/install_with_systemd.sh`.
# Models
The application uses the following models/libraries for English directories:
- Transcriptions - for each audio and video file, a transcription is generated using the [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) model if you have CUDA or Apple silicon, otherwise [openai/whisper-base](https://huggingface.co/openai/whisper-base). This model requires more hardware resources than the other models; you might want to change it, see the next section.
- OCR - for each image, the application attempts to extract text using the [easyocr](https://github.com/JaidedAI/EasyOCR) library.
- CLIP embeddings - for each image and video (from which multiple frames are extracted), the application generates CLIP embeddings using the [openai/clip](https://huggingface.co/openai/clip-vit-base-patch32) model. This enables searching images with arbitrary text, without the need for any annotations.
- Text embeddings - the application generates embeddings for each type of text that can be searched (descriptions that you can write manually, transcriptions, and OCR results) using the [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5) model.
- Text lemmatization - each type of text and every user query is preprocessed using the [spacy/en_core_web_trf](https://spacy.io/models/en#en_core_web_trf) lemmatization model for lexical search purposes. If you are unfamiliar with what this means, the TL;DR is that different forms of the same word (like `work = working = worked`) are treated the same in this type of search; see the sketch below.
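For illustration, here is a minimal sketch of what lemmatization does with spaCy. The snippet is not part of kfe; it assumes the `en_core_web_trf` model has been downloaded (e.g. with `python -m spacy download en_core_web_trf`, which also pulls in `spacy-transformers`):
```py
import spacy

# Load the transformer-based English pipeline, which includes a lemmatizer.
nlp = spacy.load("en_core_web_trf")

doc = nlp("She worked late while the others were still working")
print([token.lemma_ for token in doc])
# e.g. ['she', 'work', 'late', 'while', 'the', 'other', 'be', 'still', 'work']
```
Both "worked" and "working" collapse to `work`, so a query containing either form matches documents containing any of them.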
## Changing models
The application uses models from Hugging Face; for various reasons you might want to change them. The transcription model is the most resource-demanding and can be changed with `kfe --transcription-model <huggingface model id>`, where the model id could be, for example, `openai/whisper-small`. See `kfe --help` for more info.
Currently, other models can be changed only by modifying the source code. To do that, see the [backend/kfe/dependencies.py](backend/kfe/dependencies.py) file and adjust it accordingly. If you change an embedding model, make sure to remove the `.embeddings` directory so that embeddings are recreated.
You might also want to use paid models or models hosted in the cloud, such as OpenAI Whisper through the API. There is no support (and no plans) for that; you would need to reimplement the transcriber interface in [backend/kfe/features/transcriber.py](backend/kfe/features/transcriber.py) to achieve it.
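For a rough idea of what that would involve, here is a purely hypothetical sketch of delegating transcription to the OpenAI API. The `transcribe` helper and its signature are illustrative only and do not match the actual interface in [backend/kfe/features/transcriber.py](backend/kfe/features/transcriber.py), which you would need to study and implement:
```py
from pathlib import Path

from openai import OpenAI  # assumes the `openai` package and an OPENAI_API_KEY env var

client = OpenAI()

def transcribe(audio_path: Path) -> str:
    # Hypothetical helper, not part of kfe; shown only to illustrate the idea.
    with open(audio_path, "rb") as audio_file:
        result = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
    return result.text
```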
# How does the application work?
The application allows you to search individual directories. You can use multiple directories, but no information is shared between them, and you cannot search all of them at once. When you register a directory using the GUI, the following things happen.
### Initialization
1. The application creates an SQLite database file in the directory, named `.kfe.db`. This database stores almost all the metadata about files in the selected directory, including descriptions, extracted transcriptions, lemmatization results, and more. You can see the SQL table format in [backend/kfe/persistence/model.py](backend/kfe/persistence/model.py).
2. The application scans the directory and adds every multimedia file to the database; subdirectories and other types of files are ignored.
3. For each file, the application extracts relevant text (OCR for images, transcriptions for videos and audio) and lemmatizes it. Results are written to the database so this work is done only once.
4. The application generates various types of embeddings and stores them in the `.embeddings` directory; there is a file with encoded embeddings for each original file in the directory. See the `Models` section above for an idea of what embeddings are generated.
5. The application loads the data into various search structures:
   - original and lemmatized text is split into words and added to a reverse index structure, which is a map of `word -> list of files in which the word appears` (sketched below),
   - embeddings are loaded into numpy matrices (different matrices for different types of embeddings).
6. The application generates thumbnails and saves them in the `.thumbnails` subdirectory of the selected folder.
7. The application begins to watch for directory changes, processing new files the same way as above and cleaning up after removed ones. Note that the GUI does NOT allow you to modify any files (nor does the application do so by itself); you must use your native file explorer for that.
If you restart the application and the directory was already initialized before, only steps 5-7 happen.
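As a minimal sketch of the reverse index idea from step 5 (illustrative only; the project's actual implementation differs in details such as scoring and persistence):
```py
from collections import defaultdict

# word -> set of files in which the word appears
index: dict[str, set[str]] = defaultdict(set)

def add_document(file_name: str, lemmatized_text: str) -> None:
    for word in lemmatized_text.split():
        index[word].add(file_name)

def candidates(query_words: list[str]) -> set[str]:
    # Files containing at least one query word; scoring (BM25) happens later.
    found: set[str] = set()
    for word in query_words:
        found |= index.get(word, set())
    return found

add_document("cat.mp4", "a cat be sit on a mat")
add_document("dog.jpg", "a dog be run")
print(candidates(["cat", "run"]))  # {'cat.mp4', 'dog.jpg'}
```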
### Query-time
At this stage, the directory is marked as ready for querying. When you enter a query without any `@` modifiers, the following things happen.
1. Lexical/keyword search:
   - the query is lemmatized and split into words,
   - the reverse index is queried to load files whose descriptions, OCRs, or transcriptions contain some of the words from the query,
   - files are scored according to the [BM25 metric](https://en.wikipedia.org/wiki/Okapi_BM25) (sketched after this list).
2. Semantic search:
   - the application generates a query embedding using the text embedding model,
   - matrices with pre-computed embeddings of descriptions, OCRs, and transcriptions are used to compute the cosine similarity between the query embedding and all of those embeddings.
3. CLIP search:
   - the application generates a query embedding using the text CLIP encoder,
   - matrices with pre-computed CLIP embeddings of images and videos are used to compute the cosine similarity between the query embedding and all of those embeddings.
4. Ordering results using a hybrid approach. A variation of [reciprocal rank fusion](https://plg.uwaterloo.ca/~gvcormac/cormacksigir09-rrf.pdf) is used to combine results from all of those search modes. The variation is a custom, problem-specific metric that attempts to weight the confidence of individual retrievers and not just the ranks (plain rank fusion is sketched after this list). See [backend/kfe/utils/search.py](backend/kfe/utils/search.py) for more details.
5. Ordered results with file thumbnails are returned to the UI, which presents them to the user.
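For illustration, here is a simplified version of the BM25 scoring from step 1, using the textbook formula with typical `k1`/`b` parameters (the project's own implementation may tune this differently):
```py
import math

def bm25_score(query_words: list[str], doc: list[str], docs: list[list[str]],
               k1: float = 1.5, b: float = 0.75) -> float:
    n_docs = len(docs)
    avg_len = sum(len(d) for d in docs) / n_docs
    score = 0.0
    for word in query_words:
        df = sum(1 for d in docs if word in d)                # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)  # rare words score higher
        tf = doc.count(word)                                  # term frequency
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avg_len))
    return score

docs = [["cat", "sit", "mat"], ["dog", "run"], ["cat", "cat", "play"]]
print(bm25_score(["cat"], docs[2], docs))  # docs[2] mentions "cat" twice
```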
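And a sketch of plain reciprocal rank fusion for step 4. As noted above, kfe uses a custom variation that also weights retriever confidence, so treat this only as the baseline idea:
```py
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    # Each ranking is a list of file names ordered from best to worst.
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, file_name in enumerate(ranking, start=1):
            scores[file_name] = scores.get(file_name, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

lexical = ["a.jpg", "b.mp4", "c.png"]
semantic = ["b.mp4", "d.jpg"]
clip = ["c.png", "b.mp4"]
print(reciprocal_rank_fusion([lexical, semantic, clip]))  # b.mp4 ranks first
```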
All of those structures and algorithms are written from scratch and included in the project. The application doesn't use tools such as Elasticsearch for lexical search or faiss for similarity search. The assumption is that directories are not massive: they can contain up to a few tens of thousands of files, not hundreds of thousands. For such use cases this approach should work seamlessly on consumer-grade machines (~200 ms of search latency on a PC with a 12th-gen i5 for a directory with ~5000 files, with latency dominated by embedding generation rather than by the search itself).
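As an illustration of why a dedicated vector index isn't needed at this scale, here is a sketch of brute-force cosine similarity over a numpy matrix (random data standing in for real embeddings; the project's code differs, but the 1024 dimension matches the text embedding shape shown further below):
```py
import numpy as np

rng = np.random.default_rng(0)

# One row per file, pre-computed once at initialization.
embeddings = rng.standard_normal((5000, 1024)).astype(np.float32)
embeddings /= np.linalg.norm(embeddings, axis=1, keepdims=True)  # L2-normalize once

query = rng.standard_normal(1024).astype(np.float32)
query /= np.linalg.norm(query)

similarities = embeddings @ query            # cosine similarity via dot products
top10 = np.argsort(similarities)[::-1][:10]  # indices of the 10 most similar files
print(top10)
```
For a few thousand files this is a single matrix-vector product, which takes milliseconds on a CPU.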
The application also lets you choose a search mode (e.g., search only transcriptions) and filter results; the GUI help in the top right corner enumerates all the options.
### Resource usage
The application loads models lazily (only when they are needed) and recycles them automatically. If the application is just running in the background, it doesn't consume any GPU memory. When the models required for querying are loaded, they use about 2GB of memory.
For initialization, a heavier transcription model is loaded if there are audio files. During directory initialization the application can consume up to 5GB of GPU memory. It works the same on a CPU but will likely be slower; you can pass the `--cpu` flag to force CPU usage.
Apart from that, the application requires ~1GB of RAM when idle and >2GB when in use (exact numbers depend on how many files you have; 2GB was measured for ~10k files). Add the GPU figures above to that if you are not using a GPU.
Storage: All models and dependencies require <10GB of disk space.
### Removing all data created or downloaded by the application
Models are stored in the `.kfe` directory in your home folder; the OCR model is stored in the `.EasyOCR` directory, also in the home folder. Apart from that, each registered directory contains `.embeddings` and `.thumbnails` folders and a `.kfe.db` file.
To back up generated or manually written metadata, it suffices to copy the `.kfe.db` file.
# Programmatic access to the data and API
Metadata is stored in an SQLite database; you can access it with any SQLite library or tool. For example, using the `sqlite3` tool on Linux:
```sh
sqlite3 .kfe.db
SELECT * FROM files;
```
Alternatively, if you want to reuse some of the utilities from this project:
```py
# Note: top-level await requires an async context, e.g. the `python -m asyncio` REPL.
from pathlib import Path

from kfe.persistence.db import Database
from kfe.persistence.file_metadata_repository import FileMetadataRepository

root_dir = Path('/todo')  # directory where you have .kfe.db

files_db = Database(root_dir, log_sql=False)
await files_db.init_db()

async with files_db.session() as session:
    repo = FileMetadataRepository(session)
    files = await repo.load_all_files()
    for f in files:
        print(f'{f.name}: {f.transcript}')
```
To decode generated embeddings of a file:
```py
from pathlib import Path

from kfe.persistence.embeddings import EmbeddingPersistor

root_dir = Path('/todo')  # directory where you have .kfe.db
file_name = 'todo.mp4'    # file name inside this directory

embedding_persistor = EmbeddingPersistor(root_dir)
embeddings = embedding_persistor.load_without_consistency_check(file_name)

print(embeddings.__dict__.keys())  # ['description', 'ocr_text', 'transcription_text', 'clip_image', 'clip_video']
print(embeddings.transcription_text.embedding.shape)  # (1024,)
```
To see all endpoints open http://localhost:8000/docs. To perform a search you can, for example, run:
```sh
curl -X POST "http://localhost:8000/load/search?offset=0&limit=10" \
-H "X-Directory: NAME-OF-YOUR-DIRECTORY" \
-H "Content-Type: application/json" \
-d '{"query": "YOUR QUERY"}'
```