| Name | Version | Summary | Date |
|------|---------|---------|------|
| maidr | 0.4.0 | Multimodal Access and Interactive Data Representations | 2024-06-16 20:43:04 |
| stark-qa | 0.1.0 | Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases | 2024-06-13 06:50:03 |
| ammico-lavis | 1.0.2.3 | LAVIS - A One-stop Library for Language-Vision Intelligence | 2024-06-12 09:34:56 |
| jetson-examples | 0.1.4 | Running Gen AI models and applications on NVIDIA Jetson devices with one-line command | 2024-06-11 02:30:37 |
| cornac | 2.2.1 | A Comparative Framework for Multimodal Recommender Systems | 2024-05-24 10:22:24 |
| lavis-gml | 1.0.2.post4 | LAVIS - A One-stop Library for Language-Vision Intelligence | 2024-05-15 21:24:27 |
| hume | 0.5.1 | Hume AI Python SDK | 2024-05-02 17:04:04 |
| mexca | 1.0.4 | Emotion expression capture from multiple modalities | 2024-05-01 12:27:38 |
| gradio-awsbr-mmchatbot | 0.0.4 | This component enables multi-modal input for the Anthropic Claude v3 suite of models available from Amazon Bedrock | 2024-04-16 02:58:17 |
| gradio-awsBedrock-multimodalChatbot | 0.0.1 | This component enables multi-modal input for the Anthropic Claude v3 suite of models available from Amazon Bedrock | 2024-04-16 01:14:33 |
| tinymm | 0.10 | A simple and 'tiny' implementation of many multimodal models | 2024-04-04 23:25:30 |
| exordium | 1.2.5 | Collection of utility tools and deep learning methods | 2024-04-02 15:24:30 |