
[Documentation](https://rikai.readthedocs.io/)
[Javadoc](https://javadoc.io/doc/ai.eto/rikai_2.12)



Join the community:
[Gitter](https://gitter.im/rikaidev/community?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
> :heavy_exclamation_mark: This repository is still experimental. No API-compatibility is guaranteed.
# Rikai
Rikai is a framework designed specifically for AI workflows built around large-scale unstructured datasets
(e.g., images, videos, and, in the future, sensor data, text, and more).
Through every stage of the AI modeling workflow,
Rikai strives to offer a great developer experience when working with real-world AI datasets.
The quality of an AI dataset can make or break an AI project, but tooling for AI data is sorely lacking in ergonomics.
As a result, practitioners must spend most of their time and effort wrestling with their data instead of innovating on the models and use cases.
Rikai alleviates the pain that AI practitioners experience on a daily basis dealing with the myriad of tedious data tasks,
so they can focus again on model-building and problem solving.
To start trying Rikai right away, check out the [Quickstart Guide](https://rikai.readthedocs.io/en/latest/quickstart.html).
## Main Features
### Data format
The core of Rikai is a data format ("rikai format") based on [Apache Parquet](https://parquet.apache.org/).
Rikai augments Parquet with a rich collection of semantic types designed specifically for unstructured data and annotations.
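To illustrate the idea (this is a standard-library sketch, not Rikai's actual implementation), a semantic type can be thought of as a thin wrapper that serializes to plain values fitting a Parquet struct column and deserializes back — here with a hypothetical `Box2d`-like class:

```python
from dataclasses import dataclass, asdict


@dataclass
class Box2d:
    # Hypothetical stand-in for a Rikai semantic type: a 2D bounding box.
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def to_row(self) -> dict:
        # Serialize to plain values that fit a Parquet struct column.
        return asdict(self)

    @classmethod
    def from_row(cls, row: dict) -> "Box2d":
        # Deserialize the plain storage values back into the semantic type.
        return cls(**row)


box = Box2d(xmin=1.0, ymin=2.0, xmax=3.0, ymax=4.0)
assert Box2d.from_row(box.to_row()) == box  # round-trips losslessly
```

Because the storage representation is plain Parquet, the data stays readable by any Parquet-aware tool even without the semantic-type layer.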
### Integrations
Rikai comes with an extensive set of I/O connectors. For ETL, Rikai can consume popular formats like ROS bags and COCO.
For analysis, it's easy to read Rikai data into pandas or Spark DataFrames (Rikai handles serde for the semantic types).
And for training, Rikai allows direct creation of PyTorch/TensorFlow datasets without manual conversion.
### SQL-ML Engine
Rikai extends Spark SQL with ML capabilities, allowing users to analyze Rikai datasets with their own models via SQL
("bring your own model").
### Visualization
Carefully crafted data visualizations built on the semantic types, especially in Jupyter notebooks,
help you visualize and inspect your AI data without having to remember complicated raw-image manipulations.
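To make the benefit concrete: without semantic-type-aware visualization, overlaying even one bounding box on an image means manual pixel bookkeeping. A standard-library illustration of that manual work (not Rikai code), drawing a box outline on a tiny character raster:

```python
def draw_box_outline(width, height, xmin, ymin, xmax, ymax):
    """Draw a rectangle outline on a blank raster ('.' = background, '#' = box)."""
    grid = [["." for _ in range(width)] for _ in range(height)]
    for x in range(xmin, xmax + 1):  # top and bottom edges
        grid[ymin][x] = "#"
        grid[ymax][x] = "#"
    for y in range(ymin, ymax + 1):  # left and right edges
        grid[y][xmin] = "#"
        grid[y][xmax] = "#"
    return ["".join(row) for row in grid]


for line in draw_box_outline(8, 5, 1, 1, 6, 3):
    print(line)
# Prints:
# ........
# .######.
# .#....#.
# .######.
# ........
```

With semantic types, the annotation already knows it is a box on an image, so the notebook can render this overlay for you.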
## Roadmap
1. Improved video support
2. Text / sensors / geospatial support
3. Versioning support built into the dataset
4. Better Rikai UDT-support
5. Declarative annotation API (think vega-lite for annotating images/videos)
6. Integrations into dbt and BI tools
## Example
```python
from pyspark.sql import Row
from pyspark.ml.linalg import DenseMatrix
from rikai.types import Image, Box2d
from rikai.numpy import wrap
import numpy as np

df = spark.createDataFrame(
    [
        {
            "id": 1,
            "mat": DenseMatrix(2, 2, range(4)),
            "image": Image("s3://foo/bar/1.png"),
            "annotations": [
                Row(
                    label="cat",
                    mask=wrap(np.random.rand(256, 256)),
                    bbox=Box2d(xmin=1.0, ymin=2.0, xmax=3.0, ymax=4.0),
                )
            ],
        }
    ]
)

df.write.format("rikai").save("s3://path/to/features")
```
Train on the dataset in PyTorch:
```python
from torch.utils.data import DataLoader
from torchvision import transforms as T
from rikai.pytorch.vision import Dataset

transform = T.Compose([
    T.Resize(640),
    T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])

dataset = Dataset(
    "s3://path/to/features",
    image_column="image",
    transform=transform,
)
loader = DataLoader(
    dataset,
    batch_size=32,
    num_workers=8,
)

# Assumes `model` is a torch.nn.Module already loaded onto the GPU.
for batch in loader:
    predicts = model(batch.to("cuda"))
```
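The `T.Normalize` step above standardizes each channel with the usual ImageNet means and standard deviations; per channel it computes `out = (x - mean) / std`. A plain-Python sketch of that arithmetic (illustrative only; torchvision applies it per pixel over whole tensors):

```python
def normalize(pixel, mean, std):
    # Per-channel standardization: (x - mean) / std for each channel value.
    return tuple((x - m) / s for x, m, s in zip(pixel, mean, std))


mean = (0.485, 0.456, 0.406)  # ImageNet channel means
std = (0.229, 0.224, 0.225)   # ImageNet channel standard deviations

# A pixel exactly at the channel means maps to zero in every channel.
print(normalize((0.485, 0.456, 0.406), mean, std))  # → (0.0, 0.0, 0.0)
```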
Using an ML model in Spark SQL (**experimental**):
```sql
CREATE MODEL yolo5
OPTIONS (min_confidence=0.3, device="gpu", batch_size=32)
USING "s3://bucket/to/yolo5_spec.yaml";
SELECT id, ML_PREDICT(yolo5, image) FROM my_dataset
WHERE split = "train" LIMIT 100;
```
Rikai can use MLflow as its model registry, which lets you automatically pick up the latest
model version from the MLflow model registry. Supported model flavors:
+ PyTorch (pytorch)
+ Tensorflow (tensorflow)
+ Scikit-learn (sklearn)
```sql
CREATE MODEL yolo5
OPTIONS (min_confidence=0.3, device="gpu", batch_size=32)
USING "mlflow:///yolo5_model/";
SELECT id, ML_PREDICT(yolo5, image) FROM my_dataset
WHERE split = "train" LIMIT 100;
```
For more details on the model spec, see the [SQL-ML documentation](https://rikai.readthedocs.io/en/latest/sqlml.html).
## Getting Started
Rikai is currently maintained for <a name="VersionMatrix"></a>Scala 2.12 and Python 3.7, 3.8, and 3.9.
There are multiple ways to install Rikai:
1. Try it using the included [Dockerfile](#Docker).
2. Install via pip: `pip install rikai`, with
   [extras for gcp, pytorch/tf, and others](#Extras).
3. Install from [source](#Source).
Note: if you want to use Rikai with your own PySpark installation, please consult the
[Rikai documentation](https://rikai.readthedocs.io/en/latest/spark.html) for tips.
### <a name="Docker"></a>Docker
The included Dockerfile creates a standalone demo image with
Jupyter, PyTorch, Spark, and Rikai preinstalled, along with notebooks that let you
explore the capabilities of the Rikai feature store.
To build and run the docker image from the current directory:
```bash
# Clone the repo
git clone git@github.com:eto-ai/rikai rikai
# Build the docker image
docker build --tag rikai --network host .
# Run the image
docker run -p 0.0.0.0:8888:8888/tcp rikai:latest jupyter lab --ip 0.0.0.0 --port 8888
```
If successful, the console should then print out a clickable link to JupyterLab. You can also
open a browser tab and go to `localhost:8888`.
### <a name="Extras"></a>Install from pypi
The base Rikai library can be installed with just `pip install rikai`. Dependencies for
PyTorch support (pytorch and torchvision) and Jupyter support (matplotlib and jupyterlab) are
available as optional extras. Many open-source datasets also use YouTube videos, so pafy and
youtube-dl are offered as optional extras as well.
For example, if you want to use PyTorch in Jupyter to train models on Rikai datasets in S3
containing YouTube videos, you would run:
`pip install rikai[pytorch,jupyter,youtube]`
If you're not sure what you need and don't mind installing some extra dependencies, you can
simply install everything:
`pip install rikai[all]`
### <a name="Source"></a>Install from source
To build from source you'll need Python as well as Scala with sbt installed:
```bash
# Clone the repo
git clone git@github.com:eto-ai/rikai rikai
# Build the jar
sbt publishLocal
# Install python package
cd python
pip install -e . # pip install -e .[all] to install all optional extras (see "Install from pypi")
```
### Utilities
[pre-commit](https://pre-commit.com/) helps keep your code formatting consistent with the repository.
It runs formatters and other checks on your local machine before CI forces you to fix them.
To enable it, install `pre-commit` and set up the hook:
```bash
pip install pre-commit
pre-commit install  # run in your local development directory
# pre-commit installed at .git/hooks/pre-commit
```
Uninstalling it is just as easy:
```bash
pre-commit uninstall
```