chalkpy

Name: chalkpy
Version: 2.36.3
Summary: Python SDK for Chalk
Author: Chalk AI, Inc.
Home page: None
Maintainer: None
Docs URL: None
License: None
Requires Python: <3.13,>3.8
Upload time: 2024-04-24 18:33:11
            # Chalk

Chalk enables innovative machine learning teams to focus on building
the unique products and models that make their business stand out.
Behind the scenes Chalk seamlessly handles data infrastructure with
a best-in-class developer experience. Here’s how it works –

---

## Develop

Chalk makes it simple to develop feature pipelines for machine
learning. Define Python functions using the libraries and tools you're
familiar with instead of specialized DSLs. Chalk then orchestrates
your functions into pipelines that execute in parallel on a Rust-based
engine and coordinates the infrastructure required to compute
features.

### Features

To get started, [define your features](/docs/features) with
[Pydantic](https://pydantic-docs.helpmanual.io/)-inspired Python classes.
You can define schemas, specify relationships, and add metadata
to help your team share and re-use work.

```py
from datetime import date
from typing import Optional

from chalk.features import DataFrame, features, has_many

@features
class User:
    id: int
    full_name: str
    nickname: Optional[str]
    email: Optional[str]
    birthday: date
    credit_score: float
    datawarehouse_feature: float

    transactions: DataFrame[Transaction] = has_many(lambda: Transaction.user_id == User.id)
```
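
The `has_many` join above references a `Transaction` feature class that isn't
shown. A minimal companion definition might look like the sketch below; only
`user_id` is implied by the join, and the other fields are illustrative:

```py
@features
class Transaction:
    id: int
    user_id: int   # joined to User.id by the has_many relationship above
    amount: float  # illustrative field, not from the original example
    memo: str      # illustrative field, not from the original example
```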

### Resolvers

Next, tell Chalk how to compute your features.
Chalk ingests data from your existing data stores,
and lets you use Python to compute features with
[feature resolvers](/docs/resolver-overview).
Feature resolvers are declared with the decorators `@online` and
`@offline`, and can depend on the outputs of other feature resolvers.

Resolvers make it easy to rapidly integrate a wide variety of data
sources, join them together, and use them in your model.

#### SQL

```python
pg = PostgreSQLSource()

@online
def get_user(uid: User.id) -> Features[User.full_name, User.email]:
    return pg.query_string(
        "select email, full_name from users where id=:id",
        args=dict(id=uid)
    ).one()
```

#### REST

```python
import requests

@online
def get_socure_score(uid: User.id) -> Features[User.socure_score]:
    return (
        requests.get("https://api.socure.com", json={
            "id": uid
        }).json()['socure_score']
    )
```

---

## Execute

Once you've defined your features and resolvers, Chalk orchestrates
them into flexible pipelines that make training and executing models easy.

Chalk has built-in support for feature engineering workflows --
no need to manage Airflow or orchestrate complicated streaming flows.
You can execute resolver pipelines with declarative caching,
ingest batch data on a schedule, and easily make slow sources
available online for low-latency serving.

### Caching

Many data sources (like vendor APIs) are too slow for online use cases
and/or charge a high dollar cost per call. Chalk lets you optimize latency
and cost by defining declarative caching policies that are integrated
throughout the system. You no longer have to manage Redis, Memcached, or DynamoDB,
or spend time tuning cache-warming pipelines.

Add a caching policy with one line of code in your feature definition:

```diff
 @features
 class ExternalBankAccount:
-   balance: int
+   balance: int = feature(max_staleness="1d")
```

Optionally warm feature caches by executing resolvers on a schedule:

```py
@online(cron="1d")
def fn(id: User.id) -> User.credit_score:
    return redshift.query(...).all()
```

Or override staleness tolerances at query time when you need fresher
data for your models:

```py
chalk.query(
    ...,
    outputs=[User.fraud_score],
    max_staleness={User.fraud_score: "1m"},
)
```

### Batch ETL ingestion

Chalk also makes it simple to generate training sets from data warehouse
sources -- join data from services like S3, Redshift, BQ, Snowflake
(or other custom sources) with historical features computed online.
Specify a cron schedule on an `@offline` resolver and Chalk automatically ingests
data with support for incremental reads:

```py
@offline(cron="1h")
def fn() -> Features[User.id, User.datawarehouse_feature]:
    return redshift.query(...).incremental()
```

Chalk makes this data available for point-in-time-correct dataset
generation for data science use-cases. Every pipeline has built-in
monitoring and alerting to ensure data quality and timeliness.

### Reverse ETL

When your model needs to use features that are canonically stored in
a high-latency data source (like a data warehouse), Chalk's Reverse
ETL support makes it simple to bring those features online and serve
them quickly.

Add a single line of code to an `offline` resolver, and Chalk constructs
a managed reverse ETL pipeline for that data source:

```py
@offline(offline_to_online_etl="5m")
```
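
For instance, adding that argument to the batch resolver from the previous
section might look like the sketch below; combining `cron` and
`offline_to_online_etl` in one decorator is an assumption here, not something
the snippet above states:

```py
@offline(cron="1h", offline_to_online_etl="5m")
def fn() -> Features[User.id, User.datawarehouse_feature]:
    return redshift.query(...).incremental()
```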

Now data from slow offline data sources is automatically available for
low-latency online serving.

---

## Deploy + query

Once you've defined your pipelines, you can rapidly deploy them to
production with Chalk's CLI:

```bash
chalk apply
```

This creates a deployment of your project, which is served at a unique
preview URL. You can promote this deployment to production, or
perform QA workflows on your preview environment to make sure that
your Chalk deployment performs as expected.

Once you promote your deployment to production, Chalk makes features
available for low-latency [online inference](/docs/query-basics) and
[offline training](/docs/training-client). Significantly, Chalk uses
the exact same source code to serve temporally-consistent training
sets to data scientists and live feature values to models. This re-use
ensures that feature values from online and offline contexts match and
dramatically cuts development time.

### Online inference

Chalk's online store & feature computation engine make it easy to query
features with ultra-low latency, so you can use your feature pipelines
to serve online inference use cases.

Integrating Chalk with your production application takes minutes via
Chalk's simple REST API.

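As a rough sketch, an online query from Python might look like the following;
the `ChalkClient` import comes from chalkpy, but the exact parameter names are
modeled on the offline example later in this README rather than quoted from the
API reference:

```python
from chalk.client import ChalkClient

# Sketch of an online feature query; parameter names mirror the offline
# example below and should be checked against the Chalk docs.
client = ChalkClient()
result = client.query(
    input={User.id: 1234},
    output=[User.fraud_score, User.credit_score],
)
```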

Features computed to serve online requests are also replicated to Chalk's
offline store for historical performance tracking and training set generation.

### Offline training

Data scientists can use Chalk's Jupyter integration to create datasets
and train models. Datasets are stored and tracked so that they can be
re-used by other modelers, and so that model provenance is tracked for
audit and reproducibility.

```python
X = ChalkClient().offline_query(
    input=labels[[User.uid, timestamp]],
    output=[
        User.returned_transactions_last_60,
        User.user_account_name_match_score,
        User.socure_score,
        User.identity.has_verified_phone,
        User.identity.is_voip_phone,
        User.identity.account_age_days,
        User.identity.email_age,
    ],
)
```

Chalk datasets are always "temporally consistent."
This means that you can provide labels with different past timestamps and
get historical features that represent what your application would have
retrieved online at those past times. Temporal consistency ensures that
your model training doesn't mix "future" and "past" data.
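
As an illustration of what that `labels` input can look like, the sketch below
builds a frame of entity ids paired with past observation timestamps; the
column names and the use of pandas here are assumptions for illustration, not
a required format:

```python
import pandas as pd

# Each row asks: "what would Chalk have served for this user at this past
# moment?" -- the timestamps drive point-in-time-correct feature retrieval.
labels = pd.DataFrame({
    "user.uid": [101, 102, 103],
    "timestamp": pd.to_datetime([
        "2024-01-15T12:00:00Z",
        "2024-02-01T08:30:00Z",
        "2024-03-10T17:45:00Z",
    ]),
})
```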

            
