azure-monitor-query

Name: azure-monitor-query
Version: 2.0.0
Home page: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk
Summary: Microsoft Corporation Azure Monitor Query Client Library for Python
Upload time: 2025-07-30 22:23:41
Author: Microsoft Corporation
Requires Python: >=3.9
License: MIT License
Keywords: azure, azure sdk
# Azure Monitor Query client library for Python

The Azure Monitor Query client library is used to execute read-only queries against [Azure Monitor][azure_monitor_overview]'s Logs data platform.

- [Logs](https://learn.microsoft.com/azure/azure-monitor/logs/data-platform-logs) - Collects and organizes log and performance data from monitored resources. Data from different sources such as platform logs from Azure services, log and performance data from virtual machines agents, and usage and performance data from apps can be consolidated into a single [Azure Log Analytics workspace](https://learn.microsoft.com/azure/azure-monitor/logs/data-platform-logs#log-analytics-and-workspaces). The various data types can be analyzed together using the [Kusto Query Language][kusto_query_language].

> **Important**: As of version 2.0.0, `MetricsClient` and `MetricsQueryClient` have been removed from the `azure-monitor-query` package. For metrics querying capabilities, please use the separate [`azure-monitor-querymetrics`](https://pypi.org/project/azure-monitor-querymetrics/) package which provides `MetricsClient`, or the [`azure-mgmt-monitor`](https://pypi.org/project/azure-mgmt-monitor/) package. For more details, see the [migration guide](https://aka.ms/azsdk/python/monitor/query/migration).

**Resources:**

- [Source code][source]
- [Package (PyPI)][package]
- [Package (Conda)](https://anaconda.org/microsoft/azure-monitor-query/)
- [API reference documentation][python-query-ref-docs]
- [Service documentation][azure_monitor_overview]
- [Samples][samples]
- [Change log][changelog]

## Getting started

### Prerequisites

- Python 3.9 or later
- An [Azure subscription][azure_subscription]
- To query Logs, you need one of the following:
  - An [Azure Log Analytics workspace][azure_monitor_create_using_portal]
  - An Azure resource of any kind (Storage Account, Key Vault, Cosmos DB, etc.)

### Install the package

Install the Azure Monitor Query client library for Python with [pip][pip]:

```bash
pip install azure-monitor-query
```

### Create the client

An authenticated client is required to query Logs. The library includes both synchronous and asynchronous forms of the client. To authenticate, create an instance of a token credential. Use that instance when creating a `LogsQueryClient`. The following examples use `DefaultAzureCredential` from the [azure-identity](https://pypi.org/project/azure-identity/) package.

> **Note**: For Metrics querying capabilities, please use the separate `azure-monitor-querymetrics` package which provides `MetricsClient`, or the `azure-mgmt-monitor` package.

#### Synchronous clients

Consider the following example, which creates a synchronous client for Logs querying:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

credential = DefaultAzureCredential()
logs_query_client = LogsQueryClient(credential)
```

#### Asynchronous clients

The asynchronous forms of the query client APIs are found in the `.aio`-suffixed namespace. For example:

```python
from azure.identity.aio import DefaultAzureCredential
from azure.monitor.query.aio import LogsQueryClient

credential = DefaultAzureCredential()
async_logs_query_client = LogsQueryClient(credential)
```

To use the asynchronous clients, you must also install an async transport, such as [aiohttp](https://pypi.org/project/aiohttp/).

```sh
pip install aiohttp
```
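
A minimal end-to-end sketch of using the asynchronous client (assuming a `LOG_WORKSPACE_ID` environment variable holding your workspace ID); both the credential and the client can be used as async context managers so their connections are closed cleanly:

```python
import asyncio
import os
from datetime import timedelta

from azure.identity.aio import DefaultAzureCredential
from azure.monitor.query.aio import LogsQueryClient


async def main():
    # Close both the credential and the client when the block exits.
    async with DefaultAzureCredential() as credential:
        async with LogsQueryClient(credential) as client:
            response = await client.query_workspace(
                os.environ["LOG_WORKSPACE_ID"],
                "AzureActivity | take 5",
                timespan=timedelta(days=1),
            )
            # For brevity this assumes a fully successful result; see the
            # Examples section for partial-result handling.
            for table in response.tables:
                print(table.rows)


asyncio.run(main())
```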

#### Configure client for Azure sovereign cloud

By default, the client is configured to use the Azure public cloud. To use a sovereign cloud, provide the correct `endpoint` argument when using `LogsQueryClient`. For example:

```python
from azure.identity import AzureAuthorityHosts, DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Authority can also be set via the AZURE_AUTHORITY_HOST environment variable.
credential = DefaultAzureCredential(authority=AzureAuthorityHosts.AZURE_GOVERNMENT)

logs_query_client = LogsQueryClient(credential, endpoint="https://api.loganalytics.us")
```

### Execute the query

For examples of Logs queries, see the [Examples](#examples) section.

## Key concepts

### Logs query rate limits and throttling

The Log Analytics service applies throttling when the request rate is too high. Limits, such as the maximum number of rows returned, are also applied on the Kusto queries. For more information, see [Query API](https://learn.microsoft.com/azure/azure-monitor/service-limits#la-query-api).

If you're executing a batch logs query, a throttled request returns a `LogsQueryError` object. That object's `code` value is `ThrottledError`.
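
For illustration, a minimal sketch of spotting a throttled entry in a batch response (assuming `results` came from `query_batch`, as shown in the batch example later in this README):

```python
from azure.monitor.query import LogsQueryStatus

for result in results:
    # A throttled batch entry comes back as a LogsQueryError whose code is "ThrottledError".
    if result.status == LogsQueryStatus.FAILURE and result.code == "ThrottledError":
        print(f"Query was throttled: {result.message}")
```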

## Examples

- [Logs query](#logs-query)
  - [Resource-centric logs query](#resource-centric-logs-query)
  - [Specify timespan](#specify-timespan)
  - [Handle logs query response](#handle-logs-query-response)
- [Batch logs query](#batch-logs-query)
- [Advanced logs query scenarios](#advanced-logs-query-scenarios)
  - [Set logs query timeout](#set-logs-query-timeout)
  - [Query multiple workspaces](#query-multiple-workspaces)
  - [Include statistics](#include-statistics)
  - [Include visualization](#include-visualization)

### Logs query

This example shows how to query a Log Analytics workspace. To handle the response and view it in a tabular form, the [`pandas`](https://pypi.org/project/pandas/) library is used. See the [samples][samples] if you choose not to use `pandas`.

#### Resource-centric logs query

The following example demonstrates how to query logs directly from an Azure resource without the use of a Log Analytics workspace. Here, the `query_resource` method is used instead of `query_workspace`. Instead of a workspace ID, an Azure resource identifier is passed in. For example, `/subscriptions/{subscription-id}/resourceGroups/{resource-group-name}/providers/{resource-provider}/{resource-type}/{resource-name}`.

```python
import os
import pandas as pd
from datetime import timedelta
from azure.monitor.query import LogsQueryClient, LogsQueryStatus
from azure.core.exceptions import HttpResponseError
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

query = """AzureActivity | take 5"""

try:
    response = client.query_resource(os.environ['LOGS_RESOURCE_ID'], query, timespan=timedelta(days=1))
    if response.status == LogsQueryStatus.SUCCESS:
        data = response.tables
    else:
        # LogsQueryPartialResult
        error = response.partial_error
        data = response.partial_data
        print(error)

    for table in data:
        df = pd.DataFrame(data=table.rows, columns=table.columns)
        print(df)
except HttpResponseError as err:
    print("something fatal happened")
    print(err)
```
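
If you prefer not to depend on `pandas`, a minimal sketch of printing the same tables directly (assuming the `data` variable from the example above); each `LogsTable` exposes its column names and rows:

```python
for table in data:
    print(table.name)
    # Column names followed by one line per row of values.
    print(" | ".join(table.columns))
    for row in table.rows:
        print(" | ".join(str(value) for value in row))
```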

#### Specify timespan

The `timespan` parameter specifies the time duration for which to query the data. This value can take one of the following forms:

- a `timedelta`
- a start `datetime` and a `timedelta`
- a start `datetime` and an end `datetime`

For example:

```python
import os
import pandas as pd
from datetime import datetime, timezone
from azure.monitor.query import LogsQueryClient, LogsQueryStatus
from azure.identity import DefaultAzureCredential
from azure.core.exceptions import HttpResponseError

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

query = """AppRequests | take 5"""

start_time = datetime(2021, 7, 2, tzinfo=timezone.utc)
end_time = datetime(2021, 7, 4, tzinfo=timezone.utc)

try:
    response = client.query_workspace(
        workspace_id=os.environ['LOG_WORKSPACE_ID'],
        query=query,
        timespan=(start_time, end_time)
        )
    if response.status == LogsQueryStatus.SUCCESS:
        data = response.tables
    else:
        # LogsQueryPartialResult
        error = response.partial_error
        data = response.partial_data
        print(error)

    for table in data:
        df = pd.DataFrame(data=table.rows, columns=table.columns)
        print(df)
except HttpResponseError as err:
    print("something fatal happened")
    print(err)
```
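
The example above passes a start/end `datetime` pair. A minimal sketch of the other two `timespan` forms, assuming the same `client`, `query`, and workspace as above:

```python
from datetime import datetime, timedelta, timezone

# 1. A duration only: query the last three hours.
client.query_workspace(
    workspace_id=os.environ['LOG_WORKSPACE_ID'],
    query=query,
    timespan=timedelta(hours=3),
)

# 2. A start datetime plus a duration: two days starting July 2, 2021 (UTC).
client.query_workspace(
    workspace_id=os.environ['LOG_WORKSPACE_ID'],
    query=query,
    timespan=(datetime(2021, 7, 2, tzinfo=timezone.utc), timedelta(days=2)),
)
```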

#### Handle logs query response

The `query_workspace` API returns either a `LogsQueryResult` or a `LogsQueryPartialResult` object. The `batch_query` API returns a list that can contain `LogsQueryResult`, `LogsQueryPartialResult`, and `LogsQueryError` objects. Here's a hierarchy of the response:

```
LogsQueryResult
|---statistics
|---visualization
|---tables (list of `LogsTable` objects)
    |---name
    |---rows
    |---columns
    |---columns_types

LogsQueryPartialResult
|---statistics
|---visualization
|---partial_error (a `LogsQueryError` object)
    |---code
    |---message
    |---details
    |---status
|---partial_data (list of `LogsTable` objects)
    |---name
    |---rows
    |---columns
    |---columns_types
```

A `LogsQueryResult` iterates over its tables as a convenience. For example, to handle a logs query response with tables and display it using `pandas`:

```python
response = client.query_workspace(...)
for table in response:
    df = pd.DataFrame(table.rows, columns=table.columns)
```

A full sample can be found [here](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_logs_single_query.py).

In a similar fashion, to handle a batch logs query response:

```python
for result in response:
    if result.status == LogsQueryStatus.SUCCESS:
        for table in result:
            df = pd.DataFrame(table.rows, columns=table.columns)
            print(df)
```

A full sample can be found [here](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_batch_query.py).

### Batch logs query

The following example demonstrates sending multiple queries at the same time using the batch query API. The queries can be represented either as a list of `LogsBatchQuery` objects or as a dictionary. This example uses the list approach.

```python
import os
from datetime import timedelta, datetime, timezone
import pandas as pd
from azure.monitor.query import LogsQueryClient, LogsBatchQuery, LogsQueryStatus
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)
requests = [
    LogsBatchQuery(
        query="AzureActivity | summarize count()",
        timespan=timedelta(hours=1),
        workspace_id=os.environ['LOG_WORKSPACE_ID']
    ),
    LogsBatchQuery(
        query= """bad query""",
        timespan=timedelta(days=1),
        workspace_id=os.environ['LOG_WORKSPACE_ID']
    ),
    LogsBatchQuery(
        query= """let Weight = 92233720368547758;
        range x from 1 to 3 step 1
        | summarize percentilesw(x, Weight * 100, 50)""",
        workspace_id=os.environ['LOG_WORKSPACE_ID'],
        timespan=(datetime(2021, 6, 2, tzinfo=timezone.utc), datetime(2021, 6, 5, tzinfo=timezone.utc)), # (start, end)
        include_statistics=True
    ),
]
results = client.query_batch(requests)

for res in results:
    if res.status == LogsQueryStatus.PARTIAL:
        ## this will be a LogsQueryPartialResult
        print(res.partial_error)
        for table in res.partial_data:
            df = pd.DataFrame(table.rows, columns=table.columns)
            print(df)
    elif res.status == LogsQueryStatus.SUCCESS:
        ## this will be a LogsQueryResult
        table = res.tables[0]
        df = pd.DataFrame(table.rows, columns=table.columns)
        print(df)
    else:
        # this will be a LogsQueryError
        print(res.message)

```

### Advanced logs query scenarios

#### Set logs query timeout

The following example shows how to set a server timeout, in seconds. A gateway timeout is raised if the query takes longer than the specified timeout. The default is 180 seconds, and the timeout can be set to a maximum of 10 minutes (600 seconds).

```python
import os
from datetime import timedelta
from azure.monitor.query import LogsQueryClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()
client = LogsQueryClient(credential)

response = client.query_workspace(
    os.environ['LOG_WORKSPACE_ID'],
    "range x from 1 to 10000000000 step 1 | count",
    timespan=timedelta(days=1),
    server_timeout=600 # sets the timeout to 10 minutes
    )
```
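
If the service does time out (or otherwise rejects the query), the error surfaces as an `HttpResponseError` from `azure-core`. A minimal sketch of guarding the call above, assuming the same client:

```python
from azure.core.exceptions import HttpResponseError

try:
    response = client.query_workspace(
        os.environ['LOG_WORKSPACE_ID'],
        "range x from 1 to 10000000000 step 1 | count",
        timespan=timedelta(days=1),
        server_timeout=600,
    )
except HttpResponseError as err:
    # A gateway timeout here indicates the query exceeded server_timeout.
    print(f"Query failed: {err}")
```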

#### Query multiple workspaces

The same logs query can be executed across multiple Log Analytics workspaces. In addition to the Kusto query, the following parameters are required:

- `workspace_id` - The first (primary) workspace ID
- `additional_workspaces` - A list of workspaces, excluding the workspace provided in the `workspace_id` parameter. The parameter's list items can consist of the following identifier formats:
  - Qualified workspace names
  - Workspace IDs
  - Azure resource IDs

For example, the following query executes across three workspaces:

```python
client.query_workspace(
    <workspace_id>,
    query,
    timespan=timedelta(days=1),
    additional_workspaces=['<workspace 2>', '<workspace 3>']
    )
```

A full sample can be found [here](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_log_query_multiple_workspaces.py).

#### Include statistics

To get logs query execution statistics, such as CPU and memory consumption:

1. Set the `include_statistics` parameter to `True`.
1. Access the `statistics` field inside the `LogsQueryResult` object.

The following example prints the query execution time:

```python
query = "AzureActivity | top 10 by TimeGenerated"
result = client.query_workspace(
    <workspace_id>,
    query,
    timespan=timedelta(days=1),
    include_statistics=True
    )

execution_time = result.statistics.get("query", {}).get("executionTime")
print(f"Query execution time: {execution_time}")
```

The `statistics` field is a `dict` that corresponds to the raw JSON response, and its structure can vary by query. The statistics are found within the `query` property. For example:

```python
{
  "query": {
    "executionTime": 0.0156478,
    "resourceUsage": {...},
    "inputDatasetStatistics": {...},
    "datasetStatistics": [{...}]
  }
}
```

#### Include visualization

To get visualization data for logs queries using the [render operator](https://learn.microsoft.com/azure/data-explorer/kusto/query/renderoperator?pivots=azuremonitor):

1. Set the `include_visualization` parameter to `True`.
1. Access the `visualization` field inside the `LogsQueryResult` object.

For example:

```python
query = (
    "StormEvents"
    "| summarize event_count = count() by State"
    "| where event_count > 10"
    "| project State, event_count"
    "| render columnchart"
)
result = client.query_workspace(
    <workspace_id>,
    query,
    timespan=timedelta(days=1),
    include_visualization=True
    )

print(f"Visualization result: {result.visualization}")
```

The `visualization` field is a `dict` that corresponds to the raw JSON response, and its structure can vary by query. For example:

```python
{
  "visualization": "columnchart",
  "title": "the chart title",
  "accumulate": False,
  "isQuerySorted": False,
  "kind": None,
  "legend": None,
  "series": None,
  "yMin": "NaN",
  "yMax": "NaN",
  "xAxis": None,
  "xColumn": None,
  "xTitle": "x axis title",
  "yAxis": None,
  "yColumns": None,
  "ySplit": None,
  "yTitle": None,
  "anomalyColumns": None
}
```

Interpretation of the visualization data is left to the library consumer. To use this data with the [Plotly graphing library](https://plotly.com/python/), see the [synchronous](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_logs_query_visualization.py) or [asynchronous](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/async_samples/sample_logs_query_visualization_async.py) code samples.
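
As an illustration only (the linked samples are the authoritative versions), a minimal sketch of charting the result above with Plotly Express, assuming `plotly` is installed and the `State`/`event_count` column order from the query's `project` clause:

```python
import plotly.express as px

viz = result.visualization          # dict produced by the "render" operator
table = result.tables[0]            # rows are [State, event_count] per the query

fig = px.bar(
    x=[row[0] for row in table.rows],
    y=[row[1] for row in table.rows],
    title=viz.get("title") or "Storm events by state",
)
fig.show()
```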

## Troubleshooting

See our [troubleshooting guide][troubleshooting_guide] for details on how to diagnose various failure scenarios.

## Next steps

To learn more about Azure Monitor, see the [Azure Monitor service documentation][azure_monitor_overview].

### Samples

The following code samples show common scenarios with the Azure Monitor Query client library.

#### Logs query samples

- [Send a single query with LogsQueryClient and handle the response as a table](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_logs_single_query.py) ([async sample](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/async_samples/sample_log_query_async.py))
- [Send a single query with LogsQueryClient and handle the response in key-value form](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_logs_query_key_value_form.py)
- [Send a single query with LogsQueryClient without pandas](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_single_log_query_without_pandas.py)
- [Send a single query with LogsQueryClient across multiple workspaces](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_log_query_multiple_workspaces.py)
- [Send multiple queries with LogsQueryClient](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_batch_query.py)
- [Send a single query with LogsQueryClient using server timeout](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/samples/sample_server_timeout.py)

## Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit [cla.microsoft.com][cla].

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repositories using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct][code_of_conduct]. For more information see the [Code of Conduct FAQ][coc_faq] or contact [opencode@microsoft.com][coc_contact] with any additional questions or comments.

<!-- LINKS -->

[azure_core_exceptions]: https://aka.ms/azsdk/python/core/docs#module-azure.core.exceptions
[azure_core_ref_docs]: https://aka.ms/azsdk/python/core/docs
[azure_monitor_create_using_portal]: https://learn.microsoft.com/azure/azure-monitor/logs/quick-create-workspace
[azure_monitor_overview]: https://learn.microsoft.com/azure/azure-monitor/
[azure_subscription]: https://azure.microsoft.com/free/python/
[changelog]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-query/CHANGELOG.md
[kusto_query_language]: https://learn.microsoft.com/azure/data-explorer/kusto/query/
[package]: https://aka.ms/azsdk-python-monitor-query-pypi
[pip]: https://pypi.org/project/pip/
[python_logging]: https://docs.python.org/3/library/logging.html
[python-query-ref-docs]: https://aka.ms/azsdk/python/monitor-query/docs
[samples]: https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/monitor/azure-monitor-query/samples
[source]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/
[troubleshooting_guide]: https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/monitor/azure-monitor-query/TROUBLESHOOTING.md

[cla]: https://cla.microsoft.com
[code_of_conduct]: https://opensource.microsoft.com/codeofconduct/
[coc_faq]: https://opensource.microsoft.com/codeofconduct/faq/
[coc_contact]: mailto:opencode@microsoft.com

            
