# Google SecOps SDK for Python
A Python SDK for interacting with Google Security Operations products, currently supporting Chronicle/SecOps SIEM.
This wraps the API for common use cases, including UDM searches, entity lookups, IoCs, alert management, case management, and detection rule management.
## Installation
```bash
pip install secops
```
## Authentication
The SDK supports two main authentication methods:
### 1. Application Default Credentials (ADC)
Application Default Credentials are the simplest and recommended way to authenticate the SDK. They provide a consistent authentication method that works across different Google Cloud environments and local development.
There are several ways to use ADC:
#### a. Using `gcloud` CLI (Recommended for Local Development)
```bash
# Login and set up application-default credentials
gcloud auth application-default login
```
Then in your code:
```python
from secops import SecOpsClient
# Initialize with default credentials - no explicit configuration needed
client = SecOpsClient()
```
#### b. Using Environment Variable
Set the environment variable pointing to your service account key:
```bash
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```
Then in your code:
```python
from secops import SecOpsClient
# Initialize with default credentials - will automatically use the credentials file
client = SecOpsClient()
```
#### c. Google Cloud Environment (Automatic)
When running on Google Cloud services (Compute Engine, Cloud Functions, Cloud Run, etc.), ADC works automatically without any configuration:
```python
from secops import SecOpsClient
# Initialize with default credentials - will automatically use the service account
# assigned to your Google Cloud resource
client = SecOpsClient()
```
ADC will automatically try these authentication methods in order:
1. Environment variable `GOOGLE_APPLICATION_CREDENTIALS`
2. Google Cloud SDK credentials (set by `gcloud auth application-default login`)
3. Google Cloud-provided service account credentials
4. Local service account impersonation credentials
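The lookup order above can be approximated locally. The sketch below is a best-effort hint only (the function name and the gcloud well-known path are assumptions; on Google Cloud, the metadata server is consulted instead of the filesystem):

```python
import os
from pathlib import Path

def adc_source_hint() -> str:
    """Best-effort hint for which ADC source would be picked up locally.

    Mirrors the first two steps of the lookup order above; not authoritative.
    """
    env_path = os.environ.get("GOOGLE_APPLICATION_CREDENTIALS")
    if env_path:
        return f"env: {env_path}"
    # Well-known path written by `gcloud auth application-default login`
    gcloud_adc = Path.home() / ".config" / "gcloud" / "application_default_credentials.json"
    if gcloud_adc.exists():
        return f"gcloud: {gcloud_adc}"
    return "metadata-server or none"
```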
### 2. Service Account Authentication
For more explicit control, you can authenticate using a service account. This can be done in two ways:
#### a. Using a Service Account JSON File
```python
from secops import SecOpsClient
# Initialize with service account JSON file
client = SecOpsClient(service_account_path="/path/to/service-account.json")
```
#### b. Using Service Account Info Dictionary
```python
from secops import SecOpsClient
# Service account details as a dictionary
service_account_info = {
"type": "service_account",
"project_id": "your-project-id",
"private_key_id": "key-id",
"private_key": "-----BEGIN PRIVATE KEY-----\n...",
"client_email": "service-account@project.iam.gserviceaccount.com",
"client_id": "client-id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/..."
}
# Initialize with service account info
client = SecOpsClient(service_account_info=service_account_info)
```
## Using the Chronicle API
### Initializing the Chronicle Client
After creating a `SecOpsClient`, initialize the Chronicle-specific client:
```python
# Initialize Chronicle client
chronicle = client.chronicle(
customer_id="your-chronicle-instance-id", # Your Chronicle instance ID
project_id="your-project-id", # Your GCP project ID
region="us" # Chronicle API region
)
```
### Log Ingestion
Ingest raw logs directly into Chronicle:
```python
from datetime import datetime, timezone
import json
# Create a sample log (this is an OKTA log)
current_time = datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z')
okta_log = {
"actor": {
"displayName": "Joe Doe",
"alternateId": "jdoe@example.com"
},
"client": {
"ipAddress": "192.168.1.100",
"userAgent": {
"os": "Mac OS X",
"browser": "SAFARI"
}
},
"displayMessage": "User login to Okta",
"eventType": "user.session.start",
"outcome": {
"result": "SUCCESS"
},
"published": current_time # Current time in ISO format
}
# Ingest the log using the default forwarder
result = chronicle.ingest_log(
log_type="OKTA", # Chronicle log type
log_message=json.dumps(okta_log) # JSON string of the log
)
print(f"Operation: {result.get('operation')}")
```
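The `replace('+00:00', 'Z')` trick above converts Python's ISO output into the `Z`-suffixed RFC 3339 form many log schemas expect. A tiny helper (hypothetical name, not part of the SDK) makes this reusable:

```python
from datetime import datetime, timezone

def rfc3339_now() -> str:
    """Current UTC time in RFC 3339 form with a trailing 'Z'.

    Same transformation as the inline snippet above.
    """
    return datetime.now(timezone.utc).isoformat(timespec="seconds").replace("+00:00", "Z")

print(rfc3339_now())  # e.g. 2024-05-10T14:30:00Z
```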
The SDK also supports non-JSON log formats. Here's an example with XML for Windows Event logs:
```python
# Create a Windows Event XML log
xml_content = """<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>
<System>
<Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/>
<EventID>4624</EventID>
<Version>1</Version>
<Level>0</Level>
<Task>12544</Task>
<Opcode>0</Opcode>
<Keywords>0x8020000000000000</Keywords>
<TimeCreated SystemTime='2024-05-10T14:30:00Z'/>
<EventRecordID>202117513</EventRecordID>
<Correlation/>
<Execution ProcessID='656' ThreadID='700'/>
<Channel>Security</Channel>
<Computer>WIN-SERVER.xyz.net</Computer>
<Security/>
</System>
<EventData>
<Data Name='SubjectUserSid'>S-1-0-0</Data>
<Data Name='SubjectUserName'>-</Data>
<Data Name='TargetUserName'>svcUser</Data>
<Data Name='WorkstationName'>CLIENT-PC</Data>
<Data Name='LogonType'>3</Data>
</EventData>
</Event>"""
# Ingest the XML log - no json.dumps() needed for XML
result = chronicle.ingest_log(
log_type="WINEVTLOG_XML", # Windows Event Log XML format
log_message=xml_content # Raw XML content
)
print(f"Operation: {result.get('operation')}")
```
The SDK supports all log types available in Chronicle. You can:
1. View available log types:
```python
# Get all available log types
log_types = chronicle.get_all_log_types()
for lt in log_types[:5]:  # Show first 5
    print(f"{lt.id}: {lt.description}")
```
2. Search for specific log types:
```python
# Search for log types related to firewalls
firewall_types = chronicle.search_log_types("firewall")
for lt in firewall_types:
    print(f"{lt.id}: {lt.description}")
```
3. Validate log types:
```python
# Check if a log type is valid
if chronicle.is_valid_log_type("OKTA"):
    print("Valid log type")
else:
    print("Invalid log type")
```
4. Use custom forwarders:
```python
# Create or get a custom forwarder
forwarder = chronicle.get_or_create_forwarder(display_name="MyCustomForwarder")
forwarder_id = forwarder["name"].split("/")[-1]
# Use the custom forwarder for log ingestion
result = chronicle.ingest_log(
log_type="WINDOWS",
log_message=json.dumps(windows_log),
forwarder_id=forwarder_id
)
```
5. Use custom timestamps:
```python
from datetime import datetime, timedelta, timezone
# Define custom timestamps
log_entry_time = datetime.now(timezone.utc) - timedelta(hours=1)
collection_time = datetime.now(timezone.utc)
result = chronicle.ingest_log(
log_type="OKTA",
log_message=json.dumps(okta_log),
log_entry_time=log_entry_time, # When the log was generated
collection_time=collection_time # When the log was collected
)
```
### Basic UDM Search
Search for network connection events:
```python
from datetime import datetime, timedelta, timezone
# Set time range for queries
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24) # Last 24 hours
# Perform UDM search
results = chronicle.search_udm(
query="""
metadata.event_type = "NETWORK_CONNECTION"
ip != ""
""",
start_time=start_time,
end_time=end_time,
max_events=5
)
# Example response:
{
"events": [
{
"event": {
"metadata": {
"eventTimestamp": "2024-02-09T10:30:00Z",
"eventType": "NETWORK_CONNECTION"
},
"target": {
"ip": "192.168.1.100",
"port": 443
},
"principal": {
"hostname": "workstation-1"
}
}
}
],
"total_events": 1
}
```
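Since the response is a plain dictionary, post-processing is ordinary Python. A minimal sketch that pulls hostnames out of a `search_udm`-style response (field names taken from the sample above; the helper name is ours):

```python
def extract_hostnames(results: dict) -> list[str]:
    """Collect principal hostnames from a search_udm-style response dict."""
    hostnames = []
    for wrapper in results.get("events", []):
        event = wrapper.get("event", {})
        hostname = event.get("principal", {}).get("hostname")
        if hostname:
            hostnames.append(hostname)
    return hostnames

sample = {
    "events": [
        {"event": {"principal": {"hostname": "workstation-1"}}}
    ],
    "total_events": 1,
}
print(extract_hostnames(sample))  # ['workstation-1']
```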
### Statistics Queries
Get statistics about network connections grouped by hostname:
```python
stats = chronicle.get_stats(
query="""metadata.event_type = "NETWORK_CONNECTION"
match:
target.hostname
outcome:
$count = count(metadata.id)
order:
$count desc""",
start_time=start_time,
end_time=end_time,
max_events=1000,
max_values=10
)
# Example response:
{
"columns": ["hostname", "count"],
"rows": [
{"hostname": "server-1", "count": 1500},
{"hostname": "server-2", "count": 1200}
],
"total_rows": 2
}
```
### CSV Export
Export specific fields to CSV format:
```python
csv_data = chronicle.fetch_udm_search_csv(
query='metadata.event_type = "NETWORK_CONNECTION"',
start_time=start_time,
end_time=end_time,
fields=["metadata.eventTimestamp", "principal.hostname", "target.ip", "target.port"]
)
# Example response:
"""
metadata.eventTimestamp,principal.hostname,target.ip,target.port
2024-02-09T10:30:00Z,workstation-1,192.168.1.100,443
2024-02-09T10:31:00Z,workstation-2,192.168.1.101,80
"""
```
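The returned CSV is a plain string, so it parses with the standard library. A sketch using `csv.DictReader` on the sample output above:

```python
import csv
import io

# Sample CSV as returned above
csv_data = """metadata.eventTimestamp,principal.hostname,target.ip,target.port
2024-02-09T10:30:00Z,workstation-1,192.168.1.100,443
2024-02-09T10:31:00Z,workstation-2,192.168.1.101,80
"""

# Parse into a list of dicts keyed by the UDM field paths in the header row
rows = list(csv.DictReader(io.StringIO(csv_data)))
print(rows[0]["principal.hostname"])  # workstation-1
```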
### Query Validation
Validate a UDM query before execution:
```python
query = 'target.ip != "" and principal.hostname = "test-host"'
validation = chronicle.validate_query(query)
# Example response:
{
"isValid": true,
"queryType": "QUERY_TYPE_UDM_QUERY",
"suggestedFields": [
"target.ip",
"principal.hostname"
]
}
```
### Natural Language Search
Search for events using natural language instead of UDM query syntax:
```python
from datetime import datetime, timedelta, timezone
# Set time range for queries
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(hours=24) # Last 24 hours
# Option 1: Translate natural language to UDM query
udm_query = chronicle.translate_nl_to_udm("show me network connections")
print(f"Translated query: {udm_query}")
# Example output: 'metadata.event_type="NETWORK_CONNECTION"'
# Then run the query manually if needed
results = chronicle.search_udm(
query=udm_query,
start_time=start_time,
end_time=end_time
)
# Option 2: Perform complete search with natural language
results = chronicle.nl_search(
text="show me failed login attempts",
start_time=start_time,
end_time=end_time,
max_events=100
)
# Example response (same format as search_udm):
{
"events": [
{
"event": {
"metadata": {
"eventTimestamp": "2024-02-09T10:30:00Z",
"eventType": "USER_LOGIN"
},
"principal": {
"user": {
"userid": "jdoe"
}
},
"securityResult": {
"action": "BLOCK",
"summary": "Failed login attempt"
}
}
}
],
"total_events": 1
}
```
The natural language search feature supports various query patterns:
- "Show me network connections"
- "Find suspicious processes"
- "Show login failures in the last hour"
- "Display connections to IP address 192.168.1.100"
If the natural language cannot be translated to a valid UDM query, an `APIError` will be raised with a message indicating that no valid query could be generated.
### Entity Summary
Get detailed information about specific entities:
```python
# IP address summary
ip_summary = chronicle.summarize_entity(
start_time=start_time,
end_time=end_time,
value="192.168.1.100" # Automatically detects IP
)
# Domain summary
domain_summary = chronicle.summarize_entity(
start_time=start_time,
end_time=end_time,
value="example.com" # Automatically detects domain
)
# File hash summary
file_summary = chronicle.summarize_entity(
start_time=start_time,
end_time=end_time,
value="e17dd4eef8b4978673791ef4672f4f6a" # Automatically detects MD5
)
# Example response structure:
{
"entities": [
{
"name": "entities/...",
"metadata": {
"entityType": "ASSET",
"interval": {
"startTime": "2024-02-08T10:30:00Z",
"endTime": "2024-02-09T10:30:00Z"
}
},
"metric": {
"firstSeen": "2024-02-08T10:30:00Z",
"lastSeen": "2024-02-09T10:30:00Z"
},
"entity": {
"asset": {
"ip": ["192.168.1.100"]
}
}
}
],
"alertCounts": [
{
"rule": "Suspicious Network Connection",
"count": 5
}
],
"widgetMetadata": {
"detections": 5,
"total": 1000
}
}
```
### Entity Summary from Query
Look up entities based on a UDM query:
```python
# Search for a specific file hash across multiple UDM paths
md5_hash = "e17dd4eef8b4978673791ef4672f4f6a"
query = f'target.file.md5 = "{md5_hash}" OR principal.file.md5 = "{md5_hash}"'
entity_summaries = chronicle.summarize_entities_from_query(
query=query,
start_time=start_time,
end_time=end_time
)
# Example response:
[
{
"entities": [
{
"name": "entities/...",
"metadata": {
"entityType": "FILE",
"interval": {
"startTime": "2024-02-08T10:30:00Z",
"endTime": "2024-02-09T10:30:00Z"
}
},
"metric": {
"firstSeen": "2024-02-08T10:30:00Z",
"lastSeen": "2024-02-09T10:30:00Z"
},
"entity": {
"file": {
"md5": "e17dd4eef8b4978673791ef4672f4f6a",
"sha1": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
"filename": "suspicious.exe"
}
}
}
]
}
]
```
### List IoCs (Indicators of Compromise)
Retrieve IoC matches against ingested events:
```python
iocs = chronicle.list_iocs(
start_time=start_time,
end_time=end_time,
max_matches=1000,
add_mandiant_attributes=True,
prioritized_only=False
)
# Process the results
for ioc in iocs['matches']:
    ioc_type = next(iter(ioc['artifactIndicator'].keys()))
    ioc_value = next(iter(ioc['artifactIndicator'].values()))
    print(f"IoC Type: {ioc_type}, Value: {ioc_value}")
    print(f"Sources: {', '.join(ioc['sources'])}")
```
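Because `artifactIndicator` is a single-key dict, matches can be tallied by indicator type with a `Counter`. A sketch assuming the response shape above (sample data and helper name are illustrative):

```python
from collections import Counter

def count_ioc_types(matches: list[dict]) -> Counter:
    """Tally IoC matches by indicator type (the single key of artifactIndicator)."""
    return Counter(next(iter(m["artifactIndicator"])) for m in matches)

sample_matches = [
    {"artifactIndicator": {"domain": "evil.example"}},
    {"artifactIndicator": {"ip": "203.0.113.7"}},
    {"artifactIndicator": {"domain": "bad.example"}},
]
print(count_ioc_types(sample_matches))  # Counter({'domain': 2, 'ip': 1})
```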
The IoC response includes:
- The indicator itself (domain, IP, hash, etc.)
- Sources and categories
- Affected assets in your environment
- First and last seen timestamps
- Confidence scores and severity ratings
- Associated threat actors and malware families (with Mandiant attributes)
### Alerts and Case Management
Retrieve alerts and their associated cases:
```python
# Get non-closed alerts
alerts = chronicle.get_alerts(
start_time=start_time,
end_time=end_time,
snapshot_query='feedback_summary.status != "CLOSED"',
max_alerts=100
)
# Get alerts from the response
alert_list = alerts.get('alerts', {}).get('alerts', [])
# Extract case IDs from alerts
case_ids = {alert.get('caseName') for alert in alert_list if alert.get('caseName')}
# Get case details
if case_ids:
    cases = chronicle.get_cases(list(case_ids))

    # Process cases
    for case in cases.cases:
        print(f"Case: {case.display_name}")
        print(f"Priority: {case.priority}")
        print(f"Status: {case.status}")
```
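The case-ID extraction step above is a pure function over the alert list, which makes it easy to test in isolation. A sketch (field names taken from the sample above; the helper name is ours):

```python
def case_ids_from_alerts(alert_list: list[dict]) -> set[str]:
    """Collect unique, non-empty case IDs from a list of alert dicts."""
    return {a["caseName"] for a in alert_list if a.get("caseName")}

alerts_sample = [
    {"caseName": "case-1"},
    {"caseName": "case-1"},          # duplicate collapses in the set
    {"id": "alert-without-case"},    # no caseName -> skipped
]
print(case_ids_from_alerts(alerts_sample))  # {'case-1'}
```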
The alerts response includes:
- Progress status and completion status
- Alert counts (baseline and filtered)
- Alert details (rule information, detection details, etc.)
- Case associations
You can filter alerts using the snapshot query parameter with fields like:
- `detection.rule_name`
- `detection.alert_state`
- `feedback_summary.verdict`
- `feedback_summary.priority`
- `feedback_summary.status`
### Case Management Helpers
The `CaseList` class provides helper methods for working with cases:
```python
# Get details for specific cases
cases = chronicle.get_cases(["case-id-1", "case-id-2"])
# Filter cases by priority
high_priority = cases.filter_by_priority("PRIORITY_HIGH")
# Filter cases by status
open_cases = cases.filter_by_status("STATUS_OPEN")
# Look up a specific case
case = cases.get_case("case-id-1")
```
## Rule Management
The SDK provides comprehensive support for managing Chronicle detection rules:
### Creating Rules
Create new detection rules using YARA-L 2.0 syntax:
```python
rule_text = """
rule simple_network_rule {
meta:
description = "Example rule to detect network connections"
author = "SecOps SDK Example"
severity = "Medium"
priority = "Medium"
yara_version = "YL2.0"
rule_version = "1.0"
events:
$e.metadata.event_type = "NETWORK_CONNECTION"
$e.principal.hostname != ""
condition:
$e
}
"""
# Create the rule
rule = chronicle.create_rule(rule_text)
rule_id = rule.get("name", "").split("/")[-1]
print(f"Rule ID: {rule_id}")
```
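The `name.split("/")[-1]` pattern above recurs throughout these examples whenever a resource ID is needed from a full resource name. A small helper (hypothetical, not part of the SDK; the example resource name is illustrative):

```python
def resource_id(name: str) -> str:
    """Return the last path segment of a resource name.

    e.g. 'projects/p/locations/us/instances/i/rules/ru_1234' -> 'ru_1234'
    """
    return name.rsplit("/", 1)[-1]

print(resource_id("projects/p/locations/us/instances/i/rules/ru_1234"))  # ru_1234
```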
### Managing Rules
Retrieve, list, update, enable/disable, and delete rules:
```python
# List all rules
rules = chronicle.list_rules()
for rule in rules.get("rules", []):
    rule_id = rule.get("name", "").split("/")[-1]
    enabled = rule.get("deployment", {}).get("enabled", False)
    print(f"Rule ID: {rule_id}, Enabled: {enabled}")
# Get specific rule
rule = chronicle.get_rule(rule_id)
print(f"Rule content: {rule.get('text')}")
# Update rule
updated_rule = chronicle.update_rule(rule_id, updated_rule_text)
# Enable/disable rule
deployment = chronicle.enable_rule(rule_id, enabled=True) # Enable
deployment = chronicle.enable_rule(rule_id, enabled=False) # Disable
# Delete rule
chronicle.delete_rule(rule_id)
```
### Retrohunts
Run rules against historical data to find past matches:
```python
from datetime import datetime, timedelta, timezone
# Set time range for retrohunt
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(days=7) # Search past 7 days
# Create retrohunt
retrohunt = chronicle.create_retrohunt(rule_id, start_time, end_time)
operation_id = retrohunt.get("name", "").split("/")[-1]
# Check retrohunt status
retrohunt_status = chronicle.get_retrohunt(rule_id, operation_id)
is_complete = retrohunt_status.get("metadata", {}).get("done", False)
```
### Detections and Errors
Monitor rule detections and execution errors:
```python
# List detections for a rule
detections = chronicle.list_detections(rule_id)
for detection in detections.get("detections", []):
    detection_id = detection.get("id", "")
    event_time = detection.get("eventTime", "")
    alerting = detection.get("alertState", "") == "ALERTING"
    print(f"Detection: {detection_id}, Time: {event_time}, Alerting: {alerting}")
# List execution errors for a rule
errors = chronicle.list_errors(rule_id)
for error in errors.get("ruleExecutionErrors", []):
    error_message = error.get("error_message", "")
    create_time = error.get("create_time", "")
    print(f"Error: {error_message}, Time: {create_time}")
```
### Rule Alerts
Search for alerts generated by rules:
```python
# Set time range for alert search
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(days=7) # Search past 7 days
# Search for rule alerts
alerts_response = chronicle.search_rule_alerts(
start_time=start_time,
end_time=end_time,
page_size=10
)
# The API returns a nested structure where alerts are grouped by rule
# Extract and process all alerts from this structure
all_alerts = []
too_many_alerts = alerts_response.get('tooManyAlerts', False)
# Process the nested response structure - alerts are grouped by rule
for rule_alert in alerts_response.get('ruleAlerts', []):
    # Extract rule metadata
    rule_metadata = rule_alert.get('ruleMetadata', {})
    rule_id = rule_metadata.get('properties', {}).get('ruleId', 'Unknown')
    rule_name = rule_metadata.get('properties', {}).get('name', 'Unknown')

    # Get alerts for this rule
    rule_alerts = rule_alert.get('alerts', [])

    # Process each alert
    for alert in rule_alerts:
        # Extract important fields
        alert_id = alert.get("id", "")
        detection_time = alert.get("detectionTimestamp", "")
        commit_time = alert.get("commitTimestamp", "")
        alerting_type = alert.get("alertingType", "")

        print(f"Alert ID: {alert_id}")
        print(f"Rule ID: {rule_id}")
        print(f"Rule Name: {rule_name}")
        print(f"Detection Time: {detection_time}")

        # Extract events from the alert
        if 'resultEvents' in alert:
            for var_name, event_data in alert.get('resultEvents', {}).items():
                if 'eventSamples' in event_data:
                    for sample in event_data.get('eventSamples', []):
                        if 'event' in sample:
                            event = sample['event']
                            # Process event data
                            event_type = event.get('metadata', {}).get('eventType', 'Unknown')
                            print(f"Event Type: {event_type}")
```
If `tooManyAlerts` is True in the response, consider narrowing your search criteria using a smaller time window or more specific filters.
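Because the response groups alerts under their rule, it is often convenient to flatten it into one list before further processing. A sketch assuming the field names from the sample above (the helper and the added `ruleName` key are ours):

```python
def flatten_rule_alerts(response: dict) -> list[dict]:
    """Flatten a search_rule_alerts-style response into a single alert list,
    annotating each alert with the display name of the rule that produced it."""
    flat = []
    for rule_alert in response.get("ruleAlerts", []):
        props = rule_alert.get("ruleMetadata", {}).get("properties", {})
        for alert in rule_alert.get("alerts", []):
            flat.append({**alert, "ruleName": props.get("name", "Unknown")})
    return flat

sample_response = {
    "ruleAlerts": [
        {
            "ruleMetadata": {"properties": {"name": "My Rule"}},
            "alerts": [{"id": "a1"}, {"id": "a2"}],
        }
    ]
}
print(len(flatten_rule_alerts(sample_response)))  # 2
```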
### Rule Sets
Manage curated rule sets:
```python
# Define deployments for rule sets
deployments = [
{
"category_id": "category-uuid",
"rule_set_id": "ruleset-uuid",
"precision": "broad",
"enabled": True,
"alerting": False
}
]
# Update rule set deployments
chronicle.batch_update_curated_rule_set_deployments(deployments)
```
### Rule Validation
Validate a YARA-L 2.0 rule before creating or updating it:
```python
# Example rule
rule_text = """
rule test_rule {
meta:
description = "Test rule for validation"
author = "Test Author"
severity = "Low"
yara_version = "YL2.0"
rule_version = "1.0"
events:
$e.metadata.event_type = "NETWORK_CONNECTION"
condition:
$e
}
"""
# Validate the rule
result = chronicle.validate_rule(rule_text)
if result.success:
    print("Rule is valid")
else:
    print(f"Rule is invalid: {result.message}")
    if result.position:
        print(f"Error at line {result.position['startLine']}, column {result.position['startColumn']}")
```
## Error Handling
The SDK defines several custom exceptions:
```python
from secops.exceptions import SecOpsError, AuthenticationError, APIError
try:
results = chronicle.search_udm(...)
except AuthenticationError as e:
print(f"Authentication failed: {e}")
except APIError as e:
print(f"API request failed: {e}")
except SecOpsError as e:
print(f"General error: {e}")
```
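For transient failures it is common to wrap calls in a retry loop. A generic sketch (the helper is ours, not part of the SDK); in real use you would pass `retryable=(APIError,)` from `secops.exceptions`:

```python
import time

def with_retries(call, retryable=(Exception,), attempts: int = 3, backoff: float = 0.5):
    """Invoke `call()` with simple linear backoff on the given exception types.

    Re-raises the last exception once the attempt budget is exhausted.
    """
    for attempt in range(1, attempts + 1):
        try:
            return call()
        except retryable:
            if attempt == attempts:
                raise
            time.sleep(backoff * attempt)
```

Usage would look like `with_retries(lambda: chronicle.search_udm(...), retryable=(APIError,))`.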
## Value Type Detection
The SDK automatically detects these entity types:
- IPv4 addresses
- MD5/SHA1/SHA256 hashes
- Domain names
- Email addresses
- MAC addresses
- Hostnames
Example of automatic detection:
```python
# These will automatically use the correct field paths and value types
ip_summary = chronicle.summarize_entity(value="192.168.1.100")
domain_summary = chronicle.summarize_entity(value="example.com")
hash_summary = chronicle.summarize_entity(value="e17dd4eef8b4978673791ef4672f4f6a")
```
You can also override the automatic detection:
```python
summary = chronicle.summarize_entity(
value="example.com",
field_path="custom.field.path", # Override automatic detection
value_type="DOMAIN_NAME" # Explicitly set value type
)
```
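The kind of detection described above can be approximated with simple patterns. This is an illustrative sketch only; the SDK's actual detection logic and type labels may differ:

```python
import re

# Illustrative patterns, checked in order; not the SDK's internal rules.
PATTERNS = {
    "IP": re.compile(r"^(?:\d{1,3}\.){3}\d{1,3}$"),
    "MD5": re.compile(r"^[0-9a-fA-F]{32}$"),
    "SHA256": re.compile(r"^[0-9a-fA-F]{64}$"),
    "DOMAIN": re.compile(r"^(?:[a-z0-9-]+\.)+[a-z]{2,}$", re.IGNORECASE),
}

def guess_value_type(value: str) -> str:
    """Return the first matching label, or 'UNKNOWN'."""
    for label, pattern in PATTERNS.items():
        if pattern.match(value):
            return label
    return "UNKNOWN"

print(guess_value_type("192.168.1.100"))  # IP
print(guess_value_type("example.com"))    # DOMAIN
```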
## License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Raw data
{
"_id": null,
"home_page": null,
"name": "secops",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": null,
"keywords": "chronicle, google, secops, security",
"author": null,
"author_email": "Google SecOps Team <chronicle@google.com>",
"download_url": "https://files.pythonhosted.org/packages/a6/36/28552ed80215641a271a9db238c028ffd6023977918763ed5016ea250a17/secops-0.1.4.tar.gz",
"platform": null,
"description": "# Google SecOps SDK for Python\n\nA Python SDK for interacting with Google Security Operations products, currently supporting Chronicle/SecOps SIEM.\nThis wraps the API for common use cases, including UDM searches, entity lookups, IoCs, alert management, case management, and detection rule management.\n\n## Installation\n\n```bash\npip install secops\n```\n\n## Authentication\n\nThe SDK supports two main authentication methods:\n\n### 1. Application Default Credentials (ADC)\n\nThe simplest and recommended way to authenticate the SDK. Application Default Credentials provide a consistent authentication method that works across different Google Cloud environments and local development.\n\nThere are several ways to use ADC:\n\n#### a. Using `gcloud` CLI (Recommended for Local Development)\n\n```bash\n# Login and set up application-default credentials\ngcloud auth application-default login\n```\n\nThen in your code:\n```python\nfrom secops import SecOpsClient\n\n# Initialize with default credentials - no explicit configuration needed\nclient = SecOpsClient()\n```\n\n#### b. Using Environment Variable\n\nSet the environment variable pointing to your service account key:\n```bash\nexport GOOGLE_APPLICATION_CREDENTIALS=\"/path/to/service-account.json\"\n```\n\nThen in your code:\n```python\nfrom secops import SecOpsClient\n\n# Initialize with default credentials - will automatically use the credentials file\nclient = SecOpsClient()\n```\n\n#### c. Google Cloud Environment (Automatic)\n\nWhen running on Google Cloud services (Compute Engine, Cloud Functions, Cloud Run, etc.), ADC works automatically without any configuration:\n\n```python\nfrom secops import SecOpsClient\n\n# Initialize with default credentials - will automatically use the service account \n# assigned to your Google Cloud resource\nclient = SecOpsClient()\n```\n\nADC will automatically try these authentication methods in order:\n1. Environment variable `GOOGLE_APPLICATION_CREDENTIALS`\n2. 
Google Cloud SDK credentials (set by `gcloud auth application-default login`)\n3. Google Cloud-provided service account credentials\n4. Local service account impersonation credentials\n\n### 2. Service Account Authentication\n\nFor more explicit control, you can authenticate using a service account. This can be done in two ways:\n\n#### a. Using a Service Account JSON File\n\n```python\nfrom secops import SecOpsClient\n\n# Initialize with service account JSON file\nclient = SecOpsClient(service_account_path=\"/path/to/service-account.json\")\n```\n\n#### b. Using Service Account Info Dictionary\n\n```python\nfrom secops import SecOpsClient\n\n# Service account details as a dictionary\nservice_account_info = {\n \"type\": \"service_account\",\n \"project_id\": \"your-project-id\",\n \"private_key_id\": \"key-id\",\n \"private_key\": \"-----BEGIN PRIVATE KEY-----\\n...\",\n \"client_email\": \"service-account@project.iam.gserviceaccount.com\",\n \"client_id\": \"client-id\",\n \"auth_uri\": \"https://accounts.google.com/o/oauth2/auth\",\n \"token_uri\": \"https://oauth2.googleapis.com/token\",\n \"auth_provider_x509_cert_url\": \"https://www.googleapis.com/oauth2/v1/certs\",\n \"client_x509_cert_url\": \"https://www.googleapis.com/robot/v1/metadata/x509/...\"\n}\n\n# Initialize with service account info\nclient = SecOpsClient(service_account_info=service_account_info)\n```\n\n## Using the Chronicle API\n\n### Initializing the Chronicle Client\n\nAfter creating a SecOpsClient, you need to initialize the Chronicle-specific client:\n\n```python\n# Initialize Chronicle client\nchronicle = client.chronicle(\n customer_id=\"your-chronicle-instance-id\", # Your Chronicle instance ID\n project_id=\"your-project-id\", # Your GCP project ID\n region=\"us\" # Chronicle API region\n)\n```\n\n### Log Ingestion\n\nIngest raw logs directly into Chronicle:\n\n```python\nfrom datetime import datetime, timezone\nimport json\n\n# Create a sample log (this is an OKTA log)\ncurrent_time 
= datetime.now(timezone.utc).isoformat().replace('+00:00', 'Z')\nokta_log = {\n \"actor\": {\n \"displayName\": \"Joe Doe\",\n \"alternateId\": \"jdoe@example.com\"\n },\n \"client\": {\n \"ipAddress\": \"192.168.1.100\",\n \"userAgent\": {\n \"os\": \"Mac OS X\",\n \"browser\": \"SAFARI\"\n }\n },\n \"displayMessage\": \"User login to Okta\",\n \"eventType\": \"user.session.start\",\n \"outcome\": {\n \"result\": \"SUCCESS\"\n },\n \"published\": current_time # Current time in ISO format\n}\n\n# Ingest the log using the default forwarder\nresult = chronicle.ingest_log(\n log_type=\"OKTA\", # Chronicle log type\n log_message=json.dumps(okta_log) # JSON string of the log\n)\n\nprint(f\"Operation: {result.get('operation')}\")\n```\n\nThe SDK also supports non-JSON log formats. Here's an example with XML for Windows Event logs:\n\n```python\n# Create a Windows Event XML log\nxml_content = \"\"\"<Event xmlns='http://schemas.microsoft.com/win/2004/08/events/event'>\n <System>\n <Provider Name='Microsoft-Windows-Security-Auditing' Guid='{54849625-5478-4994-A5BA-3E3B0328C30D}'/>\n <EventID>4624</EventID>\n <Version>1</Version>\n <Level>0</Level>\n <Task>12544</Task>\n <Opcode>0</Opcode>\n <Keywords>0x8020000000000000</Keywords>\n <TimeCreated SystemTime='2024-05-10T14:30:00Z'/>\n <EventRecordID>202117513</EventRecordID>\n <Correlation/>\n <Execution ProcessID='656' ThreadID='700'/>\n <Channel>Security</Channel>\n <Computer>WIN-SERVER.xyz.net</Computer>\n <Security/>\n </System>\n <EventData>\n <Data Name='SubjectUserSid'>S-1-0-0</Data>\n <Data Name='SubjectUserName'>-</Data>\n <Data Name='TargetUserName'>svcUser</Data>\n <Data Name='WorkstationName'>CLIENT-PC</Data>\n <Data Name='LogonType'>3</Data>\n </EventData>\n</Event>\"\"\"\n\n# Ingest the XML log - no json.dumps() needed for XML\nresult = chronicle.ingest_log(\n log_type=\"WINEVTLOG_XML\", # Windows Event Log XML format\n log_message=xml_content # Raw XML content\n)\n\nprint(f\"Operation: 
{result.get('operation')}\")\n```\nThe SDK supports all log types available in Chronicle. You can:\n\n1. View available log types:\n```python\n# Get all available log types\nlog_types = chronicle.get_all_log_types()\nfor lt in log_types[:5]: # Show first 5\n print(f\"{lt.id}: {lt.description}\")\n```\n\n2. Search for specific log types:\n```python\n# Search for log types related to firewalls\nfirewall_types = chronicle.search_log_types(\"firewall\")\nfor lt in firewall_types:\n print(f\"{lt.id}: {lt.description}\")\n```\n\n3. Validate log types:\n```python\n# Check if a log type is valid\nif chronicle.is_valid_log_type(\"OKTA\"):\n print(\"Valid log type\")\nelse:\n print(\"Invalid log type\")\n```\n\n4. Use custom forwarders:\n```python\n# Create or get a custom forwarder\nforwarder = chronicle.get_or_create_forwarder(display_name=\"MyCustomForwarder\")\nforwarder_id = forwarder[\"name\"].split(\"/\")[-1]\n\n# Use the custom forwarder for log ingestion\nresult = chronicle.ingest_log(\n log_type=\"WINDOWS\",\n log_message=json.dumps(windows_log),\n forwarder_id=forwarder_id\n)\n```\n\n5. 
Use custom timestamps:\n```python\nfrom datetime import datetime, timedelta, timezone\n\n# Define custom timestamps\nlog_entry_time = datetime.now(timezone.utc) - timedelta(hours=1)\ncollection_time = datetime.now(timezone.utc)\n\nresult = chronicle.ingest_log(\n log_type=\"OKTA\",\n log_message=json.dumps(okta_log),\n log_entry_time=log_entry_time, # When the log was generated\n collection_time=collection_time # When the log was collected\n)\n```\n\n### Basic UDM Search\n\nSearch for network connection events:\n\n```python\nfrom datetime import datetime, timedelta, timezone\n\n# Set time range for queries\nend_time = datetime.now(timezone.utc)\nstart_time = end_time - timedelta(hours=24) # Last 24 hours\n\n# Perform UDM search\nresults = chronicle.search_udm(\n query=\"\"\"\n metadata.event_type = \"NETWORK_CONNECTION\"\n ip != \"\"\n \"\"\",\n start_time=start_time,\n end_time=end_time,\n max_events=5\n)\n\n# Example response:\n{\n \"events\": [\n {\n \"event\": {\n \"metadata\": {\n \"eventTimestamp\": \"2024-02-09T10:30:00Z\",\n \"eventType\": \"NETWORK_CONNECTION\"\n },\n \"target\": {\n \"ip\": \"192.168.1.100\",\n \"port\": 443\n },\n \"principal\": {\n \"hostname\": \"workstation-1\"\n }\n }\n }\n ],\n \"total_events\": 1\n}\n```\n\n### Statistics Queries\n\nGet statistics about network connections grouped by hostname:\n\n```python\nstats = chronicle.get_stats(\n query=\"\"\"metadata.event_type = \"NETWORK_CONNECTION\"\nmatch:\n target.hostname\noutcome:\n $count = count(metadata.id)\norder:\n $count desc\"\"\",\n start_time=start_time,\n end_time=end_time,\n max_events=1000,\n max_values=10\n)\n\n# Example response:\n{\n \"columns\": [\"hostname\", \"count\"],\n \"rows\": [\n {\"hostname\": \"server-1\", \"count\": 1500},\n {\"hostname\": \"server-2\", \"count\": 1200}\n ],\n \"total_rows\": 2\n}\n```\n\n### CSV Export\n\nExport specific fields to CSV format:\n\n```python\ncsv_data = chronicle.fetch_udm_search_csv(\n query='metadata.event_type = 
\"NETWORK_CONNECTION\"',\n start_time=start_time,\n end_time=end_time,\n fields=[\"timestamp\", \"user\", \"hostname\", \"process name\"]\n)\n\n# Example response:\n\"\"\"\nmetadata.eventTimestamp,principal.hostname,target.ip,target.port\n2024-02-09T10:30:00Z,workstation-1,192.168.1.100,443\n2024-02-09T10:31:00Z,workstation-2,192.168.1.101,80\n\"\"\"\n```\n\n### Query Validation\n\nValidate a UDM query before execution:\n\n```python\nquery = 'target.ip != \"\" and principal.hostname = \"test-host\"'\nvalidation = chronicle.validate_query(query)\n\n# Example response:\n{\n \"isValid\": true,\n \"queryType\": \"QUERY_TYPE_UDM_QUERY\",\n \"suggestedFields\": [\n \"target.ip\",\n \"principal.hostname\"\n ]\n}\n```\n\n### Natural Language Search\n\nSearch for events using natural language instead of UDM query syntax:\n\n```python\nfrom datetime import datetime, timedelta, timezone\n\n# Set time range for queries\nend_time = datetime.now(timezone.utc)\nstart_time = end_time - timedelta(hours=24) # Last 24 hours\n\n# Option 1: Translate natural language to UDM query\nudm_query = chronicle.translate_nl_to_udm(\"show me network connections\")\nprint(f\"Translated query: {udm_query}\")\n# Example output: 'metadata.event_type=\"NETWORK_CONNECTION\"'\n\n# Then run the query manually if needed\nresults = chronicle.search_udm(\n query=udm_query,\n start_time=start_time,\n end_time=end_time\n)\n\n# Option 2: Perform complete search with natural language\nresults = chronicle.nl_search(\n text=\"show me failed login attempts\",\n start_time=start_time,\n end_time=end_time,\n max_events=100\n)\n\n# Example response (same format as search_udm):\n{\n \"events\": [\n {\n \"event\": {\n \"metadata\": {\n \"eventTimestamp\": \"2024-02-09T10:30:00Z\",\n \"eventType\": \"USER_LOGIN\"\n },\n \"principal\": {\n \"user\": {\n \"userid\": \"jdoe\"\n }\n },\n \"securityResult\": {\n \"action\": \"BLOCK\",\n \"summary\": \"Failed login attempt\"\n }\n }\n }\n ],\n \"total_events\": 
The natural language search feature supports various query patterns:
- "Show me network connections"
- "Find suspicious processes"
- "Show login failures in the last hour"
- "Display connections to IP address 192.168.1.100"

If the natural language text cannot be translated to a valid UDM query, an `APIError` is raised with a message indicating that no valid query could be generated.

### Entity Summary

Get detailed information about specific entities:

```python
# IP address summary
ip_summary = chronicle.summarize_entity(
    start_time=start_time,
    end_time=end_time,
    value="192.168.1.100"  # Automatically detects IP
)

# Domain summary
domain_summary = chronicle.summarize_entity(
    start_time=start_time,
    end_time=end_time,
    value="example.com"  # Automatically detects domain
)

# File hash summary
file_summary = chronicle.summarize_entity(
    start_time=start_time,
    end_time=end_time,
    value="e17dd4eef8b4978673791ef4672f4f6a"  # Automatically detects MD5
)

# Example response structure:
{
    "entities": [
        {
            "name": "entities/...",
            "metadata": {
                "entityType": "ASSET",
                "interval": {
                    "startTime": "2024-02-08T10:30:00Z",
                    "endTime": "2024-02-09T10:30:00Z"
                }
            },
            "metric": {
                "firstSeen": "2024-02-08T10:30:00Z",
                "lastSeen": "2024-02-09T10:30:00Z"
            },
            "entity": {
                "asset": {
                    "ip": ["192.168.1.100"]
                }
            }
        }
    ],
    "alertCounts": [
        {
            "rule": "Suspicious Network Connection",
            "count": 5
        }
    ],
    "widgetMetadata": {
        "detections": 5,
        "total": 1000
    }
}
```

### Entity Summary from Query

Look up entities based on a UDM query:

```python
# Search for a specific file hash across multiple UDM paths
md5_hash = "e17dd4eef8b4978673791ef4672f4f6a"
query = f'target.file.md5 = "{md5_hash}" OR principal.file.md5 = "{md5_hash}"'

entity_summaries = chronicle.summarize_entities_from_query(
    query=query,
    start_time=start_time,
    end_time=end_time
)

# Example response:
[
    {
        "entities": [
            {
                "name": "entities/...",
                "metadata": {
                    "entityType": "FILE",
                    "interval": {
                        "startTime": "2024-02-08T10:30:00Z",
                        "endTime": "2024-02-09T10:30:00Z"
                    }
                },
                "metric": {
                    "firstSeen": "2024-02-08T10:30:00Z",
                    "lastSeen": "2024-02-09T10:30:00Z"
                },
                "entity": {
                    "file": {
                        "md5": "e17dd4eef8b4978673791ef4672f4f6a",
                        "sha1": "da39a3ee5e6b4b0d3255bfef95601890afd80709",
                        "filename": "suspicious.exe"
                    }
                }
            }
        ]
    }
]
```
### List IoCs (Indicators of Compromise)

Retrieve IoC matches against ingested events:

```python
iocs = chronicle.list_iocs(
    start_time=start_time,
    end_time=end_time,
    max_matches=1000,
    add_mandiant_attributes=True,
    prioritized_only=False
)

# Process the results
for ioc in iocs['matches']:
    ioc_type = next(iter(ioc['artifactIndicator'].keys()))
    ioc_value = next(iter(ioc['artifactIndicator'].values()))
    print(f"IoC Type: {ioc_type}, Value: {ioc_value}")
    print(f"Sources: {', '.join(ioc['sources'])}")
```

The IoC response includes:
- The indicator itself (domain, IP, hash, etc.)
- Sources and categories
- Affected assets in your environment
- First and last seen timestamps
- Confidence scores and severity ratings
- Associated threat actors and malware families (with Mandiant attributes)

### Alerts and Case Management

Retrieve alerts and their associated cases:

```python
# Get non-closed alerts
alerts = chronicle.get_alerts(
    start_time=start_time,
    end_time=end_time,
    snapshot_query='feedback_summary.status != "CLOSED"',
    max_alerts=100
)

# Get alerts from the response
alert_list = alerts.get('alerts', {}).get('alerts', [])

# Extract case IDs from alerts
case_ids = {alert.get('caseName') for alert in alert_list if alert.get('caseName')}

# Get case details
if case_ids:
    cases = chronicle.get_cases(list(case_ids))

    # Process cases
    for case in cases.cases:
        print(f"Case: {case.display_name}")
        print(f"Priority: {case.priority}")
        print(f"Status: {case.status}")
```

The alerts response includes:
- Progress status and completion status
- Alert counts (baseline and filtered)
- Alert details (rule information, detection details, etc.)
- Case associations

You can filter alerts using the snapshot query parameter with fields like:
- `detection.rule_name`
- `detection.alert_state`
- `feedback_summary.verdict`
- `feedback_summary.priority`
- `feedback_summary.status`
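Such predicates can be assembled into a single snapshot query string. The helper below is purely illustrative (not part of the SDK), and combining predicates with `AND` is our assumption about the snapshot query syntax rather than something documented here:

```python
def build_snapshot_query(filters):
    """Join field/value pairs into a snapshot query string.

    filters: dict mapping a field path to the desired value, e.g.
    {"feedback_summary.priority": "PRIORITY_HIGH"}.
    """
    # Each predicate follows the field = "value" shape used above;
    # AND-combining them is assumed, not documented.
    return " AND ".join(f'{field} = "{value}"' for field, value in filters.items())


snapshot_query = build_snapshot_query({
    "feedback_summary.status": "OPEN",
    "feedback_summary.priority": "PRIORITY_HIGH",
})
# e.g. chronicle.get_alerts(..., snapshot_query=snapshot_query)
```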
### Case Management Helpers

The `CaseList` class provides helper methods for working with cases:

```python
# Get details for specific cases
cases = chronicle.get_cases(["case-id-1", "case-id-2"])

# Filter cases by priority
high_priority = cases.filter_by_priority("PRIORITY_HIGH")

# Filter cases by status
open_cases = cases.filter_by_status("STATUS_OPEN")

# Look up a specific case
case = cases.get_case("case-id-1")
```

## Rule Management

The SDK provides comprehensive support for managing Chronicle detection rules:

### Creating Rules

Create new detection rules using YARA-L 2.0 syntax:

```python
rule_text = """
rule simple_network_rule {
    meta:
        description = "Example rule to detect network connections"
        author = "SecOps SDK Example"
        severity = "Medium"
        priority = "Medium"
        yara_version = "YL2.0"
        rule_version = "1.0"
    events:
        $e.metadata.event_type = "NETWORK_CONNECTION"
        $e.principal.hostname != ""
    condition:
        $e
}
"""

# Create the rule
rule = chronicle.create_rule(rule_text)
rule_id = rule.get("name", "").split("/")[-1]
print(f"Rule ID: {rule_id}")
```

### Managing Rules

Retrieve, list, update, enable/disable, and delete rules:

```python
# List all rules
rules = chronicle.list_rules()
for rule in rules.get("rules", []):
    rule_id = rule.get("name", "").split("/")[-1]
    enabled = rule.get("deployment", {}).get("enabled", False)
    print(f"Rule ID: {rule_id}, Enabled: {enabled}")

# Get specific rule
rule = chronicle.get_rule(rule_id)
print(f"Rule content: {rule.get('text')}")

# Update rule
updated_rule = chronicle.update_rule(rule_id, updated_rule_text)

# Enable/disable rule
deployment = chronicle.enable_rule(rule_id, enabled=True)   # Enable
deployment = chronicle.enable_rule(rule_id, enabled=False)  # Disable

# Delete rule
chronicle.delete_rule(rule_id)
```
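The examples above repeatedly derive an ID from a resource name with `.split("/")[-1]`. A tiny helper (ours, not part of the SDK) makes that intent explicit and reusable:

```python
def id_from_resource_name(resource_name: str) -> str:
    """Return the trailing ID segment of a Chronicle resource name.

    Resource names end in ".../<id>", so the last path segment is the
    ID; an empty name yields an empty string.
    """
    return resource_name.rsplit("/", 1)[-1]
```

The same pattern applies wherever the SDK returns a full resource name, such as the retrohunt operation names shown below.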
### Retrohunts

Run rules against historical data to find past matches:

```python
from datetime import datetime, timedelta, timezone

# Set time range for retrohunt
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(days=7)  # Search past 7 days

# Create retrohunt
retrohunt = chronicle.create_retrohunt(rule_id, start_time, end_time)
operation_id = retrohunt.get("name", "").split("/")[-1]

# Check retrohunt status
retrohunt_status = chronicle.get_retrohunt(rule_id, operation_id)
is_complete = retrohunt_status.get("metadata", {}).get("done", False)
```

### Detections and Errors

Monitor rule detections and execution errors:

```python
# List detections for a rule
detections = chronicle.list_detections(rule_id)
for detection in detections.get("detections", []):
    detection_id = detection.get("id", "")
    event_time = detection.get("eventTime", "")
    alerting = detection.get("alertState", "") == "ALERTING"
    print(f"Detection: {detection_id}, Time: {event_time}, Alerting: {alerting}")

# List execution errors for a rule
errors = chronicle.list_errors(rule_id)
for error in errors.get("ruleExecutionErrors", []):
    error_message = error.get("error_message", "")
    create_time = error.get("create_time", "")
    print(f"Error: {error_message}, Time: {create_time}")
```
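Retrohunts complete asynchronously, so callers typically poll until the operation reports `done`. A generic polling sketch follows; the helper name, `time.sleep` loop, and timeout behavior are our additions, not SDK features:

```python
import time


def wait_until_done(fetch_status, interval_seconds=5, max_attempts=60):
    """Poll fetch_status() until the operation reports completion.

    fetch_status: a zero-argument callable returning an operation dict,
    e.g. lambda: chronicle.get_retrohunt(rule_id, operation_id).
    Returns the final status dict, or raises TimeoutError.
    """
    for _ in range(max_attempts):
        status = fetch_status()
        if status.get("metadata", {}).get("done", False):
            return status
        time.sleep(interval_seconds)
    raise TimeoutError("operation did not complete in time")
```

Passing a callable keeps the helper testable and independent of any one endpoint.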
### Rule Alerts

Search for alerts generated by rules:

```python
# Set time range for alert search
end_time = datetime.now(timezone.utc)
start_time = end_time - timedelta(days=7)  # Search past 7 days

# Search for rule alerts
alerts_response = chronicle.search_rule_alerts(
    start_time=start_time,
    end_time=end_time,
    page_size=10
)

too_many_alerts = alerts_response.get('tooManyAlerts', False)

# The API returns a nested structure where alerts are grouped by rule
for rule_alert in alerts_response.get('ruleAlerts', []):
    # Extract rule metadata
    rule_metadata = rule_alert.get('ruleMetadata', {})
    rule_id = rule_metadata.get('properties', {}).get('ruleId', 'Unknown')
    rule_name = rule_metadata.get('properties', {}).get('name', 'Unknown')

    # Get alerts for this rule
    rule_alerts = rule_alert.get('alerts', [])

    # Process each alert
    for alert in rule_alerts:
        # Extract important fields
        alert_id = alert.get("id", "")
        detection_time = alert.get("detectionTimestamp", "")
        commit_time = alert.get("commitTimestamp", "")
        alerting_type = alert.get("alertingType", "")

        print(f"Alert ID: {alert_id}")
        print(f"Rule ID: {rule_id}")
        print(f"Rule Name: {rule_name}")
        print(f"Detection Time: {detection_time}")

        # Extract events from the alert
        if 'resultEvents' in alert:
            for var_name, event_data in alert.get('resultEvents', {}).items():
                if 'eventSamples' in event_data:
                    for sample in event_data.get('eventSamples', []):
                        if 'event' in sample:
                            event = sample['event']
                            # Process event data
                            event_type = event.get('metadata', {}).get('eventType', 'Unknown')
                            print(f"Event Type: {event_type}")
```

If `tooManyAlerts` is `True` in the response, narrow your search criteria with a smaller time window or more specific filters.

### Rule Sets

Manage curated rule sets:

```python
# Define deployments for rule sets
deployments = [
    {
        "category_id": "category-uuid",
        "rule_set_id": "ruleset-uuid",
        "precision": "broad",
        "enabled": True,
        "alerting": False
    }
]

# Update rule set deployments
chronicle.batch_update_curated_rule_set_deployments(deployments)
```
### Rule Validation

Validate a YARA-L 2.0 rule before creating or updating it:

```python
# Example rule
rule_text = """
rule test_rule {
    meta:
        description = "Test rule for validation"
        author = "Test Author"
        severity = "Low"
        yara_version = "YL2.0"
        rule_version = "1.0"
    events:
        $e.metadata.event_type = "NETWORK_CONNECTION"
    condition:
        $e
}
"""

# Validate the rule
result = chronicle.validate_rule(rule_text)

if result.success:
    print("Rule is valid")
else:
    print(f"Rule is invalid: {result.message}")
    if result.position:
        print(f"Error at line {result.position['startLine']}, column {result.position['startColumn']}")
```

## Error Handling

The SDK defines several custom exceptions:

```python
from secops.exceptions import SecOpsError, AuthenticationError, APIError

try:
    results = chronicle.search_udm(...)
except AuthenticationError as e:
    print(f"Authentication failed: {e}")
except APIError as e:
    print(f"API request failed: {e}")
except SecOpsError as e:
    print(f"General error: {e}")
```

## Value Type Detection

The SDK automatically detects these entity types:
- IPv4 addresses
- MD5/SHA1/SHA256 hashes
- Domain names
- Email addresses
- MAC addresses
- Hostnames

Example of automatic detection:

```python
# These will automatically use the correct field paths and value types
ip_summary = chronicle.summarize_entity(value="192.168.1.100")
domain_summary = chronicle.summarize_entity(value="example.com")
hash_summary = chronicle.summarize_entity(value="e17dd4eef8b4978673791ef4672f4f6a")
```

You can also override the automatic detection:

```python
summary = chronicle.summarize_entity(
    value="example.com",
    field_path="custom.field.path",  # Override automatic detection
    value_type="DOMAIN_NAME"  # Explicitly set value type
)
```
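To illustrate the kind of classification involved, the sketch below approximates hash and IP detection with regular expressions. It is our simplified illustration only — the SDK's actual detection logic is not shown here and also covers email addresses, MAC addresses, and hostnames:

```python
import re

# Illustrative patterns only; deliberately loose (e.g. the IPv4
# pattern does not range-check octets).
_MD5 = re.compile(r"^[0-9a-fA-F]{32}$")
_SHA256 = re.compile(r"^[0-9a-fA-F]{64}$")
_IPV4 = re.compile(r"^(\d{1,3}\.){3}\d{1,3}$")


def guess_value_type(value: str) -> str:
    """Roughly classify a value the way automatic detection might."""
    if _IPV4.match(value):
        return "IP"
    if _MD5.match(value):
        return "MD5"
    if _SHA256.match(value):
        return "SHA256"
    return "DOMAIN_NAME" if "." in value else "UNKNOWN"
```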
## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.