# AI Security Scanner: Test Your AI Applications for Prompt Injection Vulnerabilities
Test your AI chatbots and agents for prompt injection vulnerabilities using the SonnyLabs scanning service. Receive a comprehensive vulnerability report by email within 24 hours.
## Table of Contents
- [About](#about)
- [Security Risks in AI Applications](#security-risks-in-ai-applications)
- [Installation](#installation)
- [Pre-requisites](#pre-requisites)
- [Quick 3-Step Integration](#quick-3-step-integration)
- [Prompt to Integrate SonnyLabs to your AI application](#prompt-to-integrate-sonnylabs-to-your-ai-application)
- [Example](#example)
- [API Reference](#api-reference)
- [License](#license)
## About
SonnyLabs.ai is a cybersecurity testing service that scans AI applications for prompt injection vulnerabilities. Our scanner analyzes both user inputs and AI outputs to identify security weaknesses in your AI chatbots and agents. After testing, you'll receive a comprehensive vulnerability report by email within 24 hours.
This package is a simple Python client for the SonnyLabs vulnerability scanning service. There are 10,000 free scanning requests per month for testing your AI applications.
## When to Use SonnyLabs
SonnyLabs is designed specifically for the **testing phase** of your AI development lifecycle, not for production deployment with real users. Implement this tool:
- During pre-deployment security testing
- In dedicated QA/testing environments
- As part of your CI/CD pipeline for automated security testing
- When conducting penetration testing of your AI application
- Before releasing new AI features or models
The goal is to identify and address prompt injection vulnerabilities before your AI application goes live, enhancing your security posture proactively rather than monitoring production traffic.
## Security Risks in AI Applications
### Prompt Injection
Prompt injections are malicious inputs to AI applications that were designed by the user to manipulate an LLM into ignoring its original instructions or safety controls.
Risks associated with prompt injections:
- Bypassing content filters and safety mechanisms
- Extracting confidential system instructions
- Causing the LLM to perform unauthorized actions
- Compromising application security
The SonnyLabs vulnerability scanner provides a way to test your AI applications for prompt injection vulnerabilities without disrupting user interactions. You'll receive a comprehensive vulnerability report by email within 24 hours, detailing any security weaknesses found in both user inputs and AI responses.
## REST API output example on the input prompt
```json
{
"analysis": [
{
"type": "score",
"name": "prompt_injection",
"result": 0.99
}
],
"tag": "unique-request-identifier"
}
```
## Installation
The package is available on PyPI and can be installed using pip:
```bash
pip install sonnylabs
```
Alternatively, you can install directly from GitHub for the latest development version:
```bash
pip install git+https://github.com/SonnyLabs/sonnylabs_py
```
Or clone the repository and install locally:
```bash
git clone https://github.com/SonnyLabs/sonnylabs_py
cd sonnylabs_py
pip install -e .
```
## Pre-requisites
These are the pre-requisites for this package and to use the SonnyLabs REST API.
- Python 3.7 or higher
- An AI application/AI agent to integrate SonnyLabs with
- [A Sonnylabs account](https://sonnylabs-service.onrender.com)
- [A SonnyLabs API key](https://sonnylabs-service.onrender.com/analysis/api-keys)
- [A SonnyLabs analysis ID](https://sonnylabs-service.onrender.com/analysis)
- Securely store your API key and analysis ID (we recommend using a secure method like environment variables or a secrets manager)
### To register to SonnyLabs
1. Go to https://sonnylabs-service.onrender.com and register.
2. Confirm your email address, and login to your new SonnyLabs account.
### To get a SonnyLabs API key:
1. Go to [API Keys](https://sonnylabs-service.onrender.com/analysis/api-keys).
2. Select + Generate New API Key.
3. Copy the generated API key.
4. Store this API key securely for use in your application.
### To get a SonnyLabs analysis ID:
1. Go to [Analysis](https://sonnylabs-service.onrender.com/analysis).
2. Create a new analysis and name it after the AI application/AI agent you will be auditing.
3. After you press Submit, you will be brought to the empty analysis page.
4. The analysis ID is the last part of the URL, like https://sonnylabs-service.onrender.com/analysis/{analysis_id}. Note that the analysis ID can also be found in the [SonnyLabs analysis dashboard](https://sonnylabs-service.onrender.com/analysis).
5. Store this analysis ID securely for use in your application.
> **Note:** We recommend storing your API key and analysis ID securely using environment variables or a secrets manager, not hardcoded in your application code.
> **Performance:** The SonnyLabs service operates with sub-200ms latency (one fifth of a second) per prompt input or AI output, ensuring minimal impact on your application's performance while collecting data for vulnerability analysis.
## Quick 3-Step Integration
Getting started with SonnyLabs is simple. The most important function to know is `analyze_text()`, which is the core method for analyzing content for security risks.
### 1. Install and initialize the client
```python
# Install the SDK
pip install sonnylabs
# In your application
from sonnylabs import SonnyLabsClient
# Initialize the client with your securely stored credentials
client = SonnyLabsClient(
api_token="YOUR_API_TOKEN", # Replace with your actual API key or use a secure method to retrieve it
analysis_id="YOUR_ANALYSIS_ID", # Replace with your actual ID or use a secure method to retrieve it
base_url="https://sonnylabs-service.onrender.com" # Optional, this is the default value
)
```
### 2. Analyze input/output with a single function call
```python
# Send user input to the SonnyLabs API without showing results to users
input_result = client.analyze_text("User message here", scan_type="input")
# Process the message normally (no blocking)
ai_response = "AI response here"
# Link AI response with the input using the same tag
output_result = client.analyze_text(ai_response, scan_type="output", tag=input_result["tag"])
# All analysis happens on the backend and results are available in your SonnyLabs dashboard
```
For more advanced usage and complete examples, see the sections below.
## API Reference
This section documents all functions available in the SonnyLabsClient, their parameters, return values, and usage.
### Initialization
```python
SonnyLabsClient(api_token, base_url, analysis_id, timeout=5)
```
**Parameters:**
- `api_token` (str, **required**): Your SonnyLabs API key (previously called api_token, both are supported for backward compatibility).
- `base_url` (str, **required**): Base URL for the SonnyLabs API (e.g., "https://sonnylabs-service.onrender.com").
- `analysis_id` (str, **required**): The analysis ID associated with your application.
- `timeout` (int, optional): Request timeout in seconds. Default is 5 seconds.
### Core Analysis Methods
#### `analyze_text(text, scan_type="input", tag=None)`
**Description:** The primary method for analyzing text content for security risks.
**Parameters:**
- `text` (str, **required**): The text content to analyze.
- `scan_type` (str, optional): Either "input" (user message) or "output" (AI response). Default is "input".
- `tag` (str, optional): A unique identifier for linking related analyses. If not provided, one will be generated.
**Returns:** Dictionary with analysis results:
```python
{
"success": True, # Whether the API call was successful
"tag": "unique_tag", # The tag used for this analysis
"analysis": [ # Array of analysis results
{"type": "score", "name": "prompt_injection", "result": 0.8}
]
}
```
### Analysis
All prompt injection analysis is performed on the SonnyLabs backend. You only need to submit your data using the `analyze_text` function. Results will be available in your SonnyLabs dashboard after analysis is complete.
## Prompt to Integrate SonnyLabs to your AI application
Here is an example prompt to give to your IDE's LLM (Cursor, VSCode, Windsurf etc) to integrate the Sonnylabs REST API to your AI application.
```
As an expert AI developer, help me integrate SonnyLabs security auditing into my existing AI application.
I need to implement vulnerability scanning for my AI application:
1. Send test user inputs to SonnyLabs for vulnerability analysis
2. Send my AI's responses to SonnyLabs for security testing
3. Link user prompts with AI responses to identify potential vulnerabilities
I've already installed the SonnyLabs Python SDK using pip and have my API key and analysis ID from the SonnyLabs dashboard.
Please provide a step-by-step implementation guide including:
- How to initialize the SonnyLabs vulnerability scanner client
- How to send test inputs to SonnyLabs for security analysis
- How to send AI outputs to SonnyLabs for vulnerability detection
- How to properly use the 'tag' parameter to link prompts with their responses
- How to integrate this testing process with minimal code changes
Note: I understand that all vulnerability reports will be sent by email within 24 hours and I don't need to process any results directly in my application.
```
### Quick Start
```python
from sonnylabs import SonnyLabsClient
import os
from dotenv import load_dotenv
# Load API key from environment (recommended)
load_dotenv()
api_token = os.getenv("SONNYLABS_API_TOKEN")
analysis_id = os.getenv("SONNYLABS_ANALYSIS_ID")
# Initialize the client with your securely stored credentials
client = SonnyLabsClient(
api_token=api_token,
analysis_id=analysis_id,
base_url="https://sonnylabs-service.onrender.com"
)
# Analyze text for prompt injection risk (input)
result = client.analyze_text("Hello, how can I help you today?", scan_type="input")
print(f"Prompt injection score: {result['analysis'][0]['result']}")
# If you want to link an input with its corresponding output, change the scan_type from "input" to "output" but reuse the tag:
tag = result["tag"]
response = "I'm an AI assistant, I'd be happy to help!"
output_result = client.analyze_text(response, scan_type="output", tag=tag)
```
## Integrating with a Chatbot
Here's how to integrate the SDK into a Python chatbot to audit all security risks without blocking any messages:
### Set up the client
```python
from sonnylabs import SonnyLabsClient
import os
from dotenv import load_dotenv
# Load environment variables (recommended)
load_dotenv()
# Initialize the SonnyLabs client with your securely stored credentials
sonnylabs_client = SonnyLabsClient(
api_token=os.getenv("SONNYLABS_API_TOKEN"),
analysis_id=os.getenv("SONNYLABS_ANALYSIS_ID"),
base_url="https://sonnylabs-service.onrender.com"
)
```
### Implement message handling with audit-only logging
```python
def handle_user_message(user_message):
# Step 1: Send the user message to SonnyLabs (silently, no user-facing results)
analysis_result = sonnylabs_client.analyze_text(user_message, scan_type="input")
# Step 2: Process the message normally
bot_response = generate_bot_response(user_message)
# Step 3: Send the AI response using the same tag to link it with the input
tag = analysis_result["tag"] # Reuse the tag from the input analysis
sonnylabs_client.analyze_text(bot_response, scan_type="output", tag=tag)
# No need to process any results as everything is analyzed on the backend
# Always return the response (audit-only mode)
return bot_response
def generate_bot_response(user_message):
# Your existing chatbot logic here
# This could be a call to an LLM API or other response generation logic
return "This is the chatbot's response"
```
### License
This project is licensed under the MIT License - see the LICENSE file for details.
Raw data
{
"_id": null,
"home_page": null,
"name": "sonnylabs",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.7",
"maintainer_email": null,
"keywords": "ai, security, prompt injection, vulnerability, scanner, cybersecurity, testing",
"author": null,
"author_email": "SonnyLabs <liana@sonnylabs.ai>",
"download_url": "https://files.pythonhosted.org/packages/29/65/f6b2cb740e503ff077fc5117d225c99d16849d71127061846ff1d2cca2c6/sonnylabs-0.1.2.tar.gz",
"platform": null,
"description": "# AI Security Scanner: Test Your AI Applications for Prompt Injection Vulnerabilities\r\n\r\nTest your AI chatbots and agents for prompt injection vulnerabilities using the SonnyLabs scanning service. Receive a comprehensive vulnerability report by email within 24 hours.\r\n\r\n## Table of Contents\r\n\r\n- [About](#about)\r\n- [Security Risks in AI Applications](#security-risks-in-ai-applications)\r\n- [Installation](#installation)\r\n- [Pre-requisites](#pre-requisites)\r\n- [Quick 3-Step Integration](#quick-3-step-integration)\r\n- [Prompt to Integrate SonnyLabs to your AI application](#prompt-to-integrate-sonnylabs-to-your-ai-application)\r\n- [Example](#example)\r\n- [API Reference](#api-reference)\r\n- [License](#license)\r\n\r\n## About\r\n\r\nSonnyLabs.ai is a cybersecurity testing service that scans AI applications for prompt injection vulnerabilities. Our scanner analyzes both user inputs and AI outputs to identify security weaknesses in your AI chatbots and agents. After testing, you'll receive a comprehensive vulnerability report by email within 24 hours.\r\n\r\nThis package is a simple Python client for the SonnyLabs vulnerability scanning service. There are 10,000 free scanning requests per month for testing your AI applications.\r\n\r\n## When to Use SonnyLabs\r\n\r\nSonnyLabs is designed specifically for the **testing phase** of your AI development lifecycle, not for production deployment with real users. Implement this tool:\r\n\r\n- During pre-deployment security testing\r\n- In dedicated QA/testing environments\r\n- As part of your CI/CD pipeline for automated security testing\r\n- When conducting penetration testing of your AI application\r\n- Before releasing new AI features or models\r\n\r\nThe goal is to identify and address prompt injection vulnerabilities before your AI application goes live, enhancing your security posture proactively rather than monitoring production traffic.\r\n\r\n## Security Risks in AI Applications \r\n\r\n### Prompt Injection\r\nPrompt injections are malicious inputs to AI applications that were designed by the user to manipulate an LLM into ignoring its original instructions or safety controls.\r\n\r\nRisks associated with prompt injections:\r\n- Bypassing content filters and safety mechanisms\r\n- Extracting confidential system instructions\r\n- Causing the LLM to perform unauthorized actions\r\n- Compromising application security\r\n\r\nThe SonnyLabs vulnerability scanner provides a way to test your AI applications for prompt injection vulnerabilities without disrupting user interactions. You'll receive a comprehensive vulnerability report by email within 24 hours, detailing any security weaknesses found in both user inputs and AI responses.\r\n\r\n## REST API output example on the input prompt\r\n\r\n```json \r\n{\r\n \"analysis\": [\r\n {\r\n \"type\": \"score\",\r\n \"name\": \"prompt_injection\",\r\n \"result\": 0.99\r\n }\r\n ],\r\n \"tag\": \"unique-request-identifier\"\r\n}\r\n```\r\n\r\n## Installation\r\n\r\nThe package is available on PyPI and can be installed using pip:\r\n\r\n```bash\r\npip install sonnylabs\r\n```\r\n\r\nAlternatively, you can install directly from GitHub for the latest development version:\r\n\r\n```bash\r\npip install git+https://github.com/SonnyLabs/sonnylabs_py\r\n```\r\n\r\nOr clone the repository and install locally:\r\n\r\n```bash\r\ngit clone https://github.com/SonnyLabs/sonnylabs_py\r\ncd sonnylabs_py\r\npip install -e .\r\n```\r\n\r\n## Pre-requisites \r\nThese are the pre-requisites for this package and to use the SonnyLabs REST API.\r\n\r\n- Python 3.7 or higher\r\n- An AI application/AI agent to integrate SonnyLabs with\r\n- [A Sonnylabs account](https://sonnylabs-service.onrender.com)\r\n- [A SonnyLabs API key](https://sonnylabs-service.onrender.com/analysis/api-keys)\r\n- [A SonnyLabs analysis ID](https://sonnylabs-service.onrender.com/analysis) \r\n- Securely store your API key and analysis ID (we recommend using a secure method like environment variables or a secrets manager)\r\n\r\n### To register to SonnyLabs\r\n\r\n1. Go to https://sonnylabs-service.onrender.com and register. \r\n2. Confirm your email address, and login to your new SonnyLabs account.\r\n\r\n### To get a SonnyLabs API key:\r\n1. Go to [API Keys](https://sonnylabs-service.onrender.com/analysis/api-keys).\r\n2. Select + Generate New API Key.\r\n3. Copy the generated API key.\r\n4. Store this API key securely for use in your application.\r\n\r\n### To get a SonnyLabs analysis ID:\r\n1. Go to [Analysis](https://sonnylabs-service.onrender.com/analysis).\r\n2. Create a new analysis and name it after the AI application/AI agent you will be auditing.\r\n3. After you press Submit, you will be brought to the empty analysis page.\r\n4. The analysis ID is the last part of the URL, like https://sonnylabs-service.onrender.com/analysis/{analysis_id}. Note that the analysis ID can also be found in the [SonnyLabs analysis dashboard](https://sonnylabs-service.onrender.com/analysis).\r\n5. Store this analysis ID securely for use in your application.\r\n\r\n> **Note:** We recommend storing your API key and analysis ID securely using environment variables or a secrets manager, not hardcoded in your application code.\r\n\r\n> **Performance:** The SonnyLabs service operates with sub-200ms latency (one fifth of a second) per prompt input or AI output, ensuring minimal impact on your application's performance while collecting data for vulnerability analysis.\r\n\r\n## Quick 3-Step Integration\r\n\r\nGetting started with SonnyLabs is simple. The most important function to know is `analyze_text()`, which is the core method for analyzing content for security risks.\r\n\r\n### 1. Install and initialize the client\r\n\r\n```python\r\n# Install the SDK\r\npip install sonnylabs\r\n\r\n# In your application\r\nfrom sonnylabs import SonnyLabsClient\r\n\r\n# Initialize the client with your securely stored credentials\r\nclient = SonnyLabsClient(\r\n api_token=\"YOUR_API_TOKEN\", # Replace with your actual API key or use a secure method to retrieve it\r\n analysis_id=\"YOUR_ANALYSIS_ID\", # Replace with your actual ID or use a secure method to retrieve it\r\n base_url=\"https://sonnylabs-service.onrender.com\" # Optional, this is the default value\r\n)\r\n```\r\n\r\n### 2. Analyze input/output with a single function call\r\n\r\n```python\r\n# Send user input to the SonnyLabs API without showing results to users\r\ninput_result = client.analyze_text(\"User message here\", scan_type=\"input\")\r\n\r\n# Process the message normally (no blocking)\r\nai_response = \"AI response here\"\r\n\r\n# Link AI response with the input using the same tag\r\noutput_result = client.analyze_text(ai_response, scan_type=\"output\", tag=input_result[\"tag\"])\r\n\r\n# All analysis happens on the backend and results are available in your SonnyLabs dashboard\r\n```\r\n\r\nFor more advanced usage and complete examples, see the sections below.\r\n\r\n## API Reference\r\n\r\nThis section documents all functions available in the SonnyLabsClient, their parameters, return values, and usage.\r\n\r\n### Initialization\r\n\r\n```python\r\nSonnyLabsClient(api_token, base_url, analysis_id, timeout=5)\r\n```\r\n\r\n**Parameters:**\r\n- `api_token` (str, **required**): Your SonnyLabs API key (previously called api_token, both are supported for backward compatibility).\r\n- `base_url` (str, **required**): Base URL for the SonnyLabs API (e.g., \"https://sonnylabs-service.onrender.com\").\r\n- `analysis_id` (str, **required**): The analysis ID associated with your application.\r\n- `timeout` (int, optional): Request timeout in seconds. Default is 5 seconds.\r\n\r\n### Core Analysis Methods\r\n\r\n#### `analyze_text(text, scan_type=\"input\", tag=None)`\r\n\r\n**Description:** The primary method for analyzing text content for security risks.\r\n\r\n**Parameters:**\r\n- `text` (str, **required**): The text content to analyze.\r\n- `scan_type` (str, optional): Either \"input\" (user message) or \"output\" (AI response). Default is \"input\".\r\n- `tag` (str, optional): A unique identifier for linking related analyses. If not provided, one will be generated.\r\n\r\n**Returns:** Dictionary with analysis results:\r\n```python\r\n{\r\n \"success\": True, # Whether the API call was successful\r\n \"tag\": \"unique_tag\", # The tag used for this analysis\r\n \"analysis\": [ # Array of analysis results\r\n {\"type\": \"score\", \"name\": \"prompt_injection\", \"result\": 0.8}\r\n ]\r\n}\r\n```\r\n\r\n### Analysis\r\n\r\nAll prompt injection analysis is performed on the SonnyLabs backend. You only need to submit your data using the `analyze_text` function. Results will be available in your SonnyLabs dashboard after analysis is complete.\r\n\r\n\r\n\r\n## Prompt to Integrate SonnyLabs to your AI application\r\nHere is an example prompt to give to your IDE's LLM (Cursor, VSCode, Windsurf etc) to integrate the Sonnylabs REST API to your AI application.\r\n\r\n```\r\nAs an expert AI developer, help me integrate SonnyLabs security auditing into my existing AI application.\r\n\r\nI need to implement vulnerability scanning for my AI application:\r\n1. Send test user inputs to SonnyLabs for vulnerability analysis\r\n2. Send my AI's responses to SonnyLabs for security testing\r\n3. Link user prompts with AI responses to identify potential vulnerabilities\r\n\r\nI've already installed the SonnyLabs Python SDK using pip and have my API key and analysis ID from the SonnyLabs dashboard.\r\n\r\nPlease provide a step-by-step implementation guide including:\r\n- How to initialize the SonnyLabs vulnerability scanner client\r\n- How to send test inputs to SonnyLabs for security analysis\r\n- How to send AI outputs to SonnyLabs for vulnerability detection\r\n- How to properly use the 'tag' parameter to link prompts with their responses\r\n- How to integrate this testing process with minimal code changes\r\n\r\nNote: I understand that all vulnerability reports will be sent by email within 24 hours and I don't need to process any results directly in my application.\r\n```\r\n\r\n### Quick Start\r\n```python\r\nfrom sonnylabs import SonnyLabsClient\r\nimport os\r\nfrom dotenv import load_dotenv\r\n\r\n# Load API key from environment (recommended)\r\nload_dotenv()\r\napi_token = os.getenv(\"SONNYLABS_API_TOKEN\")\r\nanalysis_id = os.getenv(\"SONNYLABS_ANALYSIS_ID\")\r\n\r\n# Initialize the client with your securely stored credentials\r\nclient = SonnyLabsClient(\r\n api_token=api_token,\r\n analysis_id=analysis_id,\r\n base_url=\"https://sonnylabs-service.onrender.com\" \r\n)\r\n\r\n# Analyze text for prompt injection risk (input)\r\nresult = client.analyze_text(\"Hello, how can I help you today?\", scan_type=\"input\")\r\nprint(f\"Prompt injection score: {result['analysis'][0]['result']}\")\r\n\r\n# If you want to link an input with its corresponding output, change the scan_type from \"input\" to \"output\" but reuse the tag:\r\ntag = result[\"tag\"]\r\nresponse = \"I'm an AI assistant, I'd be happy to help!\"\r\noutput_result = client.analyze_text(response, scan_type=\"output\", tag=tag)\r\n```\r\n\r\n## Integrating with a Chatbot\r\nHere's how to integrate the SDK into a Python chatbot to audit all security risks without blocking any messages:\r\n\r\n### Set up the client\r\n```python \r\nfrom sonnylabs import SonnyLabsClient\r\nimport os\r\nfrom dotenv import load_dotenv\r\n\r\n# Load environment variables (recommended)\r\nload_dotenv()\r\n\r\n# Initialize the SonnyLabs client with your securely stored credentials\r\nsonnylabs_client = SonnyLabsClient(\r\n api_token=os.getenv(\"SONNYLABS_API_TOKEN\"),\r\n analysis_id=os.getenv(\"SONNYLABS_ANALYSIS_ID\"),\r\n base_url=\"https://sonnylabs-service.onrender.com\"\r\n)\r\n```\r\n\r\n### Implement message handling with audit-only logging\r\n\r\n```python \r\ndef handle_user_message(user_message):\r\n # Step 1: Send the user message to SonnyLabs (silently, no user-facing results)\r\n analysis_result = sonnylabs_client.analyze_text(user_message, scan_type=\"input\")\r\n \r\n # Step 2: Process the message normally\r\n bot_response = generate_bot_response(user_message)\r\n \r\n # Step 3: Send the AI response using the same tag to link it with the input\r\n tag = analysis_result[\"tag\"] # Reuse the tag from the input analysis\r\n sonnylabs_client.analyze_text(bot_response, scan_type=\"output\", tag=tag)\r\n \r\n # No need to process any results as everything is analyzed on the backend\r\n \r\n # Always return the response (audit-only mode)\r\n return bot_response\r\n\r\ndef generate_bot_response(user_message):\r\n # Your existing chatbot logic here\r\n # This could be a call to an LLM API or other response generation logic\r\n return \"This is the chatbot's response\"\r\n```\r\n\r\n### License\r\nThis project is licensed under the MIT License - see the LICENSE file for details.\r\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Python client for the SonnyLabs AI Security Scanner - Test your AI applications for prompt injection vulnerabilities",
"version": "0.1.2",
"project_urls": {
"Bug Tracker": "https://github.com/SonnyLabs/sonnylabs_py/issues",
"Documentation": "https://github.com/SonnyLabs/sonnylabs_py#readme",
"Homepage": "https://sonnylabs.ai",
"Repository": "https://github.com/SonnyLabs/sonnylabs_py"
},
"split_keywords": [
"ai",
" security",
" prompt injection",
" vulnerability",
" scanner",
" cybersecurity",
" testing"
],
"urls": [
{
"comment_text": null,
"digests": {
"blake2b_256": "227cc6458f5a7490a655ce112c84f008469134e5e59655ed9ae8d9bb09f60fc0",
"md5": "bd74677e021f2cf6205c17726c34602c",
"sha256": "cc7ed06795c5f73a30891bd7078ab45aa6673ef74b5268e5e8bb2cf58ee88b61"
},
"downloads": -1,
"filename": "sonnylabs-0.1.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "bd74677e021f2cf6205c17726c34602c",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.7",
"size": 8243,
"upload_time": "2025-07-26T15:05:11",
"upload_time_iso_8601": "2025-07-26T15:05:11.394347Z",
"url": "https://files.pythonhosted.org/packages/22/7c/c6458f5a7490a655ce112c84f008469134e5e59655ed9ae8d9bb09f60fc0/sonnylabs-0.1.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": null,
"digests": {
"blake2b_256": "2965f6b2cb740e503ff077fc5117d225c99d16849d71127061846ff1d2cca2c6",
"md5": "897dda4519f25dec800bfda9ba6114c1",
"sha256": "ab6c9f0d57a000a94019b3bf8c2666f61a4d9788ac023c3f2a118bc05d8c8595"
},
"downloads": -1,
"filename": "sonnylabs-0.1.2.tar.gz",
"has_sig": false,
"md5_digest": "897dda4519f25dec800bfda9ba6114c1",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.7",
"size": 11544,
"upload_time": "2025-07-26T15:05:12",
"upload_time_iso_8601": "2025-07-26T15:05:12.265782Z",
"url": "https://files.pythonhosted.org/packages/29/65/f6b2cb740e503ff077fc5117d225c99d16849d71127061846ff1d2cca2c6/sonnylabs-0.1.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2025-07-26 15:05:12",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "SonnyLabs",
"github_project": "sonnylabs_py",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "sonnylabs"
}