## SafeAI Face Detection & Face Retrieval
SafeAI face detection and face retrieval library
<br>
## Installation
```bash
pip install safeai-face
```
<br>
## Functions
### Face Detection
The [ face_detection ] function detects faces in an image using a YOLO-based model. It supports both single-frame detection and tracking across multiple frames. The function returns a list of detected faces, each containing the bounding box coordinates, a tracking ID (if tracking is enabled), and a confidence score.
**Example Usage**
```python
import cv2
from safevision_face import face_detection
image = cv2.imread("image_path")
detection = face_detection(image, do_track=True)
#output : [{'box': (56, 16, 169, 167), 'track_id': 1, 'score': 0.8751850724220276}]
```
**Parameters**
- image_bgr(np.ndarray)
The input image in BGR format. This is typically read using OpenCV (cv2.imread).
- conf(float)
The confidence threshold for face detection. Detections with confidence scores below this value are ignored.
Default: 0.4.
- iou(float)
The IoU (Intersection over Union) threshold used for non-maximum suppression. Detections that overlap an already-accepted detection by more than this value are discarded.
Default: 0.4.
- do_track(bool)
A boolean flag indicating whether to enable tracking. If True, the function will use a tracker to assign unique IDs to detected faces across frames.
Default: False.
- tracker_config(str)
The configuration file for the tracker, used when do_track is set to True.
Default: "bytetrack.yaml".
<br>
### Face Extraction
The [ face_extraction ] function extracts a feature embedding vector from a given face image using the EdgeFace model. This embedding is a numerical representation of the face, which can be used for tasks like face recognition, clustering, or similarity comparison.
**Example Usage**
```python
import cv2
from safevision_face import face_extraction
image = cv2.imread("cropped_face_path")  # a cropped face image in BGR format
vec = face_extraction(image)
#output : a vector of 512 dimensions
```
**Parameters**
- image_bgr(np.ndarray)
The input image in BGR format. Typically, this is a cropped face image obtained from a face detection model.
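
A minimal sketch combining detection and extraction: crop each detected face and compute its embedding. The `(x1, y1, x2, y2)` box layout is an assumption based on the detection example above.

```python
import cv2
from safevision_face import face_detection, face_extraction

image = cv2.imread("image_path")
embeddings = []

for det in face_detection(image):
    x1, y1, x2, y2 = det["box"]          # assumed (x1, y1, x2, y2) layout
    face_crop = image[y1:y2, x1:x2]      # crop the detected face region
    embeddings.append(face_extraction(face_crop))  # 512-dimensional vector
```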
<br>
### Database Init
The [ db_set ] function initializes a connection to a Milvus database and creates a collection for storing vector data if it does not already exist. Milvus is a vector database commonly used for managing embeddings for similarity search and machine learning tasks.
**Example Usage**
```python
from safevision_face import db_set
client = db_set(
    db_path="your_db_path/db_name.db",
    collection_name="your_collection_name",
    dimension=512,
    metric_type="COSINE"
)
#output : a MilvusClient instance connected to the database
```
**Parameters**
- db_path(str)
The URI of the Milvus database.
- collection_name(str)
The name of the collection in the Milvus database.
- dimension(int)
The dimensionality of the vectors to be stored in the collection. This should match the dimension of the embeddings being used.
Default: 512
- metric_type(str)
The distance metric used for similarity searches in the collection.
Default: COSINE
- auto_id(bool)
Whether to enable automatic ID generation for records in the collection.
Default: True
- enable_dynamic_field(bool)
Whether to allow dynamic fields in the collection. Dynamic fields let you store non-fixed schema attributes.
Default: True
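
For reference, a sketch that spells out all of the documented parameters (the path and collection name are placeholders):

```python
from safevision_face import db_set

client = db_set(
    db_path="your_db_path/db_name.db",   # local Milvus database URI
    collection_name="your_collection_name",
    dimension=512,                        # must match the embedding size
    metric_type="COSINE",                 # similarity metric used for searches
    auto_id=True,                         # let Milvus generate record IDs
    enable_dynamic_field=True,            # allow extra, non-schema fields
)
```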
<br>
### Database Insert
The [ db_insert ] function adds a record to a collection in a Milvus database. The record contains both a vector (embedding) and associated metadata, enabling vector-based similarity searches while preserving contextual information about the stored data.

**Example Usage**
```python
from safevision_face import db_insert
db_insert(
    client,
    collection_name="your_collection_name",
    vector=embedding_vector,
    orig_path="/some/orig_path.jpg",
    crop_path="/some/crop_path.jpg",
    timestamp="20250101_120000",
    tracking_id=123,
    cam_id="cam_number"
)
#output : True (the record was inserted successfully)
```
**Parameters**
- client(MilvusClient)
An instance of the MilvusClient connected to the database.
- collection_name(str)
The name of the collection in which the data will be inserted.
- vector(np.ndarray)
A vector (embedding) to be stored in the database. This represents the numerical representation of data, such as facial embeddings for similarity search.
- orig_path(str)
The file path of the original image associated with the vector.
- crop_path(str)
The file path of the cropped image associated with the vector.
- timestamp(str)
A timestamp indicating when the data was generated or captured.
- tracking_id(int)
An ID used for tracking individuals or objects across frames or locations.
- cam_id(str)
The identifier of the camera or device from which the data was captured.
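
A sketch of a simple enrollment pipeline built from the functions above: detect a face, extract its embedding, and insert it with metadata. The `(x1, y1, x2, y2)` box layout, the file paths, and the camera ID are illustrative assumptions.

```python
import cv2
from safevision_face import face_detection, face_extraction, db_set, db_insert

client = db_set(db_path="faces.db", collection_name="face_collection")

image = cv2.imread("/some/orig_path.jpg")
for det in face_detection(image, do_track=True):
    x1, y1, x2, y2 = det["box"]
    crop = image[y1:y2, x1:x2]
    cv2.imwrite("/some/crop_path.jpg", crop)

    db_insert(
        client,
        collection_name="face_collection",
        vector=face_extraction(crop),
        orig_path="/some/orig_path.jpg",
        crop_path="/some/crop_path.jpg",
        timestamp="20250101_120000",
        tracking_id=det.get("track_id", 0),  # present when do_track=True
        cam_id="cam102",
    )
```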
<br>
### Database Search
The [ db_search ] function performs a similarity search in a Milvus collection by comparing a query image's embedding (vector) against stored vectors. The results include the top matches that meet a specified similarity threshold.

**Example Usage**
```python
from safevision_face import db_search
results = db_search(
    client,
    collection_name="face_collection",
    query_image=image_bgr,
    top_k=1,
    threshold=0.4,
    extractor_func=face_extraction
)
#output : [{'score': 0.7510387301445007, 'entity': {'orig_path': '/some/orig_path.jpg', 'crop_path': '/some/crop_path.jpg', 'timestamp': '20250101_120000', 'tracking_id': 123, 'cam_id': 'cam102'}}]
```
**Parameters**
- client(MilvusClient)
An instance of the MilvusClient connected to the Milvus database, enabling search operations on a specific collection.
- collection_name(str)
The name of the collection in the Milvus database where the search will be performed.
- query_image(np.ndarray)
The image (in numpy array format) for which the similarity search is conducted. This image will be converted into an embedding using the provided extractor_func.
- top_k(int)
The maximum number of top matches to retrieve from the database.
Default: 5
- threshold(float)
The minimum similarity score for a match to be considered valid. Matches with a score below this value are filtered out.
Default: 0.4
- extractor_func(callable)
A function that extracts the vector (embedding) from the query image. It should take an image (numpy array) as input and return its embedding.
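
A short sketch of consuming the search results, assuming the result layout shown in the example output above (a `score` plus an `entity` dict of metadata); `client` and `image_bgr` come from the earlier steps.

```python
from safevision_face import db_search, face_extraction

results = db_search(
    client,
    collection_name="face_collection",
    query_image=image_bgr,
    top_k=5,
    threshold=0.4,
    extractor_func=face_extraction,
)

if not results:
    print("No match above the threshold")
for hit in results:
    entity = hit["entity"]
    print(f"score={hit['score']:.3f}  cam={entity['cam_id']}  "
          f"crop={entity['crop_path']}  time={entity['timestamp']}")
```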