# roboflex.realsense
Support for the RealSense D435 camera. We really just wrap the SDK and encode data into messages. It might work with other D4** cameras - YMMV.
## System Dependencies
Sort of: none. We use CMake to build. If `find_package(realsense2)` fails (meaning CMake can't find a system-installed version of realsense2), we fetch and build it.
## Install

    pip install roboflex.realsense
## Import

    import roboflex.realsense as rr
## Nodes
There is only one: **RealsenseSensor**
You can use a class method to discover the serial numbers of all the cameras currently connected to your computer:

    rr.RealsenseSensor.get_connected_device_serial_numbers()
There are two ways to instantiate realsense sensors:
1. Via the constructor - you must know the serial number. See below for documentation of the Config type.

        sensor = rr.RealsenseSensor(
            serial_number: str,
            config: rr.Config,
            name: str = "RealsenseSensor",
        )
2. Via a class method - if you have a single RealSense attached, this is the easiest way; you don't even have to know the serial number.

        sensor = rr.RealsenseSensor.get_one_sensor(
            config: rr.Config,
            name: str = "RealsenseSensor",
        )
Use it like so:

    # must be started!
    sensor.start()

    # You can get the serial number:
    sensor.serial_number -> str

    # You can get the configuration back:
    sensor.config -> rr.Config

    # You can get the two camera-k matrices:
    sensor.depth_camera_k -> np.array
    sensor.color_camera_k -> np.array

    # You can get various device values:
    sensor.width_pixels_color -> int
    sensor.height_pixels_color -> int
    sensor.width_pixels_depth -> int
    sensor.height_pixels_depth -> int
    sensor.fps_color -> int
    sensor.fps_depth -> int

    # You can manually trigger a message event.
    # (For the most part you won't need this - only if you
    # have some special need to control message production
    # and don't want to use "start".)
    sensor.produce()

    # You can turn the IR laser off and on:
    sensor.set_laser_on_off(False)
    sensor.set_laser_on_off(True)
## Messages
RealsenseSensor produces a single message type: **RealsenseFrameset**.

    from roboflex.realsense import RealsenseFrameset
API:

    # the timestamp just before reading from device
    message.t0 -> float

    # the timestamp just after reading from device
    message.t1 -> float

    # numpy array of shape=(height, width, 3), dtype=uint8
    message.rgb -> np.ndarray

    # numpy array of shape=(height, width), dtype=uint16
    message.depth -> np.ndarray

    # numpy array of shape=(height, width), dtype=uint8
    message.ir1 -> np.ndarray

    # numpy array of shape=(height, width), dtype=uint8
    message.ir2 -> np.ndarray

    # which camera (if any) the frame is aligned to
    message.aligned_to -> rr.CameraType

    # the serial number of the camera that produced this frameset
    message.serial_number -> str

    # the color camera k of the camera that produced this frameset
    message.camera_k_rgb -> np.ndarray of shape (3, 3)

    # the depth camera k of the camera that produced this frameset
    message.camera_k_depth -> np.ndarray of shape (3, 3)

    # the ir1 camera k of the camera that produced this frameset
    message.camera_k_ir1 -> np.ndarray of shape (3, 3)

    # the ir2 camera k of the camera that produced this frameset
    message.camera_k_ir2 -> np.ndarray of shape (3, 3)

    # the index, from the device, of this frameset
    message.frame_number -> int

    # the timestamp, from the device, of this frameset, in epoch seconds
    message.timestamp -> float
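
Each camera-k is a standard 3x3 pinhole intrinsics matrix, so you can back-project a depth pixel into a 3D point. Below is a minimal sketch using synthetic data: the intrinsics values and the 0.001 m-per-unit depth scale are assumptions for illustration - in practice, use `message.camera_k_depth` and your device's actual depth scale.

```python
import numpy as np

# Assumed depth scale: D4xx devices commonly report depth in units of
# 0.001 m, but query your device for the exact value.
DEPTH_SCALE = 0.001

# Hypothetical intrinsics, standing in for message.camera_k_depth.
K = np.array([
    [600.0,   0.0, 320.0],
    [  0.0, 600.0, 240.0],
    [  0.0,   0.0,   1.0],
])

def deproject(u, v, depth_units, K, depth_scale=DEPTH_SCALE):
    """Back-project pixel (u, v) with raw depth into a 3D point in meters,
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    z = depth_units * depth_scale
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    return np.array([x, y, z])

# A synthetic depth frame shaped like message.depth: uint16, (height, width)
depth = np.full((480, 640), 1000, dtype=np.uint16)  # 1000 units = 1.0 m

point = deproject(320, 240, depth[240, 320], K)
# the pixel at the principal point projects straight down the optical axis
```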
DYNOFLEX:

    d = DynoFlex.from_msg(message)

    # the timestamp just before reading from device
    d["t0"] -> float

    # the timestamp just after reading from device
    d["t1"] -> float

    # numpy array of shape=(height, width, 3), dtype=uint8
    d["rgb"] -> np.ndarray

    # numpy array of shape=(height, width), dtype=uint16
    d["depth"] -> np.ndarray

    # numpy array of shape=(height, width), dtype=uint8
    d["ir1"] -> np.ndarray

    # numpy array of shape=(height, width), dtype=uint8
    d["ir2"] -> np.ndarray

    # which camera (if any) the frame is aligned to
    d["aligned_to"] -> rr.CameraType

    # the serial number of the camera that produced this frameset
    d["serial_number"] -> str

    # the color camera k of the camera that produced this frameset
    d["camera_k_rgb"] -> np.ndarray of shape (3, 3)

    # the depth camera k of the camera that produced this frameset
    d["camera_k_depth"] -> np.ndarray of shape (3, 3)

    # the camera k of infrared camera 1 that produced this frameset
    d["camera_k_ir1"] -> np.ndarray of shape (3, 3)

    # the camera k of infrared camera 2 that produced this frameset
    d["camera_k_ir2"] -> np.ndarray of shape (3, 3)

    # the index, from the device, of this frameset
    d["n"] -> int

    # the timestamp, from the device, of this frameset, in epoch seconds
    d["t"] -> float
## Other
Some types used for configuration
Some enums:

    CameraType:
        RGB,
        DEPTH,
        IR1,  # raw infrared camera 1
        IR2   # raw infrared camera 2

    # You can OR these together like so:
    my_cameras = camera_type_or([CameraType.RGB, CameraType.DEPTH, ...])

    # You can test for camera types:
    has_rgb = camera_type_contains(my_camera_type, CameraType.RGB)
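
To illustrate the OR/contains semantics, here is a pure-Python stand-in built on `enum.IntFlag`. The class and helper names mirror the ones above, but the numeric flag values are assumptions for illustration, not the library's.

```python
from enum import IntFlag

class CameraType(IntFlag):
    """Stand-in for rr.CameraType; the values are assumed, not the SDK's."""
    RGB = 1
    DEPTH = 2
    IR1 = 4
    IR2 = 8

def camera_type_or(types):
    """Mimics camera_type_or: OR a list of camera types into one bitmask."""
    result = CameraType(0)
    for t in types:
        result |= t
    return result

def camera_type_contains(mask, t):
    """Mimics camera_type_contains: is camera type t set in the mask?"""
    return (mask & t) == t

my_cameras = camera_type_or([CameraType.RGB, CameraType.DEPTH])
assert camera_type_contains(my_cameras, CameraType.RGB)
assert not camera_type_contains(my_cameras, CameraType.IR1)
```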

    CameraAlignment:
        NONE,
        RGB,
        DEPTH

    D400VisualPreset:
        CUSTOM,
        DEFAULT,
        HAND,
        HIGH_ACCURACY,
        HIGH_DENSITY,
        MEDIUM_DENSITY
TemporalFilterParameters:

    f = rr.TemporalFilterParameters(
        alpha: float = 0.4,
        delta: float = 20.0,
        persistance_control: int = 7,
    )

    f.alpha -> float
    f.delta -> float
    f.persistance_control -> int
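
Roughly, a temporal filter blends each depth frame with the previous one where the change is small (below `delta` depth units) and passes large changes through as likely real motion. The numpy sketch below illustrates that EMA-style idea under those assumptions; it is not the SDK's exact algorithm, and persistence-control handling is omitted.

```python
import numpy as np

def temporal_filter_step(current, previous, alpha=0.4, delta=20.0):
    """One step of an exponential-moving-average depth smoother.
    Where |current - previous| < delta, output alpha * current +
    (1 - alpha) * previous; elsewhere pass current through unchanged.
    Illustration only - not the SDK's implementation."""
    current = current.astype(np.float32)
    previous = previous.astype(np.float32)
    small_change = np.abs(current - previous) < delta
    blended = alpha * current + (1.0 - alpha) * previous
    return np.where(small_change, blended, current)

prev = np.full((4, 4), 1000.0)   # stable surface at 1000 depth units
curr = prev + 10.0               # small jitter everywhere: gets smoothed
curr[0, 0] = 2000.0              # one large jump: passes through unfiltered

out = temporal_filter_step(curr, prev)
```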
Config:

    # all parameters are optional - defaults shown below
    c = rr.Config(
        camera_type: rr.CameraType = CameraType.RGB | CameraType.DEPTH,
        align_to: rr.CameraAlignment = rr.CameraAlignment.NONE,
        prioritize_ae: bool = False,
        rgb_settings: Dict[str, int] = {
            "fps": 0,
            "width": 0,
            "height": 0,
        },
        depth_settings: Dict[str, int] = {
            "fps": 0,
            "width": 0,
            "height": 0,
        },
        depth_visual_preset: rr.D400VisualPreset = rr.D400VisualPreset.DEFAULT,
        temporal_filter_parameters: rr.TemporalFilterParameters = None,
        hole_filling_mode: Optional[int] = None,
        decimation_filter: Optional[int] = None,
    )

    # which actual cameras are in use - this is a bitmask, OR-ed together
    c.camera_type -> rr.CameraType

    # which camera the frames will be aligned to (if any)
    c.align_to -> rr.CameraAlignment

    # When True, allows fps to drop in order to better expose
    # frames, such as in dimly lit environments
    c.prioritize_ae -> bool

    # fps, height, and width of the color camera
    c.rgb_settings -> Dict[str, int]

    # fps, height, and width of the depth camera
    c.depth_settings -> Dict[str, int]

    # the depth camera visual preset
    c.depth_visual_preset -> rr.D400VisualPreset

    # the temporal filter parameters, if any - can be None
    c.temporal_filter_parameters -> rr.TemporalFilterParameters

    # the 'hole filling mode', if any - can be None
    # 0: fill_from_left
    # 1: farthest_from_around
    # 2: nearest_from_around
    c.hole_filling_mode -> int
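
As an illustration of what mode 0 ("fill_from_left") does, here is a rough numpy sketch that replaces zero-depth holes with the nearest valid value to their left in the same row. The real filter runs inside librealsense; this just sketches the idea.

```python
import numpy as np

def fill_from_left(depth):
    """Replace zero-depth holes with the last valid value to their left.
    Holes before any valid pixel in a row stay zero. Illustration of
    hole_filling_mode 0, not the SDK's implementation."""
    out = depth.copy()
    rows, cols = out.shape
    for r in range(rows):
        last_valid = 0
        for c in range(cols):
            if out[r, c] == 0:
                out[r, c] = last_valid
            else:
                last_valid = out[r, c]
    return out

depth = np.array([[5, 0, 0, 7],
                  [0, 3, 0, 0]], dtype=np.uint16)
filled = fill_from_left(depth)
# row 0: holes take 5; row 1: leading hole stays 0, trailing holes take 3
```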
**NOTE:** The RealSense camera supports many more settings, such as laser power. If there's something you want supported that's not here, just let me know.