bimvee

- Name: bimvee
- Version: 1.0.22
- Home page: https://github.com/event-driven-robotics/bimvee
- Summary: Batch Import, Manipulation, Visualisation and Export of Events etc
- Upload time: 2024-04-19 09:20:35
- Author: Event-driven Perception for Robotics group at Istituto Italiano di Tecnologia: Simeon Bamford, Suman Ghosh, Aiko Dinale, Massimiliano Iacono, Ander Arriandiaga, etc
- License: GPL
- Keywords: event, event camera, event-based, event-driven, spike, dvs, dynamic vision sensor, neuromorphic, aer, address-event representation, spiking neural network, davis, atis, celex
# bimvee - Batch Import, Manipulation, Visualisation, and Export of Events etc.

<img src="https://raw.githubusercontent.com/event-driven-robotics/bimvee/master/images/events.png" width="300"/> <img src="https://raw.githubusercontent.com/event-driven-robotics/bimvee/master/images/frames.png" width="300"/>
<img src="https://raw.githubusercontent.com/event-driven-robotics/bimvee/master/images/imu.png" width="300"/>
<img src="https://raw.githubusercontent.com/event-driven-robotics/bimvee/master/images/pose.png" width="300"/>
<img src="https://raw.githubusercontent.com/event-driven-robotics/bimvee/master/images/dvslastts.png" width="300"/>
<img src="https://raw.githubusercontent.com/event-driven-robotics/bimvee/master/images/eventrate.png" width="300"/>

# Quickstart

## Installation

There is a pip installer:

    pip install bimvee

Important! If you clone this repo, use the --recurse-submodules option, as it includes the 'importRosbag' library as a submodule.
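
For example:

    git clone --recurse-submodules https://github.com/event-driven-robotics/bimvee.git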

## Usage

Look at [examples.py](https://github.com/event-driven-robotics/bimvee/blob/master/examples/examples.py) for examples of how to use the functionality in this library.

Want to play back your timestamped multi-channel data? Consider using https://github.com/event-driven-robotics/mustard

# Introduction

bimvee is for working with timestamped address-event data from event cameras (dvs),
and possibly other neuromorphic sensors, alongside other timestamped data
needed for experiments, including but not limited to:
- frame-based camera images
- IMU samples
- 6-DOF poses
- derived datatypes, such as optical flow events or labelled dvs events (dvsL)
- camera calibration info, imported from e.g. ros (cam)

File formats supported include:
- IIT YARP .log - ATIS Gen1 and IMU, also iCub skin
- rpg_dvs_ros - DVS/DAVIS .bag
- Third-party datasets recorded using the above rosbag importer (e.g. Penn MVSEC, UMD EvIMO, Intel RealSense, etc.)
- Vicon - as dumped by yarpDumper
- Samsung (SEC) Gen3 VGA .bin
- Universidad de Sevilla / PyNavis .aedat
- TU Graz .aer2
- INI .aedat (partial implementation - audio / DAS data only)
- Prophesee .raw .dat

Pull requests are welcome for importers or exporters of other file formats.

# Contents of library

## Import functions

The aim is to bring the different formats into as common a format as possible.
Each importer takes at least the parameter "filePathOrName" (otherwise it
works from the current directory) and returns a dict containing:

    {'info': {<filePathOrName, any other info derivable from file headers>},
     'data': {
         channel0: {}
         channel1: {}
         ...
     }}
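
A minimal usage sketch (the import path of the generic importer is an assumption here; see examples.py for canonical usage):

    # Import path is an assumption - see examples.py for canonical usage
    from bimvee.importAe import importAe

    container = importAe(filePathOrName='path/to/recording')
    print(container['info'])           # file path plus any header-derived info
    print(container['data'].keys())    # the available channels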

The 'data' branch contains a dict for each channel. A 'channel' is an arbitrary
grouping of datasets. It might be that there is one channel for each sensor,
so for example a file might contain 'left' and 'right'
camera channels, and each of these channels might contain dvs events alongside
other data types like frames.
Each channel is a dict containing one dict for each type of data.
Data types may include:
- dvs (Timestamped (ts) 2D address-events (x, y) with polarity (pol), from an event camera)
- frame
- imu
- flow
- pose
- etc

The dvs data type, for example, contains:

- "pol": numpy array of bool
- "x": numpy array of np.uint16
- "y": numpy array of np.uint16
- "ts": numpy array of float

Timestamps are always converted to seconds
(raw formats typically use ints, with unit increments of e.g. 80 ns for ATIS
or 1 us for DAVIS).

To the extent possible, dvs polarity is imported so that 1/True = ON/increase-in-light and
0/False = OFF/decrease-in-light. Be aware that individual datasets may contain the opposite convention.
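
If a particular dataset is known to use the opposite convention, the polarity array can simply be inverted; a sketch, assuming a channel named 'left' (channel names vary per file):

    # 'left' is a hypothetical channel name; pol is a numpy bool array
    dvs = container['data']['left']['dvs']
    dvs['pol'] = ~dvs['pol']  # after this, True = ON/increase-in-light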

Multiple files imported simultaneously appear in a list of dicts;
lists and dicts are referred to jointly as containers,
and the manipulation, visualisation and export functions which follow
tend toward accepting containers with an arbitrarily deep hierarchy.

## Visualisation functions

There is a set of general functions for common visualisations of imported datasets, using matplotlib or seaborn.

- plotDvsContrast
- plotDvsLastTs
- plotSpikeogram
- plotEventRate
- plotFrame
- plotImu
- plotPose
- plotCorrelogram
- plotFlow

These functions take several kwargs to modify their behaviour, and they support a 'callback' kwarg so you can pass in a function for post-modification of the plots.

There are two different visualisation concepts. In the 'continuous' concept, a single plot shows all timestamped data for a given container. This might be limited to a certain time range, as defined by the kwargs minTime and maxTime. Examples include (see the sketch after this list):
- plotEventRate
- plotImu
- plotPose
- plotSpikeogram
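
For example, restricting an event-rate plot to a time range (the import path is an assumption; minTime and maxTime are the kwargs described above):

    # Import path is an assumption - check the library layout
    from bimvee.plotEventRate import plotEventRate

    plotEventRate(container, minTime=1.0, maxTime=5.0)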

In the 'snapshot' concept, a representation is generated for a chosen moment in time. In the case of frames, this might be the nearest frame to the chosen time (the lookup is sketched after this list). In the case of dvs events, this might be an image composed of events recruited from around that moment, for which there is a concept of the time window. In the case of poses, this might be a projected view of the pose at the given time, linearly interpolated between the two nearest timestamped poses. Examples include:
- plotDvsContrastSingle
- plotDvsLastTs (in this case, the visualisation is based on all data up to the chosen time)
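
The nearest-frame lookup behind the frame snapshot can be illustrated directly with numpy (an illustration of the concept, not the library's internal code):

    import numpy as np

    def nearest_frame(frameDict, time):
        # frameDict follows the 'frame' datatype: sorted 'ts' plus a 'frame' list
        idx = np.searchsorted(frameDict['ts'], time)
        idx = int(np.clip(idx, 1, len(frameDict['ts']) - 1))
        # step back if the earlier neighbour is closer to the requested time
        if time - frameDict['ts'][idx - 1] < frameDict['ts'][idx] - time:
            idx -= 1
        return frameDict['frame'][idx]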

In the case of the snapshot views, there are general functions which, when passed a data container, will choose a set of times distributed throughout the time range of that data and generate a snapshot view for each of those moments. Examples include:
- plotDvsContrast
- plotFrame

'visualiser.py' defines a set of classes, one for each of a selection of data types, which generate snapshot views. These are output as numpy arrays, to be rendered by an external application.

info.py includes various functions to give quick text info about the contents of the containers that result from imports.

## Manipulation functions

There are some functions for standard manipulations of data:

- timestamps.py contains timestamp manipulations, including jointly zeroing timestamps across multiple files, channels, and datatypes (an illustrative sketch follows this list).
- split.py includes various common ways by which datasets need to be split, e.g. splitByPolarity.
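
Jointly zeroing the timestamps of one channel can be pictured as follows (an illustrative re-implementation over the dict structure described above, not the library function itself):

    def zero_timestamps(channel):
        # earliest timestamp across all datatypes in the channel
        tsMin = min(d['ts'].min() for d in channel.values() if 'ts' in d)
        # subtract it from every datatype, so they all share the same zero
        for d in channel.values():
            if 'ts' in d:
                d['ts'] = d['ts'] - tsMin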

## Export functions

exportIitYarp exports to IIT's EDPR YARP format. Alongside data.log and
info.log files, it writes an xml which specifies to yarpmanager how to
visualise the resulting data.
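
A sketch of a call (the output-path keyword is an assumption; check the function's signature):

    # exportFilePath is an assumed parameter name - check the docstring
    from bimvee.exportIitYarp import exportIitYarp

    exportIitYarp(container, exportFilePath='path/to/outputDir')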

# Dependencies

This library uses the importRosbag library to import rosbag data without needing a ros installation.
This is included as a submodule.

Beyond the python standard library, the main dependencies are:

- numpy
- tqdm (for progress bars during import and export functions)

For the 'plot' family of visualisation functions:

- matplotlib
- mpl_toolkits (only for certain 3d visualisations)
- seaborn

The "visualiser", however, generates graphics as numpy arrays
without reference to matplotlib, for rendering by an external application.

plotDvsLastTs uses rankdata from scipy; however, if scipy is not installed,
it falls back to a local definition, so scipy is an optional dependency.
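
That optional-dependency behaviour amounts to the usual guarded-import pattern (a sketch of the idea, not bimvee's exact code):

    import numpy as np

    try:
        from scipy.stats import rankdata
    except ImportError:
        def rankdata(a):
            # minimal local fallback: 1-based ordinal ranks
            order = np.argsort(a)
            ranks = np.empty(len(a), dtype=int)
            ranks[order] = np.arange(1, len(a) + 1)
            return ranks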

The undistortEvents function in events.py uses cv2 (OpenCV).

The import/export HDF5 functions use:

- hickle

# Type definitions

bimvee doesn't use classes for datatypes. Consequently, the code doesn't have a central place to refer to for the definition of datatypes. The types are intended to be used loosely, with minimal features which can be extended by adding optional fields. There is an optional container class which gives some functions for easier data manipulation.

There are some datatypes which are simply dicts that act as containers to group information, for example the 'cam' type. Most of the functionality of the library, however, is based around the idea of a datatype dict containing a set of keys, each of which is a numpy array (or other iterable). There is a 'ts' key, containing a numpy array of float timestamps, and each other iterable key should have the same number of elements (in the zeroth dimension) as the ts field; together these define a set of timestamped 'events' or other data points. Other keys may be included which either aren't iterables or don't have the same number of elements in the zeroth dimension; these are not interpreted as contributing dimensions to the set of data points. Concretely, the datatypes which have some kind of support are listed below (a minimal construction sketch follows the list):

- dvs
- frame
- sample
- imu
- pose6q
- point3
- flow
- skinSamples
- skinEvents
- ear
- cam
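
For instance, a minimal dvs dict can be built from plain numpy arrays, and the equal-length rule checked directly (a construction sketch, not a library API):

    import numpy as np

    dvs = {'ts':  np.array([0.01, 0.02, 0.05]),               # seconds, float
           'x':   np.array([10, 11, 12], dtype=np.uint16),
           'y':   np.array([20, 20, 21], dtype=np.uint16),
           'pol': np.array([True, False, True])}
    # every iterable field must match 'ts' in the zeroth dimension
    assert all(len(dvs[k]) == len(dvs['ts']) for k in ('x', 'y', 'pol'))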

Definitions of minimal and optional (*) fields follow, in the format:

- fieldName  dimensions  datatype (numpy array unless otherwise stated)  notes

## dvs:

- ts  n float
- x   n np.uint16
- y   n np.uint16 As the sensor outputs it; plot functions assume that y increases in the downward direction, following https://arxiv.org/pdf/1610.08336.pdf
- pol n bool To the extent possible, True means increase in light, False means decrease.
- dimX* 1 int
- dimY* 1 int

## frame:

- ts    n float
- frame n list (of np.uint8 np.arrays with 2 or 3 dimensions)

## sample:

- ts     n float
- sensor n np.uint8
- value  n int

## imu:

- ts   n   float
- acc  nx3 float accelerometer readings [x, y, z] in m/s^2
- angV nx3 float angular velocity readings [yaw, pitch, roll?] in rad/s
- mag  nx3 float magnetometer readings [x, y, z] in tesla
- temp n   float

## point3:

- ts    n   float
- point nx3 float row format is [x, y, z]


## pose6q (effectively extends point3):

- ts       n   float
- point    nx3 float row format is [x, y, z]
- rotation nx4 float row format is [rw, rx, ry, rz] where r(wxyz) define a quaternion

Note: quaternion order follows the convention of e.g. blender (wxyz) but not e.g. ros (xyzw).
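
Converting between the two orders is a single column roll (a numpy sketch):

    import numpy as np

    rotation_wxyz = np.array([[1.0, 0.0, 0.0, 0.0]])    # identity quaternion, [rw, rx, ry, rz]
    rotation_xyzw = np.roll(rotation_wxyz, -1, axis=1)  # ros order, [rx, ry, rz, rw]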

## flow: (per-pixel flow events)

- ts  n float
- x   n np.uint16
- y   n np.uint16
- vx  n np.uint16
- vy  n np.uint16

## skinEvents: (intended for iCub neuromorphic skin events; could be generalised)

- ts n float
- taxel n np.uint16
- bodyPart n np.uint8
- pol n bool

## skinSamples: (intended for dense iCub skin samples; could be generalised)

- ts n float
- pressure nxm float; m is the number of taxels concurrently sampled. Note: examples exist in the wild where the pressure values are raw 8-bit samples.

## ear: (intended for cochlea events from UDS / Gutierrez-Galan, could be generalised)

- ts n float
- freq n np.uint8
- pol n bool
- (There follow a number of model-specific fields which contribute to the full address: xsoType, auditoryModel, itdNeuronIds)

## cam:

Following the ros camera info msg, the fields this might contain include:

- height           1   int
- width            1   int
- distortion_model     string
- D                5   float distortion parameters
- K                3x3 float intrinsic camera matrix
- R                3x3 float rectification matrix
- P                3x4 float projection matrix
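
These fields plug straight into OpenCV; for example, undistorting pixel coordinates with K and D (a sketch assuming the plumb_bob distortion model; the identity values stand in for real calibration data):

    import numpy as np
    import cv2

    K = np.eye(3)                       # intrinsic matrix, from the cam dict
    D = np.zeros(5)                     # plumb_bob distortion coefficients
    pts = np.array([[[10.0, 20.0]]])    # shape (n, 1, 2), pixel coordinates
    undistorted = cv2.undistortPoints(pts, K, D, P=K)  # P=K maps back to pixels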



            
