| Field | Value |
| --- | --- |
| Name | goofi |
| Version | 2.1.7 |
| Summary | Real-time neuro-/biosignal processing and streaming pipeline. |
| Upload time | 2024-10-09 23:42:29 |
| Home page | None |
| Maintainer | None |
| Docs URL | None |
| Author | None |
| Requires Python | >=3.9 |
| License | MIT License Copyright (c) 2023 Philipp Thölke Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. |
| Keywords | signal-processing, neurofeedback, biofeedback, real-time, eeg, ecg |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
<p align="center">
<img src=https://github.com/PhilippThoelke/goofi-pipe/assets/36135990/60fb2ba9-4124-4ca4-96e2-ae450d55596d width="150">
</p>
<h1 align="center">goofi-pipe</h1>
<h3 align="center">Generative Organic Oscillation Feedback Isomorphism Pipeline</h3>
# Installation
If you only want to run goofi-pipe and not edit any of the code, make sure you have activated the desired Python environment (Python>=3.9) and run the following commands in your terminal:
```bash
pip install goofi # install goofi-pipe
goofi-pipe # start the application
```
> [!NOTE]
> On some platforms (specifically Linux and Mac) it might be necessary to install the `liblsl` package for some of goofi-pipe's features (everything related to LSL streams).
> Follow the instructions provided [here](https://github.com/sccn/liblsl?tab=readme-ov-file#getting-and-using-liblsl), or simply install it via
> ```bash
> conda install -c conda-forge liblsl
> ```
## Development
Follow these steps if you want to adapt the code of existing nodes, or create custom new nodes. In your terminal, make sure you have activated the desired Python environment (Python>=3.9) and that you are in the directory where you want to install goofi-pipe. Then, run the following commands:
```bash
git clone git@github.com:PhilippThoelke/goofi-pipe.git # download the repository
cd goofi-pipe # navigate into the repository
pip install -e . # install goofi-pipe in development mode
goofi-pipe # start the application to make sure the installation was successful
```
# Basic Usage
## Accessing the Node Menu
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/358a897f-3947-495e-849a-e6d7ebce2238" width="small">
</p>
To access the node menu, simply double-click anywhere within the application window or press the 'Tab' key. The node menu allows you to add various functionalities to your pipeline. Nodes are categorized for easy access, but if you're looking for something specific, the search bar at the top is a handy tool.
## Common Parameters and Metadata
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/23ba6df7-7f28-4505-acff-205e42e48dcb" alt="Common Parameters" width="small">
</p>
**Common Parameters**: All nodes within goofi have a set of common parameters. These settings dictate how the node operates within the pipeline.
- **AutoTrigger**: When enabled, this option allows the node to be triggered automatically. When disabled, the node is triggered only when it receives input.
- **Max_Frequency**: This sets the maximum rate at which the node performs its computations.
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/54604cfb-6611-4ce8-92b2-0b353584c5f5" alt="Metadata" width="small">
</p>
**Metadata**: This section conveys essential information passed between nodes. Each node output is accompanied by its metadata, providing clarity and consistency throughout the workflow.
Here are some conventional components present in the metadata (a hypothetical sketch follows the list):
- **Channel Dictionary**: A conventional representation of EEG channel names.
- **Sampling Frequency**: The rate at which data samples are measured. It's crucial for maintaining consistent data input and output across various nodes.
- **Shape of the Output**: Details the format and structure of the node's output.
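As a rough illustration, the metadata attached to an output might look something like the following Python dictionary. The field names used here are hypothetical, not goofi-pipe's exact schema:
```python
# Hypothetical sketch of the metadata travelling alongside a node's output array.
# Field names ("channels", "sfreq", "shape") are illustrative, not goofi-pipe's exact keys.
metadata = {
    "channels": {"dim0": ["Fz", "Cz", "Pz", "Oz"]},  # channel dictionary: names per axis
    "sfreq": 256.0,                                  # sampling frequency in Hz
    "shape": (4, 256),                               # shape of the output array
}

# A downstream node can interpret the raw array consistently from this, e.g.:
n_channels, n_samples = metadata["shape"]
print(f"{n_channels} channels, {n_samples / metadata['sfreq']:.2f} s of data")
```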
## Playing with Pre-recorded EEG Signal using LslStream
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/db340bd9-07af-470e-a791-f3c2dcf4935e" width="small">
</p>
This image showcases the process of utilizing a pre-recorded EEG signal through the `LslStream` node. It's crucial to ensure that the `Stream Name` parameter of the `LslStream` node matches the name of the stream it should receive; this ensures data integrity and accurate signal processing in real time.
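For testing, you can publish your own LSL stream for goofi-pipe to pick up. Below is a minimal sketch using the `pylsl` package; the stream name, channel count, and sampling rate are arbitrary choices for this example, not goofi-pipe defaults:
```python
# Minimal LSL outlet sketch: publishes a fake 8-channel "EEG" stream at 256 Hz.
# Requires the pylsl package and a working liblsl installation.
import time
import numpy as np
from pylsl import StreamInfo, StreamOutlet

info = StreamInfo(name="goofi-test", type="EEG", channel_count=8,
                  nominal_srate=256, channel_format="float32",
                  source_id="goofi-test-eeg")
outlet = StreamOutlet(info)

while True:
    outlet.push_sample(np.random.randn(8).tolist())  # one value per channel
    time.sleep(1 / 256)
```
Setting the receiving node's `Stream Name` to `goofi-test` (or whichever name you chose) should then let it find the stream.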
# Patch examples
## Basic Signal Processing Patch
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/52f85dd4-6395-4eb2-a347-6cf489d659da" width="medium">
</p>
This patch provides a demonstration of basic EEG signal processing using goofi-pipe (a rough offline equivalent is sketched at the end of this section).
1. **EegRecording**: This is the starting point where the EEG data originates.
2. **LslClient**: The `LslClient` node retrieves the EEG data from `EegRecording` and visualizes it as it streams in real time. By default, the multiple lines in the plot correspond to the different EEG channels.
3. **Buffer**: This node holds the buffered EEG data.
4. **Psd**: Power Spectral Density (PSD) is a technique to measure a signal's power content versus frequency. In this node, the raw EEG data is transformed to exhibit its power distribution across distinct frequency bands.
5. **Math**: This node is employed to execute mathematical operations on the data. In this context, it's rescaling the values to ensure a harmonious dynamic range between 0 and 1, which is ideal for image representation. The resultant data is then visualized as an image.
One of the user-friendly features of goofi-pipe is the capability to toggle between different visualizations. By 'Ctrl+clicking' on any plot within a node, you can effortlessly switch between a line plot and an image representation, offering flexibility in data analysis.
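For reference, here is a rough offline equivalent of this patch in plain NumPy/SciPy (buffer → PSD → rescale), assuming a `(channels, samples)` array at 256 Hz; goofi-pipe performs the same steps incrementally in real time:
```python
# Offline sketch of the Buffer -> Psd -> Math (rescale) chain described above.
import numpy as np
from scipy.signal import welch

fs = 256
eeg = np.random.randn(8, fs * 4)            # stand-in for 4 s of buffered EEG (8 channels)

freqs, psd = welch(eeg, fs=fs, nperseg=fs)  # power spectral density per channel

# Rescale to [0, 1] so the result displays well as an image (the Math node's role here).
psd_scaled = (psd - psd.min()) / (psd.max() - psd.min())
print(psd_scaled.shape)                     # (channels, frequency bins)
```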
## Sending Power Bands via Open Sound Control (OSC)
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/97576017-a737-47b9-aac6-bd0d00e0e7e9" width="medium">
</p>
Expanding on the basic patch, the advanced additions include:
- **Select**: Chooses specific EEG channels.
- **PowerBandEEG**: Computes EEG signal power across various frequency bands.
- **ExtendedTable**: Prepares data for transmission in a structured format.
- **OscOut**: Sends data using the Open Sound Control (OSC) protocol (see the sketch after this list).
These nodes elevate data processing and communication capabilities.
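Conceptually, the `OscOut` node does something like the following sketch, shown here with the `python-osc` package; the address patterns and port are illustrative, not goofi-pipe defaults:
```python
# Sending band-power values as OSC messages with python-osc.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)     # receiver's address and port

power_bands = {"delta": 0.31, "theta": 0.22, "alpha": 0.42, "beta": 0.18}
for band, value in power_bands.items():
    client.send_message(f"/eeg/{band}", value)  # one OSC message per band
```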
## Real-Time Connectivity and Spectrogram
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/7c63a869-d20a-4f41-99fe-eb0931cebdc9" width="medium">
</p>
This patch highlights:
- **Connectivity**: Analyzes relationships between EEG channels, offering selectable methods like `wPLI`, `coherence`, `PLI`, and more (a minimal `PLI` sketch follows below).
- **Spectrogram**: Created using the `PSD` node followed by a `Buffer`, it provides a time-resolved view of the EEG signal's frequency content.
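As an illustration of one of these measures, the sketch below computes the phase lag index (PLI) between two channels via the Hilbert transform; this is a generic implementation, not necessarily the one used inside the `Connectivity` node:
```python
# Minimal phase lag index (PLI) between two 1-D signals.
import numpy as np
from scipy.signal import hilbert

def pli(x, y):
    """PLI ranges from 0 (no consistent phase lag) to 1 (perfectly consistent lag)."""
    phase_diff = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.sign(np.sin(phase_diff))))

a, b = np.random.randn(2, 1024)  # stand-in for two EEG channels
print(pli(a, b))
```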
## Principal Component Analysis (PCA)
![PCA](https://github.com/PhilippThoelke/goofi-pipe/assets/36135990/d239eed8-4552-4256-9caf-d7c2fbb937e9)
Using PCA (Principal Component Analysis) allows us to reduce the dimensionality of raw EEG data while retaining most of the variance. We use the first three components and visualize their trajectory, allowing us to identify patterns in the data over time. The topographical maps show the contribution of each channel to the first four principal components (PCs).
## Realtime Classification
Leveraging the multimodal framework of goofi, state-of-the-art machine learning classifiers can be built on the fly to predict behavior from an array of different sources. Here's a brief walkthrough of three distinct examples:
### 1. EEG Signal Classification
![EEG Signal Classification](https://github.com/PhilippThoelke/goofi-pipe/assets/36135990/2da6b555-9f79-40c7-9bd8-1f863dcf4137)
This patch captures raw EEG signals using the `EEGrecording` and `LslStream` modules. The classifier module captures data from different states indicated by the user, using *n* features, which in this case are the 64 EEG channels. Some classifiers allow for visualization of feature importance; here we show a topomap of the distribution of feature importances on the scalp. The classifier outputs the probability of being in each of the states present in the training data. This prediction is smoothed using a buffer for less jittery results.
![Classifier parameters](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/da2a86e3-efc8-4088-8d52-fb8c528dfb87)
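Stripped of the goofi-pipe specifics, the underlying idea looks roughly like the sketch below; the choice of `LogisticRegression` and the shapes are illustrative, not what the `Classifier` node necessarily uses internally:
```python
# Fit a classifier on feature vectors labelled by state, then output state probabilities.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_features = 64                                   # e.g. 64 EEG channels
X_a = np.random.randn(200, n_features)            # samples recorded in state A
X_b = np.random.randn(200, n_features) + 0.5      # samples recorded in state B
X, y = np.vstack([X_a, X_b]), np.array([0] * 200 + [1] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)

new_sample = np.random.randn(1, n_features)       # one incoming feature vector
probs = clf.predict_proba(new_sample)[0]          # probability of each trained state
importances = np.abs(clf.coef_)[0]                # rough per-channel feature importance
print(probs.round(3), importances.argmax())
```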
### 2. Audio Input Classification
![Audio Input Classification](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/4e50b13e-185d-414e-a39d-f6d39dc3e57f)
The audio input stream captures real-time sound data, which can also be passed through a classifier. Different sonic states can be predicted in real time.
### 3. Video Input Classification
![Video Input Classification](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/e7988ae9-cd2c-4b9f-907a-f438fd52328b)
![image_classification2](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/77d33f2e-014f-4e3b-99fb-179f4bca1db0)
In this example, video frames are extracted using the `VideoStream` module. Similarly, prediction of labelled visual states can be achieved in real time. The images show how two states (being on the left or the right side of the image) can be detected using classification.
These patches demonstrate the versatility of our framework in handling various types of real-time data streams for classification tasks.
## Musical Features using Biotuner
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/b426ce44-bf23-4b88-a772-5d183dc36a93" width="medium">
</p>
This patch presents a pipeline for processing EEG data to extract musical features:
- Data flows from the EEG recording through several preprocessing nodes and culminates in the **Biotuner** node, which specializes in deriving musical attributes from the EEG.
- **Biotuner** Node: With its sophisticated algorithms, Biotuner pinpoints harmonic relationships, tension, peaks, and more, essential for music theory analysis.
<p align="center">
<img src="https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/042692ae-a558-48f2-9693-d09e33240373" width="medium">
</p>
Delving into the parameters of the Biotuner node:
- `N Peaks`: The number of spectral peaks to consider.
- `F Min` & `F Max`: Define the frequency range for analysis.
- `Precision`: Sets the precision in Hz for peak extraction.
- `Peaks Function`: Method used to compute the peaks, such as EMD, fixed band, or harmonic recurrence.
- `N Harm Subharm` & `N Harm Extended`: Configure the number of harmonics used in different computations.
- `Delta Lim`: Defines the maximal distance between two subharmonics to include in the subharmonic tension computation.
For a deeper understanding and advanced configurations, consult the [Biotuner repository](https://github.com/AntoineBellemare/biotuner).
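To make the role of `N Peaks`, `F Min`/`F Max`, and `Precision` more concrete, here is a generic peak-picking sketch using SciPy; it is not Biotuner's API, just an illustration of the concept:
```python
# Pick the strongest spectral peaks within a frequency range.
import numpy as np
from scipy.signal import welch, find_peaks

fs, f_min, f_max, n_peaks = 256, 2.0, 45.0, 5
eeg = np.random.randn(fs * 10)                 # stand-in for 10 s of one channel

freqs, psd = welch(eeg, fs=fs, nperseg=fs)     # nperseg=fs gives ~1 Hz resolution ("precision")
band = (freqs >= f_min) & (freqs <= f_max)     # restrict to F Min..F Max

idx, _ = find_peaks(psd[band])                 # local maxima within the band
strongest = idx[np.argsort(psd[band][idx])[::-1][:n_peaks]]
print(np.sort(freqs[band][strongest]))         # the N Peaks strongest peak frequencies
```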
# Data Types
To simplify understanding, we've associated specific shapes with data types at the inputs and outputs of nodes:
- **Circles**: Represent arrays.
- **Triangles**: Represent strings.
- **Squares**: Represent tables.
# Node Categories
<!-- AUTO-GENERATED NODE LIST -->
<!-- !!GOOFI_PIPE_NODE_LIST_START!! -->
## Analysis
Nodes that perform analysis on the data.
<details><summary>View Nodes</summary>
<details><summary> AudioTagging</summary>
- **Inputs:**
- audioIn: ARRAY
- **Outputs:**
- tags: STRING
- probabilities: ARRAY
- embedding: ARRAY
</details>
<details><summary> Avalanches</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- size: ARRAY
- duration: ARRAY
</details>
<details><summary> Binarize</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- bin_data: ARRAY
</details>
<details><summary> Bioelements</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- elements: TABLE
</details>
<details><summary> Bioplanets</summary>
- **Inputs:**
- peaks: ARRAY
- **Outputs:**
- planets: TABLE
- top_planets: STRING
</details>
<details><summary> Biorhythms</summary>
- **Inputs:**
- tuning: ARRAY
- **Outputs:**
- pulses: ARRAY
- steps: ARRAY
- offsets: ARRAY
</details>
<details><summary> Biotuner</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- harmsim: ARRAY
- tenney: ARRAY
- subharm_tension: ARRAY
- cons: ARRAY
- peaks_ratios_tuning: ARRAY
- harm_tuning: ARRAY
- peaks: ARRAY
- amps: ARRAY
- extended_peaks: ARRAY
- extended_amps: ARRAY
</details>
<details><summary> CardiacRespiration</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- cardiac: ARRAY
</details>
<details><summary> CardioRespiratoryVariability</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- MeanNN: ARRAY
- SDNN: ARRAY
- SDSD: ARRAY
- RMSSD: ARRAY
- pNN50: ARRAY
- LF: ARRAY
- HF: ARRAY
- LF/HF: ARRAY
- LZC: ARRAY
</details>
<details><summary> Classifier</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- probs: ARRAY
- feature_importances: ARRAY
</details>
<details><summary> Clustering</summary>
- **Inputs:**
- matrix: ARRAY
- **Outputs:**
- cluster_labels: ARRAY
- cluster_centers: ARRAY
</details>
<details><summary> Compass</summary>
- **Inputs:**
- north: ARRAY
- south: ARRAY
- east: ARRAY
- west: ARRAY
- **Outputs:**
- angle: ARRAY
</details>
<details><summary> Connectivity</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- matrix: ARRAY
</details>
<details><summary> Coord2loc</summary>
- **Inputs:**
- latitude: ARRAY
- longitude: ARRAY
- **Outputs:**
- coord_info: TABLE
</details>
<details><summary> Correlation</summary>
- **Inputs:**
- data1: ARRAY
- data2: ARRAY
- **Outputs:**
- pearson: ARRAY
</details>
<details><summary> DissonanceCurve</summary>
- **Inputs:**
- peaks: ARRAY
- amps: ARRAY
- **Outputs:**
- dissonance_curve: ARRAY
- tuning: ARRAY
- avg_dissonance: ARRAY
</details>
<details><summary> EigenDecomposition</summary>
- **Inputs:**
- matrix: ARRAY
- **Outputs:**
- eigenvalues: ARRAY
- eigenvectors: ARRAY
</details>
<details><summary> ERP</summary>
- **Inputs:**
- signal: ARRAY
- trigger: ARRAY
- **Outputs:**
- erp: ARRAY
</details>
<details><summary> FacialExpression</summary>
- **Inputs:**
- image: ARRAY
- **Outputs:**
- emotion_probabilities: ARRAY
- action_units: ARRAY
- main_emotion: STRING
</details>
<details><summary> Fractality</summary>
- **Inputs:**
- data_input: ARRAY
- **Outputs:**
- fractal_dimension: ARRAY
</details>
<details><summary> GraphMetrics</summary>
- **Inputs:**
- matrix: ARRAY
- **Outputs:**
- clustering_coefficient: ARRAY
- characteristic_path_length: ARRAY
- betweenness_centrality: ARRAY
- degree_centrality: ARRAY
- assortativity: ARRAY
- transitivity: ARRAY
</details>
<details><summary> HarmonicSpectrum</summary>
- **Inputs:**
- psd: ARRAY
- **Outputs:**
- harmonic_spectrum: ARRAY
- max_harmonicity: ARRAY
- avg_harmonicity: ARRAY
</details>
<details><summary> Img2Txt</summary>
- **Inputs:**
- image: ARRAY
- **Outputs:**
- generated_text: STRING
</details>
<details><summary> LempelZiv</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- lzc: ARRAY
</details>
<details><summary> PCA</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- principal_components: ARRAY
</details>
<details><summary> PoseEstimation</summary>
- **Inputs:**
- image: ARRAY
- **Outputs:**
- pose: ARRAY
</details>
<details><summary> PowerBand</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- power: ARRAY
</details>
<details><summary> PowerBandEEG</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- delta: ARRAY
- theta: ARRAY
- alpha: ARRAY
- lowbeta: ARRAY
- highbeta: ARRAY
- gamma: ARRAY
</details>
<details><summary> ProbabilityMatrix</summary>
- **Inputs:**
- input_data: ARRAY
- **Outputs:**
- data: ARRAY
</details>
<details><summary> SpectroMorphology</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- spectro: ARRAY
</details>
<details><summary> SpeechSynthesis</summary>
- **Inputs:**
- text: STRING
- voice: ARRAY
- **Outputs:**
- speech: ARRAY
- transcript: STRING
</details>
<details><summary> TransitionalHarmony</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- trans_harm: ARRAY
- melody: ARRAY
</details>
<details><summary> TuningColors</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- hue: ARRAY
- saturation: ARRAY
- value: ARRAY
- color_names: STRING
</details>
<details><summary> TuningMatrix</summary>
- **Inputs:**
- tuning: ARRAY
- **Outputs:**
- matrix: ARRAY
- metric_per_step: ARRAY
- metric: ARRAY
</details>
<details><summary> TuningReduction</summary>
- **Inputs:**
- tuning: ARRAY
- **Outputs:**
- reduced: ARRAY
</details>
<details><summary> VAMP</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- comps: ARRAY
</details>
<details><summary> VocalExpression</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- prosody_label: STRING
- burst_label: STRING
- prosody_score: ARRAY
- burst_score: ARRAY
</details>
</details>
## Array
Nodes implementing array operations.
<details><summary>View Nodes</summary>
<details><summary> Clip</summary>
- **Inputs:**
- array: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Join</summary>
- **Inputs:**
- a: ARRAY
- b: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Math</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Operation</summary>
- **Inputs:**
- a: ARRAY
- b: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Reduce</summary>
- **Inputs:**
- array: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Reshape</summary>
- **Inputs:**
- array: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Select</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Transpose</summary>
- **Inputs:**
- array: ARRAY
- **Outputs:**
- out: ARRAY
</details>
</details>
## Inputs
Nodes that provide data to the pipeline.
<details><summary>View Nodes</summary>
<details><summary> Audiocraft</summary>
- **Inputs:**
- prompt: STRING
- **Outputs:**
- wav: ARRAY
</details>
<details><summary> AudioStream</summary>
- **Inputs:**
- **Outputs:**
- out: ARRAY
</details>
<details><summary> ConstantArray</summary>
- **Inputs:**
- **Outputs:**
- out: ARRAY
</details>
<details><summary> ConstantString</summary>
- **Inputs:**
- **Outputs:**
- out: STRING
</details>
<details><summary> EEGRecording</summary>
- **Inputs:**
- **Outputs:**
</details>
<details><summary> ExtendedTable</summary>
- **Inputs:**
- base: TABLE
- array_input1: ARRAY
- array_input2: ARRAY
- array_input3: ARRAY
- array_input4: ARRAY
- array_input5: ARRAY
- string_input1: STRING
- string_input2: STRING
- string_input3: STRING
- string_input4: STRING
- string_input5: STRING
- **Outputs:**
- table: TABLE
</details>
<details><summary> FractalImage</summary>
- **Inputs:**
- complexity: ARRAY
- **Outputs:**
- image: ARRAY
</details>
<details><summary> ImageGeneration</summary>
- **Inputs:**
- prompt: STRING
- negative_prompt: STRING
- base_image: ARRAY
- **Outputs:**
- img: ARRAY
</details>
<details><summary> Kuramoto</summary>
- **Inputs:**
- initial_phases: ARRAY
- **Outputs:**
- phases: ARRAY
- coupling: ARRAY
- order_parameter: ARRAY
- waveforms: ARRAY
</details>
<details><summary> LoadFile</summary>
- **Inputs:**
- **Outputs:**
- data_output: ARRAY
</details>
<details><summary> LSLClient</summary>
- **Inputs:**
- **Outputs:**
- out: ARRAY
</details>
<details><summary> MeteoMedia</summary>
- **Inputs:**
- latitude: ARRAY
- longitude: ARRAY
- location_name: STRING
- **Outputs:**
- weather_data_table: TABLE
</details>
<details><summary> OSCIn</summary>
- **Inputs:**
- **Outputs:**
- message: TABLE
</details>
<details><summary> PromptBook</summary>
- **Inputs:**
- input_prompt: STRING
- **Outputs:**
- out: STRING
</details>
<details><summary> Reservoir</summary>
- **Inputs:**
- connectivity: ARRAY
- **Outputs:**
- data: ARRAY
</details>
<details><summary> SerialStream</summary>
- **Inputs:**
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Sine</summary>
- **Inputs:**
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Table</summary>
- **Inputs:**
- base: TABLE
- new_entry: ARRAY
- **Outputs:**
- table: TABLE
</details>
<details><summary> TextGeneration</summary>
- **Inputs:**
- prompt: STRING
- **Outputs:**
- generated_text: STRING
</details>
<details><summary> VideoStream</summary>
- **Inputs:**
- **Outputs:**
- frame: ARRAY
</details>
<details><summary> ZeroMQIn</summary>
- **Inputs:**
- **Outputs:**
- data: ARRAY
</details>
</details>
## Misc
Miscellaneous nodes that do not fit into other categories.
<details><summary>View Nodes</summary>
<details><summary> AppendTables</summary>
- **Inputs:**
- table1: TABLE
- table2: TABLE
- **Outputs:**
- output_table: TABLE
</details>
<details><summary> ColorEnhancer</summary>
- **Inputs:**
- image: ARRAY
- **Outputs:**
- enhanced_image: ARRAY
</details>
<details><summary> EdgeDetector</summary>
- **Inputs:**
- image: ARRAY
- **Outputs:**
- edges: ARRAY
</details>
<details><summary> FormatString</summary>
- **Inputs:**
- input_string_1: STRING
- input_string_2: STRING
- input_string_3: STRING
- input_string_4: STRING
- input_string_5: STRING
- input_string_6: STRING
- input_string_7: STRING
- input_string_8: STRING
- input_string_9: STRING
- input_string_10: STRING
- **Outputs:**
- output_string: STRING
</details>
<details><summary> HSVtoRGB</summary>
- **Inputs:**
- hsv_image: ARRAY
- **Outputs:**
- rgb_image: ARRAY
</details>
<details><summary> JoinString</summary>
- **Inputs:**
- string1: STRING
- string2: STRING
- string3: STRING
- string4: STRING
- string5: STRING
- **Outputs:**
- output: STRING
</details>
<details><summary> RGBtoHSV</summary>
- **Inputs:**
- rgb_image: ARRAY
- **Outputs:**
- hsv_image: ARRAY
</details>
<details><summary> SetMeta</summary>
- **Inputs:**
- array: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> StringAwait</summary>
- **Inputs:**
- message: STRING
- trigger: ARRAY
- **Outputs:**
- out: STRING
</details>
<details><summary> TableSelectArray</summary>
- **Inputs:**
- input_table: TABLE
- **Outputs:**
- output_array: ARRAY
</details>
<details><summary> TableSelectString</summary>
- **Inputs:**
- input_table: TABLE
- **Outputs:**
- output_string: STRING
</details>
</details>
## Outputs
Nodes that send data to external systems.
<details><summary>View Nodes</summary>
<details><summary> AudioOut</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- finished: ARRAY
</details>
<details><summary> LSLOut</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
</details>
<details><summary> MidiCCout</summary>
- **Inputs:**
- cc1: ARRAY
- cc2: ARRAY
- cc3: ARRAY
- cc4: ARRAY
- cc5: ARRAY
- **Outputs:**
- midi_status: STRING
</details>
<details><summary> MidiOut</summary>
- **Inputs:**
- note: ARRAY
- velocity: ARRAY
- duration: ARRAY
- **Outputs:**
- midi_status: STRING
</details>
<details><summary> OSCOut</summary>
- **Inputs:**
- data: TABLE
- **Outputs:**
</details>
<details><summary> SharedMemOut</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
</details>
<details><summary> WriteCsv</summary>
- **Inputs:**
- table_input: TABLE
- **Outputs:**
</details>
<details><summary> ZeroMQOut</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
</details>
</details>
## Signal
Nodes implementing signal processing operations.
<details><summary>View Nodes</summary>
<details><summary> Buffer</summary>
- **Inputs:**
- val: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Cycle</summary>
- **Inputs:**
- signal: ARRAY
- **Outputs:**
- cycle: ARRAY
</details>
<details><summary> EMD</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- IMFs: ARRAY
</details>
<details><summary> FFT</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- mag: ARRAY
- phase: ARRAY
</details>
<details><summary> Filter</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- filtered_data: ARRAY
</details>
<details><summary> FOOOFaperiodic</summary>
- **Inputs:**
- psd_data: ARRAY
- **Outputs:**
- offset: ARRAY
- exponent: ARRAY
- cf_peaks: ARRAY
- cleaned_psd: ARRAY
</details>
<details><summary> FrequencyShift</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> Hilbert</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- inst_amplitude: ARRAY
- inst_phase: ARRAY
- inst_frequency: ARRAY
</details>
<details><summary> IFFT</summary>
- **Inputs:**
- spectrum: ARRAY
- phase: ARRAY
- **Outputs:**
- reconstructed: ARRAY
</details>
<details><summary> PSD</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- psd: ARRAY
</details>
<details><summary> Recurrence</summary>
- **Inputs:**
- input_array: ARRAY
- **Outputs:**
- recurrence_matrix: ARRAY
- RR: ARRAY
- DET: ARRAY
- LAM: ARRAY
</details>
<details><summary> Resample</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> ResampleJoint</summary>
- **Inputs:**
- data1: ARRAY
- data2: ARRAY
- **Outputs:**
- out1: ARRAY
- out2: ARRAY
</details>
<details><summary> Smooth</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- out: ARRAY
</details>
<details><summary> StaticBaseline</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- normalized: ARRAY
</details>
<details><summary> Threshold</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- thresholded: ARRAY
</details>
<details><summary> TimeDelayEmbedding</summary>
- **Inputs:**
- input_array: ARRAY
- **Outputs:**
- embedded_array: ARRAY
</details>
<details><summary> WelfordsZTransform</summary>
- **Inputs:**
- data: ARRAY
- **Outputs:**
- normalized: ARRAY
</details>
</details>
<!-- !!GOOFI_PIPE_NODE_LIST_END!! -->
# Raw data
{
"_id": null,
"home_page": null,
"name": "goofi",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.9",
"maintainer_email": null,
"keywords": "signal-processing, neurofeedback, biofeedback, real-time, EEG, ECG",
"author": null,
"author_email": null,
"download_url": "https://files.pythonhosted.org/packages/9f/bc/a3ad4cb470e9b0ba5ebab1343441aa5c2a9637708c9577a8f4d40830b220/goofi-2.1.7.tar.gz",
"platform": null,
"description": "<p align=\"center\">\n<img src=https://github.com/PhilippThoelke/goofi-pipe/assets/36135990/60fb2ba9-4124-4ca4-96e2-ae450d55596d width=\"150\">\n</p>\n\n<h1 align=\"center\">goofi-pipe</h1>\n<h3 align=\"center\">Generative Organic Oscillation Feedback Isomorphism Pipeline</h3>\n\n# Installation\nIf you only want to run goofi-pipe and not edit any of the code, make sure you activated the desired Python environment with Python>=3.9 and run the following commands in your terminal:\n```bash\npip install goofi # install goofi-pipe\ngoofi-pipe # start the application\n```\n\n> [!NOTE]\n> On some platforms (specifically Linux and Mac) it might be necessary to install the `liblsl` package for some of goofi-pipe's features (everything related to LSL streams).\n> Follow the instructions provided [here](https://github.com/sccn/liblsl?tab=readme-ov-file#getting-and-using-liblsl), or simply install it via\n> ```bash\n> conda install -c conda-forge liblsl\n> ```\n\n## Development\nFollow these steps if you want to adapt the code of existing nodes, or create custom new nodes. In your terminal, make sure you activated the desired Python environment with Python>=3.9, and that you are in the directory where you want to install goofi-pipe. Then, run the following commands:\n```bash\ngit clone git@github.com:PhilippThoelke/goofi-pipe.git # download the repository\ncd goofi-pipe # navigate into the repository\npip install -e . # install goofi-pipe in development mode\ngoofi-pipe # start the application to make sure the installation was successful\n```\n\n# Basic Usage\n\n## Accessing the Node Menu\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/358a897f-3947-495e-849a-e6d7ebce2238\" width=\"small\">\n</p>\n\nTo access the node menu, simply double-click anywhere within the application window or press the 'Tab' key. The node menu allows you to add various functionalities to your pipeline. Nodes are categorized for easy access, but if you're looking for something specific, the search bar at the top is a handy tool.\n\n## Common Parameters and Metadata\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/23ba6df7-7f28-4505-acff-205e42e48dcb\" alt=\"Common Parameters\" width=\"small\">\n</p>\n\n**Common Parameters**: All nodes within goofi have a set of common parameters. These settings consistently dictate how the node operates within the pipeline.\n\n- **AutoTrigger**: This option, when enabled, allows the node to be triggered automatically. When disabled,\nthe node is triggered when it receives input.\n \n- **Max_Frequency**: This denotes the maximum rate at which computations are set for the node.\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/54604cfb-6611-4ce8-92b2-0b353584c5f5\" alt=\"Metadata\" width=\"small\">\n</p>\n\n**Metadata**: This section conveys essential information passed between nodes. Each output node will be accompanied by its metadata, providing clarity and consistency throughout the workflow.\n\nHere are some conventional components present in the metadata\n\n- **Channel Dictionary**: A conventional representation of EEG channels names.\n \n- **Sampling Frequency**: The rate at which data samples are measured. 
It's crucial for maintaining consistent data input and output across various nodes.\n\n- **Shape of the Output**: Details the format and structure of the node's output.\n\n\n## Playing with Pre-recorded EEG Signal using LslStream\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/db340bd9-07af-470e-a791-f3c2dcf4935e\" width=\"small\">\n</p>\n\nThis image showcases the process of utilizing a pre-recorded EEG signal through the `LslStream` node. It's crucial to ensure that the `Stream Name` in the `LslStream` node matches the stream name in the node receiving the data. This ensures data integrity and accurate signal processing in real-time.\n\n# Patch examples\n\n## Basic Signal Processing Patch\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/52f85dd4-6395-4eb2-a347-6cf489d659da\" width=\"medium\">\n</p>\n\nThis patch provides a demonstration of basic EEG signal processing using goofi-pipe.\n\n1. **EegRecording**: This is the starting point where the EEG data originates. \n\n2. **LslClient**: The `LslClient` node retrieves the EEG data from `EegRecording`. Here, the visual representation of the EEG data being streamed in real-time is depicted. By default, the multiple lines in the plot correspond to the different EEG channels.\n\n3. **Buffer**: This node holds the buffered EEG data.\n\n4. **Psd**: Power Spectral Density (PSD) is a technique to measure a signal's power content versus frequency. In this node, the raw EEG data is transformed to exhibit its power distribution across distinct frequency bands.\n\n5. **Math**: This node is employed to execute mathematical operations on the data. In this context, it's rescaling the values to ensure a harmonious dynamic range between 0 and 1, which is ideal for image representation. The resultant data is then visualized as an image.\n\nOne of the user-friendly features of goofi-pipe is the capability to toggle between different visualizations. 
By 'Ctrl+clicking' on any plot within a node, you can effortlessly switch between a line plot and an image representation, offering flexibility in data analysis.\n\n## Sending Power Bands via Open Sound Control (OSC)\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/97576017-a737-47b9-aac6-bd0d00e0e7e9\" width=\"medium\">\n</p>\n\nExpanding on the basic patch, the advanced additions include:\n\n- **Select**: Chooses specific EEG channels.\n- **PowerBandEEG**: Computes EEG signal power across various frequency bands.\n- **ExtendedTable**: Prepares data for transmission in a structured format.\n- **OscOut**: Sends data using the Open-Sound-Control (OSC) protocol.\n\nThese nodes elevate data processing and communication capabilities.\n\n## Real-Time Connectivity and Spectrogram\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/7c63a869-d20a-4f41-99fe-eb0931cebdc9\" width=\"medium\">\n</p>\n\nThis patch highlights:\n\n- **Connectivity**: Analyzes relationships between EEG channels, offering selectable methods like `wPLI`, `coherence`, `PLI`, and more.\n\n- **Spectrogram**: Created using the `PSD` node followed by a `Buffer`, it provides a time-resolved view of the EEG signal's frequency content.\n\n## Principal Component Analysis (PCA)\n![PCA](https://github.com/PhilippThoelke/goofi-pipe/assets/36135990/d239eed8-4552-4256-9caf-d7c2fbb937e9)\n\nUsing PCA (Principal Component Analysis) allows us to reduce the dimensionality of raw EEG data, while retaining most of the variance. We use the first three components and visualize their trajectory, allowing us to identify patterns in the data over time. The topographical maps show the contrbution of each channel to the first four principal components (PCs).\n\n## Realtime Classification\n\nleverage the multimodal framework of goofi, state-of-the-art machine learning classifiers can be built on-the-fly to predict behavior from an array of different sources. Here's a brief walkthrough of three distinct examples:\n\n### 1. EEG Signal Classification\n![EEG Signal Classification](https://github.com/PhilippThoelke/goofi-pipe/assets/36135990/2da6b555-9f79-40c7-9bd8-1f863dcf4137)\nThis patch captures raw EEG signals using the `EEGrecording` and `LslStream`module. The classifier module allows\nto capture data from different states indicated by the user from *n* features, which in the present case are the 64 EEG channels. Some classifiers allow for visualization of feature importance. Here we show a topomap of the distribution of features importances on the scalp. The classifier outputs probability of being in each of the states in the training data. This prediction is smoothed using a buffer for less jiterry results. \n![Classifier parameters](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/da2a86e3-efc8-4088-8d52-fb8c528dfb87)\n\n### 2. Audio Input Classification\n![Audio Input Classification](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/4e50b13e-185d-414e-a39d-f6d39dc3e57f)\nThe audio input stream captures real-time sound data, which can also be passed through a classifier. Different sonic states can be predicted in realtime.\n\n### 3. 
Video Input Classification\n![Video Input Classification](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/e7988ae9-cd2c-4b9f-907a-f438fd52328b)\n![image_classification2](https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/77d33f2e-014f-4e3b-99fb-179f4bca1db0)\nIn this example, video frames are extracted using the `VideoStream` module. Similarly, prediction of labelled visual states can be achieved in realtime.\nThe images show how two states (being on the left or the right side of the image) can be detected using classification\n\nThese patches demonstrate the versatility of our framework in handling various types of real-time data streams for classification tasks.\n\n## Musical Features using Biotuner\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/b426ce44-bf23-4b88-a772-5d183dc36a93\" width=\"medium\">\n</p>\n\nThis patch presents a pipeline for processing EEG data to extract musical features:\n\n- Data flows from the EEG recording through several preprocessing nodes and culminates in the **Biotuner** node, which specializes in deriving musical attributes from the EEG.\n\n- **Biotuner** Node: With its sophisticated algorithms, Biotuner pinpoints harmonic relationships, tension, peaks, and more, essential for music theory analysis.\n\n<p align=\"center\">\n<img src=\"https://github.com/PhilippThoelke/goofi-pipe/assets/49297774/042692ae-a558-48f2-9693-d09e33240373\" width=\"medium\">\n</p>\n\nDelving into the parameters of the Biotuner node:\n\n- `N Peaks`: The number of spectral peaks to consider.\n- `F Min` & `F Max`: Defines the frequency range for analysis.\n- `Precision`: Sets the precision in Hz for peak extraction.\n- `Peaks Function`: Method to compute the peaks, like EMD, fixed band, or harmonic recurrence.\n- `N Harm Subharm` & `N Harm Extended`: Configures number of harmonics used in different computations.\n- `Delta Lim`: Defines the maximal distance between two subharmonics to include in subharmonic tension computation.\n\nFor a deeper understanding and advanced configurations, consult the [Biotuner repository](https://github.com/AntoineBellemare/biotuner).\n\n\n# Data Types\n\nTo simplify understanding, we've associated specific shapes with data types at the inputs and outputs of nodes:\n\n- **Circles**: Represent arrays.\n- **Triangles**: Represent strings.\n- **Squares**: Represent tables.\n\n\n# Node Categories\n\n<!-- AUTO-GENERATED NODE LIST -->\n<!-- !!GOOFI_PIPE_NODE_LIST_START!! 
-->\n## Analysis\n\nNodes that perform analysis on the data.\n\n<details><summary>View Nodes</summary>\n\n<details><summary> AudioTagging</summary>\n\n - **Inputs:**\n - audioIn: ARRAY\n - **Outputs:**\n - tags: STRING\n - probabilities: ARRAY\n - embedding: ARRAY\n </details>\n\n<details><summary> Avalanches</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - size: ARRAY\n - duration: ARRAY\n </details>\n\n<details><summary> Binarize</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - bin_data: ARRAY\n </details>\n\n<details><summary> Bioelements</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - elements: TABLE\n </details>\n\n<details><summary> Bioplanets</summary>\n\n - **Inputs:**\n - peaks: ARRAY\n - **Outputs:**\n - planets: TABLE\n - top_planets: STRING\n </details>\n\n<details><summary> Biorhythms</summary>\n\n - **Inputs:**\n - tuning: ARRAY\n - **Outputs:**\n - pulses: ARRAY\n - steps: ARRAY\n - offsets: ARRAY\n </details>\n\n<details><summary> Biotuner</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - harmsim: ARRAY\n - tenney: ARRAY\n - subharm_tension: ARRAY\n - cons: ARRAY\n - peaks_ratios_tuning: ARRAY\n - harm_tuning: ARRAY\n - peaks: ARRAY\n - amps: ARRAY\n - extended_peaks: ARRAY\n - extended_amps: ARRAY\n </details>\n\n<details><summary> CardiacRespiration</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - cardiac: ARRAY\n </details>\n\n<details><summary> CardioRespiratoryVariability</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - MeanNN: ARRAY\n - SDNN: ARRAY\n - SDSD: ARRAY\n - RMSSD: ARRAY\n - pNN50: ARRAY\n - LF: ARRAY\n - HF: ARRAY\n - LF/HF: ARRAY\n - LZC: ARRAY\n </details>\n\n<details><summary> Classifier</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - probs: ARRAY\n - feature_importances: ARRAY\n </details>\n\n<details><summary> Clustering</summary>\n\n - **Inputs:**\n - matrix: ARRAY\n - **Outputs:**\n - cluster_labels: ARRAY\n - cluster_centers: ARRAY\n </details>\n\n<details><summary> Compass</summary>\n\n - **Inputs:**\n - north: ARRAY\n - south: ARRAY\n - east: ARRAY\n - west: ARRAY\n - **Outputs:**\n - angle: ARRAY\n </details>\n\n<details><summary> Connectivity</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - matrix: ARRAY\n </details>\n\n<details><summary> Coord2loc</summary>\n\n - **Inputs:**\n - latitude: ARRAY\n - longitude: ARRAY\n - **Outputs:**\n - coord_info: TABLE\n </details>\n\n<details><summary> Correlation</summary>\n\n - **Inputs:**\n - data1: ARRAY\n - data2: ARRAY\n - **Outputs:**\n - pearson: ARRAY\n </details>\n\n<details><summary> DissonanceCurve</summary>\n\n - **Inputs:**\n - peaks: ARRAY\n - amps: ARRAY\n - **Outputs:**\n - dissonance_curve: ARRAY\n - tuning: ARRAY\n - avg_dissonance: ARRAY\n </details>\n\n<details><summary> EigenDecomposition</summary>\n\n - **Inputs:**\n - matrix: ARRAY\n - **Outputs:**\n - eigenvalues: ARRAY\n - eigenvectors: ARRAY\n </details>\n\n<details><summary> ERP</summary>\n\n - **Inputs:**\n - signal: ARRAY\n - trigger: ARRAY\n - **Outputs:**\n - erp: ARRAY\n </details>\n\n<details><summary> FacialExpression</summary>\n\n - **Inputs:**\n - image: ARRAY\n - **Outputs:**\n - emotion_probabilities: ARRAY\n - action_units: ARRAY\n - main_emotion: STRING\n </details>\n\n<details><summary> Fractality</summary>\n\n - **Inputs:**\n - data_input: ARRAY\n - **Outputs:**\n - fractal_dimension: ARRAY\n </details>\n\n<details><summary> GraphMetrics</summary>\n\n - **Inputs:**\n - matrix: ARRAY\n - 
**Outputs:**\n - clustering_coefficient: ARRAY\n - characteristic_path_length: ARRAY\n - betweenness_centrality: ARRAY\n - degree_centrality: ARRAY\n - assortativity: ARRAY\n - transitivity: ARRAY\n </details>\n\n<details><summary> HarmonicSpectrum</summary>\n\n - **Inputs:**\n - psd: ARRAY\n - **Outputs:**\n - harmonic_spectrum: ARRAY\n - max_harmonicity: ARRAY\n - avg_harmonicity: ARRAY\n </details>\n\n<details><summary> Img2Txt</summary>\n\n - **Inputs:**\n - image: ARRAY\n - **Outputs:**\n - generated_text: STRING\n </details>\n\n<details><summary> LempelZiv</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - lzc: ARRAY\n </details>\n\n<details><summary> PCA</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - principal_components: ARRAY\n </details>\n\n<details><summary> PoseEstimation</summary>\n\n - **Inputs:**\n - image: ARRAY\n - **Outputs:**\n - pose: ARRAY\n </details>\n\n<details><summary> PowerBand</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - power: ARRAY\n </details>\n\n<details><summary> PowerBandEEG</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - delta: ARRAY\n - theta: ARRAY\n - alpha: ARRAY\n - lowbeta: ARRAY\n - highbeta: ARRAY\n - gamma: ARRAY\n </details>\n\n<details><summary> ProbabilityMatrix</summary>\n\n - **Inputs:**\n - input_data: ARRAY\n - **Outputs:**\n - data: ARRAY\n </details>\n\n<details><summary> SpectroMorphology</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - spectro: ARRAY\n </details>\n\n<details><summary> SpeechSynthesis</summary>\n\n - **Inputs:**\n - text: STRING\n - voice: ARRAY\n - **Outputs:**\n - speech: ARRAY\n - transcript: STRING\n </details>\n\n<details><summary> TransitionalHarmony</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - trans_harm: ARRAY\n - melody: ARRAY\n </details>\n\n<details><summary> TuningColors</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - hue: ARRAY\n - saturation: ARRAY\n - value: ARRAY\n - color_names: STRING\n </details>\n\n<details><summary> TuningMatrix</summary>\n\n - **Inputs:**\n - tuning: ARRAY\n - **Outputs:**\n - matrix: ARRAY\n - metric_per_step: ARRAY\n - metric: ARRAY\n </details>\n\n<details><summary> TuningReduction</summary>\n\n - **Inputs:**\n - tuning: ARRAY\n - **Outputs:**\n - reduced: ARRAY\n </details>\n\n<details><summary> VAMP</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - comps: ARRAY\n </details>\n\n<details><summary> VocalExpression</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - prosody_label: STRING\n - burst_label: STRING\n - prosody_score: ARRAY\n - burst_score: ARRAY\n </details>\n\n</details>\n\n## Array\n\nNodes implementing array operations.\n\n<details><summary>View Nodes</summary>\n\n<details><summary> Clip</summary>\n\n - **Inputs:**\n - array: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Join</summary>\n\n - **Inputs:**\n - a: ARRAY\n - b: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Math</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Operation</summary>\n\n - **Inputs:**\n - a: ARRAY\n - b: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Reduce</summary>\n\n - **Inputs:**\n - array: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Reshape</summary>\n\n - **Inputs:**\n - array: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Select</summary>\n\n 
- **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Transpose</summary>\n\n - **Inputs:**\n - array: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n</details>\n\n## Inputs\n\nNodes that provide data to the pipeline.\n\n<details><summary>View Nodes</summary>\n\n<details><summary> Audiocraft</summary>\n\n - **Inputs:**\n - prompt: STRING\n - **Outputs:**\n - wav: ARRAY\n </details>\n\n<details><summary> AudioStream</summary>\n\n - **Inputs:**\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> ConstantArray</summary>\n\n - **Inputs:**\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> ConstantString</summary>\n\n - **Inputs:**\n - **Outputs:**\n - out: STRING\n </details>\n\n<details><summary> EEGRecording</summary>\n\n - **Inputs:**\n - **Outputs:**\n </details>\n\n<details><summary> ExtendedTable</summary>\n\n - **Inputs:**\n - base: TABLE\n - array_input1: ARRAY\n - array_input2: ARRAY\n - array_input3: ARRAY\n - array_input4: ARRAY\n - array_input5: ARRAY\n - string_input1: STRING\n - string_input2: STRING\n - string_input3: STRING\n - string_input4: STRING\n - string_input5: STRING\n - **Outputs:**\n - table: TABLE\n </details>\n\n<details><summary> FractalImage</summary>\n\n - **Inputs:**\n - complexity: ARRAY\n - **Outputs:**\n - image: ARRAY\n </details>\n\n<details><summary> ImageGeneration</summary>\n\n - **Inputs:**\n - prompt: STRING\n - negative_prompt: STRING\n - base_image: ARRAY\n - **Outputs:**\n - img: ARRAY\n </details>\n\n<details><summary> Kuramoto</summary>\n\n - **Inputs:**\n - initial_phases: ARRAY\n - **Outputs:**\n - phases: ARRAY\n - coupling: ARRAY\n - order_parameter: ARRAY\n - waveforms: ARRAY\n </details>\n\n<details><summary> LoadFile</summary>\n\n - **Inputs:**\n - **Outputs:**\n - data_output: ARRAY\n </details>\n\n<details><summary> LSLClient</summary>\n\n - **Inputs:**\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> MeteoMedia</summary>\n\n - **Inputs:**\n - latitude: ARRAY\n - longitude: ARRAY\n - location_name: STRING\n - **Outputs:**\n - weather_data_table: TABLE\n </details>\n\n<details><summary> OSCIn</summary>\n\n - **Inputs:**\n - **Outputs:**\n - message: TABLE\n </details>\n\n<details><summary> PromptBook</summary>\n\n - **Inputs:**\n - input_prompt: STRING\n - **Outputs:**\n - out: STRING\n </details>\n\n<details><summary> Reservoir</summary>\n\n - **Inputs:**\n - connectivity: ARRAY\n - **Outputs:**\n - data: ARRAY\n </details>\n\n<details><summary> SerialStream</summary>\n\n - **Inputs:**\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Sine</summary>\n\n - **Inputs:**\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Table</summary>\n\n - **Inputs:**\n - base: TABLE\n - new_entry: ARRAY\n - **Outputs:**\n - table: TABLE\n </details>\n\n<details><summary> TextGeneration</summary>\n\n - **Inputs:**\n - prompt: STRING\n - **Outputs:**\n - generated_text: STRING\n </details>\n\n<details><summary> VideoStream</summary>\n\n - **Inputs:**\n - **Outputs:**\n - frame: ARRAY\n </details>\n\n<details><summary> ZeroMQIn</summary>\n\n - **Inputs:**\n - **Outputs:**\n - data: ARRAY\n </details>\n\n</details>\n\n## Misc\n\nMiscellaneous nodes that do not fit into other categories.\n\n<details><summary>View Nodes</summary>\n\n<details><summary> AppendTables</summary>\n\n - **Inputs:**\n - table1: TABLE\n - table2: TABLE\n - **Outputs:**\n - output_table: TABLE\n </details>\n\n<details><summary> ColorEnhancer</summary>\n\n - 
**Inputs:**\n - image: ARRAY\n - **Outputs:**\n - enhanced_image: ARRAY\n </details>\n\n<details><summary> EdgeDetector</summary>\n\n - **Inputs:**\n - image: ARRAY\n - **Outputs:**\n - edges: ARRAY\n </details>\n\n<details><summary> FormatString</summary>\n\n - **Inputs:**\n - input_string_1: STRING\n - input_string_2: STRING\n - input_string_3: STRING\n - input_string_4: STRING\n - input_string_5: STRING\n - input_string_6: STRING\n - input_string_7: STRING\n - input_string_8: STRING\n - input_string_9: STRING\n - input_string_10: STRING\n - **Outputs:**\n - output_string: STRING\n </details>\n\n<details><summary> HSVtoRGB</summary>\n\n - **Inputs:**\n - hsv_image: ARRAY\n - **Outputs:**\n - rgb_image: ARRAY\n </details>\n\n<details><summary> JoinString</summary>\n\n - **Inputs:**\n - string1: STRING\n - string2: STRING\n - string3: STRING\n - string4: STRING\n - string5: STRING\n - **Outputs:**\n - output: STRING\n </details>\n\n<details><summary> RGBtoHSV</summary>\n\n - **Inputs:**\n - rgb_image: ARRAY\n - **Outputs:**\n - hsv_image: ARRAY\n </details>\n\n<details><summary> SetMeta</summary>\n\n - **Inputs:**\n - array: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> StringAwait</summary>\n\n - **Inputs:**\n - message: STRING\n - trigger: ARRAY\n - **Outputs:**\n - out: STRING\n </details>\n\n<details><summary> TableSelectArray</summary>\n\n - **Inputs:**\n - input_table: TABLE\n - **Outputs:**\n - output_array: ARRAY\n </details>\n\n<details><summary> TableSelectString</summary>\n\n - **Inputs:**\n - input_table: TABLE\n - **Outputs:**\n - output_string: STRING\n </details>\n\n</details>\n\n## Outputs\n\nNodes that send data to external systems.\n\n<details><summary>View Nodes</summary>\n\n<details><summary> AudioOut</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - finished: ARRAY\n </details>\n\n<details><summary> LSLOut</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n </details>\n\n<details><summary> MidiCCout</summary>\n\n - **Inputs:**\n - cc1: ARRAY\n - cc2: ARRAY\n - cc3: ARRAY\n - cc4: ARRAY\n - cc5: ARRAY\n - **Outputs:**\n - midi_status: STRING\n </details>\n\n<details><summary> MidiOut</summary>\n\n - **Inputs:**\n - note: ARRAY\n - velocity: ARRAY\n - duration: ARRAY\n - **Outputs:**\n - midi_status: STRING\n </details>\n\n<details><summary> OSCOut</summary>\n\n - **Inputs:**\n - data: TABLE\n - **Outputs:**\n </details>\n\n<details><summary> SharedMemOut</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n </details>\n\n<details><summary> WriteCsv</summary>\n\n - **Inputs:**\n - table_input: TABLE\n - **Outputs:**\n </details>\n\n<details><summary> ZeroMQOut</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n </details>\n\n</details>\n\n## Signal\n\nNodes implementing signal processing operations.\n\n<details><summary>View Nodes</summary>\n\n<details><summary> Buffer</summary>\n\n - **Inputs:**\n - val: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Cycle</summary>\n\n - **Inputs:**\n - signal: ARRAY\n - **Outputs:**\n - cycle: ARRAY\n </details>\n\n<details><summary> EMD</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - IMFs: ARRAY\n </details>\n\n<details><summary> FFT</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - mag: ARRAY\n - phase: ARRAY\n </details>\n\n<details><summary> Filter</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - filtered_data: ARRAY\n </details>\n\n<details><summary> FOOOFaperiodic</summary>\n\n - 
**Inputs:**\n - psd_data: ARRAY\n - **Outputs:**\n - offset: ARRAY\n - exponent: ARRAY\n - cf_peaks: ARRAY\n - cleaned_psd: ARRAY\n </details>\n\n<details><summary> FrequencyShift</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> Hilbert</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - inst_amplitude: ARRAY\n - inst_phase: ARRAY\n - inst_frequency: ARRAY\n </details>\n\n<details><summary> IFFT</summary>\n\n - **Inputs:**\n - spectrum: ARRAY\n - phase: ARRAY\n - **Outputs:**\n - reconstructed: ARRAY\n </details>\n\n<details><summary> PSD</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - psd: ARRAY\n </details>\n\n<details><summary> Recurrence</summary>\n\n - **Inputs:**\n - input_array: ARRAY\n - **Outputs:**\n - recurrence_matrix: ARRAY\n - RR: ARRAY\n - DET: ARRAY\n - LAM: ARRAY\n </details>\n\n<details><summary> Resample</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> ResampleJoint</summary>\n\n - **Inputs:**\n - data1: ARRAY\n - data2: ARRAY\n - **Outputs:**\n - out1: ARRAY\n - out2: ARRAY\n </details>\n\n<details><summary> Smooth</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - out: ARRAY\n </details>\n\n<details><summary> StaticBaseline</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - normalized: ARRAY\n </details>\n\n<details><summary> Threshold</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - thresholded: ARRAY\n </details>\n\n<details><summary> TimeDelayEmbedding</summary>\n\n - **Inputs:**\n - input_array: ARRAY\n - **Outputs:**\n - embedded_array: ARRAY\n </details>\n\n<details><summary> WelfordsZTransform</summary>\n\n - **Inputs:**\n - data: ARRAY\n - **Outputs:**\n - normalized: ARRAY\n </details>\n\n</details>\n<!-- !!GOOFI_PIPE_NODE_LIST_END!! -->\n",
"bugtrack_url": null,
"license": "MIT License Copyright (c) 2023 Philipp Th\u00f6lke Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the \"Software\"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ",
"summary": "Real-time neuro-/biosignal processing and streaming pipeline.",
"version": "2.1.7",
"project_urls": null,
"split_keywords": [
"signal-processing",
" neurofeedback",
" biofeedback",
" real-time",
" eeg",
" ecg"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "0a8d06b5ece93b409b511bcd9d54705b7082ba0a2c7b7719744b3b0f8e2083c0",
"md5": "d9fc8467fe9ee6bee97f3f9290edfaca",
"sha256": "383db78d55c371f9b36f41c4bae3becb15bbbfa20fc3130b360f35c1a7b9ce67"
},
"downloads": -1,
"filename": "goofi-2.1.7-py3-none-any.whl",
"has_sig": false,
"md5_digest": "d9fc8467fe9ee6bee97f3f9290edfaca",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.9",
"size": 509756,
"upload_time": "2024-10-09T23:42:27",
"upload_time_iso_8601": "2024-10-09T23:42:27.769863Z",
"url": "https://files.pythonhosted.org/packages/0a/8d/06b5ece93b409b511bcd9d54705b7082ba0a2c7b7719744b3b0f8e2083c0/goofi-2.1.7-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "9fbca3ad4cb470e9b0ba5ebab1343441aa5c2a9637708c9577a8f4d40830b220",
"md5": "f70d14d8f16cb6d9df244dded80e0e1f",
"sha256": "20e2e0d79be948426944a1a498edb93d2c7afbed81e9c854279e7f146bca43b5"
},
"downloads": -1,
"filename": "goofi-2.1.7.tar.gz",
"has_sig": false,
"md5_digest": "f70d14d8f16cb6d9df244dded80e0e1f",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.9",
"size": 469659,
"upload_time": "2024-10-09T23:42:29",
"upload_time_iso_8601": "2024-10-09T23:42:29.673200Z",
"url": "https://files.pythonhosted.org/packages/9f/bc/a3ad4cb470e9b0ba5ebab1343441aa5c2a9637708c9577a8f4d40830b220/goofi-2.1.7.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-10-09 23:42:29",
"github": false,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"lcname": "goofi"
}