# High-Level Space Field (HLSF) Toolkit
[![CI](https://github.com/awlondon/YOWR-RR5-ht8h3/actions/workflows/ci.yml/badge.svg)](https://github.com/awlondon/YOWR-RR5-ht8h3/actions/workflows/ci.yml)
This repository provides minimal building blocks for an audio-driven High-Level Space Field (HLSF) engine. The code tokenises audio or text into frequency-band motifs, maps those motifs to simple polygonal geometry, optionally visualises the results, and outputs a response after recursive field collapse, reweighting, and backpropagation.
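The core idea — folding a spectrum into coarse frequency bands and mapping band energies onto polygonal geometry — can be illustrated with a stdlib-only toy. This is a sketch of the concept, not the actual implementation in `fft_tokenizer.py` or `geometry.py`; all names below are illustrative:

```python
import cmath
import math

def band_energies(samples, n_bands):
    """Naive DFT magnitude spectrum folded into coarse bands (toy sketch)."""
    n = len(samples)
    spectrum = [
        abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
        for k in range(n // 2)
    ]
    width = max(1, len(spectrum) // n_bands)
    return [sum(spectrum[i:i + width]) for i in range(0, width * n_bands, width)]

def band_polygon(energies, radius=1.0):
    """Place one vertex per band on a circle, scaled by relative energy."""
    total = sum(energies) or 1.0
    return [
        (radius * (e / total) * math.cos(2 * math.pi * i / len(energies)),
         radius * (e / total) * math.sin(2 * math.pi * i / len(energies)))
        for i, e in enumerate(energies)
    ]

# A pure sine concentrates energy in a single band, so the resulting
# polygon is dominated by one vertex.
wave = [math.sin(2 * math.pi * 4 * t / 64) for t in range(64)]
bands = band_energies(wave, n_bands=4)
poly = band_polygon(bands)
```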
## Installation
Install the package from PyPI:
```bash
pip install hlsf_module
```
Optional extras provide additional functionality:
```bash
pip install "hlsf_module[audio]" # microphone support
pip install "hlsf_module[fft]" # FFT helpers
pip install "hlsf_module[gpu]" # GPU acceleration via CuPy
pip install "hlsf_module[visualization]" # plotting utilities
```
Sample vocabulary and weight files are bundled under `hlsf_module/data` for quick experiments.
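Because the extras are optional, code that depends on them should degrade gracefully. A minimal pattern for guarding such code paths (the exact backing packages of each extra are an assumption here — `matplotlib` for `[visualization]` is a guess):

```python
from importlib import util

def extra_available(module_name: str) -> bool:
    """Return True if the module backing an optional extra is importable."""
    return util.find_spec(module_name) is not None

# Guard GUI code paths on the dependency's presence rather than
# failing at import time.
if extra_available("matplotlib"):
    pass  # safe to import plotting helpers here
```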
## Features and objectives
- **Core utilities** – audio and text tokenisers, geometry generators and visualisation helpers provide the foundation for HLSF research.
- **Recent additions** – microphone and text front ends, asynchronous capture, semantic adjacency expansion via an LLM bridge, live viewers and a lightweight web UI with persistent weight caching broaden the modality mix.
- **Forthcoming goals** – richer cross-modal training, expanded web-based viewers and optional GPU acceleration; see the Roadmap below.
## Roadmap
Planned enhancements include:
- richer cross-modal training;
- expanded web-based viewers; and
- optional GPU acceleration to speed up large experiments.
Track progress on the [Project board](https://github.com/awlondon/YOWR-RR5-ht8h3/projects/1) or search [open issues labelled `roadmap`](https://github.com/awlondon/YOWR-RR5-ht8h3/issues?q=label%3Aroadmap).
## Repository structure
### Core package: `hlsf_module`
- `__init__.py` – package marker for the toolkit
- `adjacency_expander.py` – retrieve semantic neighbours via an LLM and cache results
- `adjacency_mapping.py` – manage relationship symbols and adjacency caches
- `agency_gates.py` – resonance-based gate deciding when motifs are externalised
- `cli.py` – command-line interface wiring geometry and visualisation options
- `clusterer.py` – merge low-energy bands into stronger motifs
- `enc_audio.py` – convert audio frames to `SymbolToken` objects with optional resonance scores
- `fft_tokenizer.py` – minimal FFT tokenizer tracking magnitudes and unwrapped phase deltas
- `geometry.py` – polygonal motif utilities without external dependencies
- `history_viewer.py` – Tk viewer for navigating token history snapshots
- `llm_client.py` – protocol and clients for language-model neighbour lookups
- `llm_weights.py` – lightweight LLM bridge and training database for token weights
- `live_visualizer.py` – asynchronous Matplotlib visualiser for resonance and geometry
- `main.py` – entry point invoking the command-line interface
- `modal_stream.py` – modal stream helpers for 1-D, 2-D and 3-D data
- `multimodal_out.py` – export pipeline state and resynthesise bands as audio
- `ngram_text_encoder.py` – n-gram tokenizer emitting multi-level token lists
- `pipeline_frame.py` – Tk frame exposing save/load controls for HLSF snapshots
- `pruning.py` – utilities for pruning weak bands and deduplicating text tokens
- `prototypes.py` – trainer enforcing a minimum spectral distance between prototypes
- `recursion_ctrl.py` – sliding-window heuristic that stops recursion when gains diminish
- `resonator.py` – maintain symbol prototypes and compute resonance scores
- `rotation_rules.py` – derive motif rotation angles from phase deltas
- `signal_io.py` – normalised `SignalStream` and asynchronous capture helpers
- `stream_pipeline.py` – asynchronous audio-processing pipeline with gating and mapping
- `tensor_mapper.py` – collect token metrics and map them to triangle geometry
- `text_encoder.py` – encode characters into deterministic symbolic tokens
- `text_fft.py` – text-driven FFT pipeline and adjacency expansion utilities
- `text_signal.py` – represent text tokens as synthetic audio frames
- `verification.py` – inverse synthesis and residual comparison helpers
- `visualization.py` – minimal polygon visualisation helpers and GUI wrappers
- `web_api.py` – FastAPI server wiring the weight cache and helper endpoints
- `weight_cache.py` – persist numeric weights and expose rotation/collapse helpers
- `weights_bp.py` – integer weight back-propagation for band metrics
#### `symbols` subpackage
- `encoding.py` – encode/decode symbol IDs using band/phase sequences
- `graph.py` – sliding-window graph tracking token weights and frequency
- `resonator.py` – resonate modality vectors against weighted prototypes
- `schema.py` – shared token schema with JSON helpers
- `vocab.py` – deterministic vocabulary mapping `(mod, code)` pairs to IDs
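The deterministic-vocabulary idea behind `vocab.py` can be sketched in a few lines. This is a toy mirror of the described behaviour, not the real class; the actual API in `hlsf_module.symbols.vocab` may differ:

```python
class Vocab:
    """Toy sketch: map (mod, code) pairs to stable integer IDs."""

    def __init__(self):
        self._ids = {}

    def id_for(self, mod: str, code: int) -> int:
        """Return a stable ID, assigning the next free integer on first sight."""
        key = (mod, code)
        if key not in self._ids:
            self._ids[key] = len(self._ids)
        return self._ids[key]

v = Vocab()
a = v.id_for("audio", 7)   # first pair seen -> ID 0
b = v.id_for("text", 7)    # distinct pair -> ID 1
assert v.id_for("audio", 7) == a  # deterministic: same pair, same ID
```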
### Stand-alone GUI
- `PDCo_Generate_Space_Field__FFT-Integration.py` – Tkinter GUI and visualiser integrating the text FFT pipeline
### Example scripts
- `adjacency_relationships.py` – expand tokens using a stub LLM for semantic neighbours
- `async_capture.py` – demonstrate asynchronous file and socket capture
- `color_encoding_demo.py` – minimal encoder translating RGB tuples
- `custom_edges_mel.py` – show mel banding with a custom edge file
- `demo_audio_loop.py` – synthesise a sine sweep and run the full pipeline
- `image_encoding_demo.py` – tokenise a small image via `ImageEncoder`
- `mixed_media_demo.py` – mix microphone, text and live visualisation
- `stream_pipeline_demo.py` – real-time streaming with `StreamPipeline` and `LiveVisualizer`
- `text_fft_pipeline_gui.py` – Tkinter interface for the text FFT pipeline
- `tokenize_multilevel.py` – run the n-gram tokenizer and print results
- `web_ui_demo.py` – start the FastAPI server and interact with its endpoints
## Tests and documentation
Comprehensive unit tests cover tokenisation, geometry mapping, pruning and more. Benchmark tests reside under `tests/benchmarks` for deterministic performance checks. Additional documentation, including architectural notes, web UI usage and troubleshooting guides, lives in the `docs/` directory. A quick-start guide for the plugin system is available in `docs/getting_started_plugins.md`.
## Architecture
The pipeline flows from tokenisation to mapping and finally visualisation, as outlined in `docs/architecture.md`.
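That flow can be pictured as simple stage composition. The stages below are placeholder lambdas standing in for the real `fft_tokenizer`, `tensor_mapper` and `visualization` modules:

```python
from typing import Callable

def compose(*stages: Callable) -> Callable:
    """Chain stages left to right: the output of one feeds the next."""
    def run(data):
        for stage in stages:
            data = stage(data)
        return data
    return run

# Placeholder stages mirroring tokenise -> map -> visualise.
tokenise = lambda text: list(text)                       # stand-in tokenizer
map_geometry = lambda toks: [(i, ord(t)) for i, t in enumerate(toks)]
render = lambda pts: f"{len(pts)} vertices"              # stand-in visualiser

pipeline = compose(tokenise, map_geometry, render)
print(pipeline("hi"))  # -> "2 vertices"
```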
## Usage examples
Generate a synthetic sine sweep and process it through the FFT pipeline:
```bash
python examples/demo_audio_loop.py --enable-fft --fft-size 256 --banding linear
```
Record two seconds from the default microphone with a custom front end:
```bash
python -m hlsf_module.cli --mic 2.0 --norm-mode rms --preemphasis 0.95 --window blackman --fft-size 4096 --banding linear
```
Run the multi-stage text→FFT pipeline on a string:
```bash
# Unix shell
HLSF_GATE_DURATION=4 python -m hlsf_module.cli --text "hello world"
# PowerShell
$env:HLSF_GATE_DURATION=4; python -m hlsf_module.cli --text "hello world"
```
PowerShell uses `$env:VAR=VALUE;` to set environment variables, whereas Unix
shells prefix the command with `VAR=value`.
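Inside Python, reading such a variable defensively looks like the following. The variable name comes from the CLI example above; the fallback value and helper name are illustrative, not the toolkit's actual internals:

```python
import os

def gate_duration(default: float = 1.0) -> float:
    """Read HLSF_GATE_DURATION from the environment, falling back on
    a default when the variable is unset or not a number."""
    raw = os.environ.get("HLSF_GATE_DURATION")
    try:
        return float(raw) if raw is not None else default
    except ValueError:
        return default

os.environ["HLSF_GATE_DURATION"] = "4"
assert gate_duration() == 4.0
```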
Use the n‑gram tokenizer directly:
```bash
python examples/tokenize_multilevel.py
```
Stream frames asynchronously from files or sockets:
```bash
python examples/async_capture.py
```
Start the FastAPI server and open the web viewer:
```bash
uvicorn hlsf_module.web_api:app
# In another terminal:
curl -X POST http://127.0.0.1:8000/text -H "Content-Type: application/json" -d '{"prompt": "hello"}'
```
Then browse to `http://127.0.0.1:8000/viewer` to see geometry and gating scores.
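The same request can be issued from Python with the standard library, assuming the `/text` endpoint accepts `{"prompt": ...}` JSON as the curl example shows:

```python
import json
from urllib import request

# Build the same POST the curl example sends.
payload = json.dumps({"prompt": "hello"}).encode("utf-8")
req = request.Request(
    "http://127.0.0.1:8000/text",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

# With the server running:
# with request.urlopen(req) as resp:
#     print(json.load(resp))
```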
## JSON output
Use `multimodal_out.snapshot_state` to export the current `HLSFState` to JSON and `resynth_bands` to generate synthetic audio from band magnitudes. `symbols.schema.SymbolBatch` offers `to_json`/`from_json` helpers for serialising token batches.
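A round-trip serialisation in the spirit of those helpers looks like this. The `Token` fields here are hypothetical; the real schema lives in `symbols/schema.py` and may carry different attributes:

```python
import json
from dataclasses import asdict, dataclass
from typing import List

@dataclass
class Token:
    """Hypothetical token shape for illustration only."""
    mod: str
    code: int
    weight: float

def batch_to_json(tokens: List[Token]) -> str:
    """Serialise a list of tokens to a JSON array string."""
    return json.dumps([asdict(t) for t in tokens])

def batch_from_json(payload: str) -> List[Token]:
    """Rebuild the token list from its JSON representation."""
    return [Token(**d) for d in json.loads(payload)]

batch = [Token("audio", 3, 0.5), Token("text", 9, 1.0)]
assert batch_from_json(batch_to_json(batch)) == batch  # lossless round trip
```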
## Training procedure
`llm_weights.TrainingDB` accumulates pairwise weights between tokens by calling a lightweight LLM bridge and updating its in-memory database.
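The accumulation pattern can be sketched with a plain dictionary. The `score` callable stands in for the LLM bridge; this is a toy mirror of the described behaviour, not `TrainingDB` itself:

```python
from collections import defaultdict
from itertools import combinations

def accumulate(db, tokens, score=lambda a, b: 1.0):
    """Add a score to every unordered token pair; keys are sorted
    so (a, b) and (b, a) share one entry."""
    for a, b in combinations(sorted(tokens), 2):
        db[(a, b)] += score(a, b)
    return db

db = defaultdict(float)
accumulate(db, ["hello", "world"])
accumulate(db, ["hello", "world", "again"])
# ("hello", "world") has now been reinforced twice.
```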
## Benchmarks
Deterministic micro-benchmarks reside under `tests/benchmarks` and compare a naive DFT against the mixed-radix FFT implementation.
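The shape of such a comparison is easy to reproduce. The repository's FFT is mixed-radix; the simpler power-of-two radix-2 variant below is only for illustrating the equivalence check the benchmarks rely on:

```python
import cmath

def naive_dft(x):
    """O(n^2) reference DFT."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def fft_radix2(x):
    """Recursive Cooley-Tukey FFT; requires len(x) to be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft_radix2(x[0::2])
    odd = fft_radix2(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        twiddled = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + twiddled
        out[k + n // 2] = even[k] - twiddled
    return out

# Both transforms must agree to numerical precision.
x = [complex(i % 3, 0) for i in range(8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(naive_dft(x), fft_radix2(x)))
```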
## Troubleshooting
Common issues and fixes are collected in the troubleshooting guide, `docs/troubleshooting.md`.