# Multiarrangement — Video & Audio Similarity Arrangement Toolkit
[Python 3.8+](https://www.python.org/downloads/)
[License: MIT](https://opensource.org/licenses/MIT)
## Overview
Multiarrangement is a Python toolkit for collecting human similarity judgements by arranging stimuli (videos or audio) on a 2D canvas. The spatial arrangement encodes perceived similarity and is converted into a full Representational Dissimilarity Matrix (RDM) for downstream analysis.
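As a rough illustration of that conversion (a sketch, not the package's internal code): pairwise on-screen distances become the dissimilarities.
```python
import numpy as np

# Illustrative sketch: 2D canvas positions -> pairwise dissimilarities.
positions = np.array([[100.0, 120.0],   # item A
                      [340.0, 200.0],   # item B
                      [150.0, 400.0]])  # item C

# Euclidean distance between every pair gives a symmetric RDM
# with a zero diagonal.
rdm = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
print(rdm.round(1))
```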
Two complementary experiment paradigms are supported:
- Set‑Cover (fixed batches): Precompute batches that efficiently cover pairs; run them in a controlled sequence.
- Adaptive LTW (Lift‑the‑Weakest): After each trial, select the next subset that maximizes evidence gain for the weakest‑evidence pairs, with optional inverse‑MDS refinement.
The package ships with windowed and fullscreen UIs, packaged demo media (15 videos and 15 audios), instruction videos, a bundled LJCR covering‑design cache (offline‑first), and Python APIs.
## Quick Demo

*Demo showing the Multiarrangement interface for collecting similarity judgments*
## What’s Included
- Package code: `multiarrangement/*` (UI, core, adaptive LTW), `coverlib/*` (covering‑design tools)
- Demo media (installed): `multiarrangement/15videos/*`, `multiarrangement/15audios/*`, `multiarrangement/sample_audio/*`, and `multiarrangement/demovids/*`
- LJCR cache (installed): `multiarrangement/ljcr_cache/*.txt` used by covering‑design CLIs by default (offline‑first)
## Install
From PyPI:
```bash
pip install multiarrangement
```
Or from source:
```bash
git clone https://github.com/UYildiz12/Multiarrangement-for-videos.git
cd Multiarrangement-for-videos
pip install .
```
Requirements: Python 3.8+, NumPy ≥ 1.20, pandas ≥ 1.3, pygame ≥ 2.0, opencv‑python ≥ 4.5, openpyxl ≥ 3.0.
## Python API
Set‑cover Demo (fixed batches):
```python
import multiarrangement as ma
ma.demo()
```
Adaptive LTW Demo (Lift‑the‑Weakest):
```python
import multiarrangement as ma
ma.demo_adaptive()
```
Both demos use the packaged `15videos` and show default instruction screens (with bundled instruction clips).
## Quick Start
The simplest way to use Multiarrangement is with the minimum arguments:
```python
import multiarrangement as ma
input_dir = "path/to/input/directory"
output_dir = "path/to/output/directory"
batches = ma.create_batches(ma.auto_detect_stimuli(input_dir), 8)
# For variable-size batches instead, set flex=True:
# batches = ma.create_batches(ma.auto_detect_stimuli(input_dir), 8, flex=True)
results = ma.multiarrangement(input_dir, batches, output_dir)
results.vis()
results.savefig(f"{output_dir}/rdm_setcover.png", title="Set‑Cover RDM")
```
Or, to use the adaptive LTW algorithm instead:
```python
import multiarrangement as ma
input_dir = "path/to/input/directory"
output_dir = "path/to/output/directory"
results = ma.multiarrangement_adaptive(input_dir, output_dir)
results.vis()
results.savefig(f"{output_dir}/rdm_adaptive.png", title="Adaptive LTW RDM")
```
Results are written to the output directory as both `.xlsx` and `.csv` files, named by date and time (`<datetime>.xlsx` / `<datetime>.csv`).
### Set‑Cover Experiment (More detailed)
```python
import multiarrangement as ma
# Build batches for 24 items, size 8 (hybrid by default)
# Fixed-size batches (flex=False)
batches = ma.create_batches(24, 8, seed=42, flex=False)
# Or variable-size batches (shrink-only):
# batches = ma.create_batches(24, 8, seed=42, flex=True)
# Run experiment (English, windowed)
results = ma.multiarrangement(
    input_dir="./videos",    # where your videos or audios are
    batches=batches,
    output_dir="./results",  # where your results will appear
    show_first_frames=True,
    fullscreen=False,
    language="en",           # or "tr" for Turkish instructions
    instructions="default",  # or None, or ["Custom", "lines"]
)
results.vis(title="Set‑Cover RDM")
results.savefig("results/rdm_setcover.png", title="Set‑Cover RDM")
```
### Adaptive LTW Experiment (More detailed)
```python
import multiarrangement as ma
results = ma.multiarrangement_adaptive(
    input_dir="./videos",
    output_dir="./results",
    participant_id="participant",
    fullscreen=True,
    language="en",
    evidence_threshold=0.35,  # stop when min pair evidence ≥ threshold
    utility_exponent=10.0,
    time_limit_minutes=None,
    min_subset_size=4,
    max_subset_size=6,
    use_inverse_mds=True,     # optional inverse‑MDS refinement
    inverse_mds_max_iter=15,
    inverse_mds_step_c=0.3,
    inverse_mds_tol=1e-4,
    instructions="default",
)
results.vis(title="Adaptive LTW RDM")
results.savefig("results/rdm_adaptive.png", title="Adaptive LTW RDM")
```
### Run the examples
Four runnable examples cover both paradigms and both modalities (video/audio); each saves an RDM heatmap to `./results`.
```bash
# Set-cover examples
python -m multiarrangement.examples.setcover_video
python -m multiarrangement.examples.setcover_audio
# Adaptive LTW examples
python -m multiarrangement.examples.ltw_video
python -m multiarrangement.examples.ltw_audio
```
These examples auto‑resolve the packaged media and create `./results` if missing.
### Custom Instructions (both paradigms)
```python
custom = [
    "Welcome to the lab.",
    "Drag each item inside the white circle.",
    "Double‑click to play/replay.",
    "Press SPACE to continue.",
]

# Set‑cover
ma.multiarrangement(
    input_dir="./videos",
    batches=batches,
    output_dir="./results",
    instructions=custom,  # show these lines instead of defaults
)

# Adaptive LTW
ma.multiarrangement_adaptive(
    input_dir="./videos",
    output_dir="./results",
    instructions=custom,  # also supported here
)
```
Key ideas behind the adaptive LTW paradigm:
- Evidence is normalized per trial: `w_ij = (d_ij / max_d)^2` so absolute pixel scale does not dominate.
- Next subset is chosen greedily to maximize (utility gain)/(time cost), starting from the globally weakest‑evidence pair.
- Optional inverse‑MDS refinement reduces arrangement prediction error across trials.
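A minimal sketch of this evidence bookkeeping, assuming an `(n, n)` evidence matrix and per-trial canvas positions (illustrative only, not the package's internal implementation):
```python
import numpy as np
from itertools import combinations

def update_evidence(evidence, subset, positions):
    """Accumulate per-pair evidence from one trial's arrangement.

    evidence : (n, n) symmetric array of accumulated evidence
    subset   : indices of the items shown in this trial
    positions: (len(subset), 2) canvas coordinates
    """
    d = np.linalg.norm(positions[:, None] - positions[None, :], axis=-1)
    max_d = d.max()
    if max_d > 0:
        w = (d / max_d) ** 2  # per-trial normalization from the text
        for a, b in combinations(range(len(subset)), 2):
            i, j = subset[a], subset[b]
            evidence[i, j] += w[a, b]
            evidence[j, i] += w[a, b]
    return evidence

def weakest_pair(evidence):
    """The pair with the least evidence seeds the next subset."""
    masked = evidence.copy()
    np.fill_diagonal(masked, np.inf)  # ignore self-pairs
    return np.unravel_index(np.argmin(masked), masked.shape)

# Usage sketch: 6 items, one trial showing items 0, 2, and 4.
ev = np.zeros((6, 6))
pos = np.random.default_rng(0).uniform(0, 500, size=(3, 2))
ev = update_evidence(ev, [0, 2, 4], pos)
print(weakest_pair(ev))  # some pair that has not yet co-occurred
```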
## Instruction Screens
- Default instructions include short videos (bundled in `demovids/`) showing drag, double‑click, and completion.
- To skip instructions, pass `instructions=None`. To customize, pass a list of strings.
## Outputs
- Set‑cover: `participant_<id>_results.xlsx`, `participant_<id>_rdm.npy`, CSV (optional)
- Adaptive LTW: `adaptive_results_results.xlsx`, `adaptive_results_rdm.npy`, `adaptive_results_evidence.npy`, `adaptive_results_meta.json`
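The `.npy` files load back with plain NumPy for analysis (the participant id below is just a placeholder):
```python
import numpy as np

# Set-cover session: RDM saved as .npy (participant id is a placeholder).
rdm = np.load("results/participant_P01_rdm.npy")

# Adaptive LTW session: RDM plus the per-pair evidence matrix.
rdm_adaptive = np.load("results/adaptive_results_rdm.npy")
evidence = np.load("results/adaptive_results_evidence.npy")

print(rdm.shape, evidence.min())  # square matrix; minimum evidence reached
```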
## Covering Designs
- Two optimizers are provided:
  - `optimize-cover`: fixed k; cache‑first LJCR seed, repair/prune, local search + group DFS
  - `optimize-cover-flex`: shrink‑only; starts from fixed k and may reduce block sizes down to `--min-k-size`
- Both prefer the installed cache path by default and support `--seed-file` to run from your own seeds.
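As a quick sanity check that a design really covers every pair, a sketch along these lines works (it assumes batches are iterables of item indices, as returned by `create_batches`):
```python
from itertools import combinations

def uncovered_pairs(n_items, batches):
    """Return item pairs that never co-occur in any batch.

    Assumes `batches` is an iterable of index collections, e.g. the
    output of ma.create_batches(n_items, k).
    """
    covered = set()
    for batch in batches:
        covered.update(frozenset(p) for p in combinations(batch, 2))
    all_pairs = {frozenset(p) for p in combinations(range(n_items), 2)}
    return all_pairs - covered

# A complete covering design leaves no pair uncovered:
# assert not uncovered_pairs(24, batches)
```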
## Troubleshooting
- Pygame/OpenCV: on minimal Linux installs, add SDL2 and video codecs via your package manager (see the example after this list).
- Audio playback: Windows uses Windows Media Player (fallback), macOS `afplay`, Linux `paplay`/`aplay`.
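On Debian/Ubuntu, for example, something like this covers the common cases (package names vary by distribution; treat it as a starting point):
```bash
# Debian/Ubuntu example; package names differ on other distributions.
sudo apt-get update
sudo apt-get install libsdl2-2.0-0 libsdl2-mixer-2.0-0 ffmpeg
```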
## References
- Inverse MDS (adaptive refinement):
- Kriegeskorte, N., & Mur, M. (2012). Inverse MDS: optimizing the stimulus arrangements for pairwise dissimilarity measures. Frontiers in Psychology, 3, 245. https://doi.org/10.3389/fpsyg.2012.00245
- Demo video dataset:
- Urgen, B. A., Nizamoğlu, H., Eroğlu, A., & Orban, G. A. (2023). A large video set of natural human actions for visual and cognitive neuroscience studies and its validation with fMRI. Brain Sciences, 13(1), 61. https://doi.org/10.3390/brainsci13010061
## License
MIT License. See `LICENSE`.
## Contributing
Issues and PRs are welcome. Please add tests for new functionality and keep changes focused.