| Field | Value |
| --- | --- |
| Name | sinter |
| Version | 1.14.0 |
| Summary | Samples stim circuits and decodes them using pymatching. |
| Author | Craig Gidney |
| Maintainer | None |
| Home page | None |
| Docs URL | None |
| Requires Python | >=3.7.0 |
| License | Apache 2 |
| Keywords | None |
| Upload time | 2024-09-24 08:40:03 |
| Requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| Coveralls test coverage | No coveralls. |
# sinter: fast QEC sampling
Sinter is a software tool/library for doing fast Monte Carlo sampling of
quantum error correction circuits.
- [How it works](#how_it_works)
- [How to install](#how_to_install)
- [How to use: Python API](#how_to_use_python)
- [Sinter Python API Reference](doc/sinter_api.md)
- [How to use: Linux Command Line](#how_to_use_linux)
- [Sinter Command Line Reference](doc/sinter_command_line.md)
- [The csv format for sample statistics](#csv_format)
<a name="how_to_works"></a>
# How it works
Sinter takes Stim circuits annotated with noise, detectors, and logical
observables.
It uses stim to sample the circuits and a decoder such as pymatching to predict
whether the logical observables were flipped or not, given the detector data.
It records how often this succeeds, and how often it fails (the error rate).
Sinter uses python multiprocessing to do parallel sampling across multiple CPU
cores, dynamically decides which circuits need more samples based on parameters
specified by the user (such as a target number of errors), saves the results in
a simple CSV format, and has some basic plotting functionality for viewing the
results.
Sinter doesn't support cloud compute, but it does scale well on a single
machine.
I've tested it on 2 core machines, 4 core machines, and 96 core machines.
Although there are potential pitfalls (e.g. setting batch sizes too large causes
thrashing), sinter generally achieves good resource utilization of the processes
you assign to it.
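To make the core loop concrete, here is a rough sketch of what a single sampling task boils down to, using stim and pymatching directly. This is not sinter's actual implementation (it skips all of sinter's batching, multiprocessing, and bookkeeping) and it assumes pymatching 2's `Matching.from_detector_error_model` and `decode_batch`:
```python
import numpy as np
import stim
import pymatching

# A small noisy surface code memory circuit, as in the examples below.
circuit = stim.Circuit.generated(
    'surface_code:rotated_memory_x',
    rounds=3,
    distance=3,
    after_clifford_depolarization=0.01,
)

# Sample detector data along with the true observable flips.
sampler = circuit.compile_detector_sampler()
dets, actual_obs = sampler.sample(10_000, separate_observables=True)

# Decode the detector data to predict the observable flips.
matching = pymatching.Matching.from_detector_error_model(
    circuit.detector_error_model(decompose_errors=True))
predicted_obs = matching.decode_batch(dets)

# Count shots where any observable was predicted incorrectly.
errors = np.count_nonzero(np.any(predicted_obs != actual_obs, axis=1))
print('estimated logical error rate:', errors / 10_000)
```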
<a name="how_to_install"></a>
# How to install
Sinter is available as a pypi package. It can be installed using pip:
```
pip install sinter
```
When you are in a python virtual environment with sinter installed, you have
access to a `sinter` command which can be used to perform tasks
from the command line. You can also `import sinter` in a python program in order
to use sinter's python API.
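For example, you can sanity-check the install from the shell using the standard `--help` output (subcommands include `collect`, `combine`, and `plot`):
```bash
# Quick check that the install worked and the CLI is on your PATH.
sinter --help
sinter collect --help
```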
<a name="how_to_use_python"></a>
# How to use: Python API
This example assumes you are in a python environment with `sinter` and
`pymatching` installed.
```python
import stim
import sinter
import matplotlib.pyplot as plt


# Generates surface code circuit tasks using Stim's circuit generation.
def generate_example_tasks():
    for p in [0.001, 0.005, 0.01]:
        for d in [3, 5]:
            yield sinter.Task(
                circuit=stim.Circuit.generated(
                    rounds=d,
                    distance=d,
                    after_clifford_depolarization=p,
                    code_task=f'surface_code:rotated_memory_x',
                ),
                json_metadata={
                    'p': p,
                    'd': d,
                },
            )


def main():
    # Collect the samples (takes a few minutes).
    samples = sinter.collect(
        num_workers=4,
        max_shots=1_000_000,
        max_errors=1000,
        tasks=generate_example_tasks(),
        decoders=['pymatching'],
    )

    # Print samples as CSV data.
    print(sinter.CSV_HEADER)
    for sample in samples:
        print(sample.to_csv_line())

    # Render a matplotlib plot of the data.
    fig, ax = plt.subplots(1, 1)
    sinter.plot_error_rate(
        ax=ax,
        stats=samples,
        group_func=lambda stat: f"Rotated Surface Code d={stat.json_metadata['d']}",
        x_func=lambda stat: stat.json_metadata['p'],
    )
    ax.loglog()
    ax.set_ylim(1e-5, 1)
    ax.grid()
    ax.set_title('Logical Error Rate vs Physical Error Rate')
    ax.set_ylabel('Logical Error Probability (per shot)')
    ax.set_xlabel('Physical Error Rate')
    ax.legend()

    # Save to file and also open in a window.
    fig.savefig('plot.png')
    plt.show()


# NOTE: This is actually necessary! If the code inside 'main()' was at the
# module level, the multiprocessing children spawned by sinter.collect would
# also attempt to run that code.
if __name__ == '__main__':
    main()
```
Example output to stdout:
```
shots, errors, discards, seconds,decoder,strong_id,json_metadata
1000000, 837, 0, 36.6,pymatching,9f7e20c54fec45b6aef7491b774dd5c0a3b9a005aa82faf5b9c051d6e40d60a9,"{""d"":3,""p"":0.001}"
53498, 1099, 0, 6.52,pymatching,3f40432443a99b933fb548b831fb54e7e245d9d73a35c03ea5a2fb2ce270f8c8,"{""d"":3,""p"":0.005}"
16269, 1023, 0, 3.23,pymatching,17b2e0c99560d20307204494ac50e31b33e50721b4ebae99d9e3577ae7248874,"{""d"":3,""p"":0.01}"
1000000, 151, 0, 77.3,pymatching,e179a18739201250371ffaae0197d8fa19d26b58dfc2942f9f1c85568645387a,"{""d"":5,""p"":0.001}"
11363, 1068, 0, 12.5,pymatching,a4dec28934a033215ff1389651a26114ecc22016a6e122008830cf7dd04ba5ad,"{""d"":5,""p"":0.01}"
61569, 1001, 0, 24.5,pymatching,2fefcc356752482fb4c6d912c228f6d18762f5752796c668b6abeb7775f5de92,"{""d"":5,""p"":0.005}"
```
and the corresponding image saved to `plot.png`:

## python API utility methods
Sinter's python module exposes a variety of methods that are handy for plotting
or analyzing QEC data.
See the [sinter API reference](https://github.com/quantumlib/Stim/blob/main/doc/sinter_api.md).
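For example, here is a short sketch (assuming sinter's `read_stats_from_csv_files` and `fit_binomial` helpers; check the API reference for exact signatures) that loads previously collected statistics and computes per-task error rates with likelihood-ratio uncertainty intervals:
```python
import sinter

# Load statistics previously written to CSV (e.g. by `sinter collect`).
stats = sinter.read_stats_from_csv_files('stats.csv')

for stat in stats:
    # Binomial fit of the logical error rate, with an uncertainty interval.
    fit = sinter.fit_binomial(
        num_shots=stat.shots - stat.discards,
        num_hits=stat.errors,
        max_likelihood_factor=1e3,
    )
    print(stat.json_metadata,
          f'error rate ~ {fit.best:.2e} (range {fit.low:.2e} to {fit.high:.2e})')
```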
<a name="how_to_use_linux"></a>
# How to use: Linux Command Line
This example assumes you are using a linux command line in a python virtualenv with `sinter` installed.
## pick circuits
For this example, we will use Stim's circuit generation functionality to produce
circuits to benchmark.
We will make rotated surface code circuits with various physical error rates,
with filenames like `d=5,p=0.001,b=X,type=rotated_surface_memory.stim`.
```bash
mkdir -p circuits
python -c "

import stim

for p in [0.001, 0.005, 0.01]:
    for d in [3, 5]:
        with open(f'circuits/d={d},p={p},b=X,type=rotated_surface_memory.stim', 'w') as f:
            c = stim.Circuit.generated(
                rounds=d,
                distance=d,
                after_clifford_depolarization=p,
                after_reset_flip_probability=p,
                before_measure_flip_probability=p,
                before_round_data_depolarization=p,
                code_task=f'surface_code:rotated_memory_x')
            print(c, file=f)
"
```
Normally, making the circuit files is the hardest step, because they are what
specifies the problem you are sampling from.
Most of your effort will generally go into creating exactly the right circuit
file for your needs.
But this is just an example, so we'll use normal surface code circuits.
# collect
You can use sinter to collect statistics on each circuit by using the `sinter collect` command.
This command takes options specifying how much data to collect, how to do decoding, etc.
The `--processes` argument decides how many worker processes to use. Set it to `auto` to use
one worker per CPU on your machine.
The `--metadata_func` argument can be used to specify a custom python expression that turns the circuit's `path`
into a dictionary or other JSON object associated with the circuit.
If you set `--metadata_func` to `auto`, sinter will use the method
`sinter.comma_separated_key_values(path)`, which parses
stim circuit paths like `folder/a=2,b=test.stim` into a dictionary like `{'a': 2, 'b': 'test'}`.
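For example, the `auto` metadata parsing applied to one of the circuit files generated earlier behaves roughly like this:
```python
import sinter

# The same parsing that `--metadata_func auto` applies to each circuit path.
metadata = sinter.comma_separated_key_values(
    'circuits/d=3,p=0.001,b=X,type=rotated_surface_memory.stim')
print(metadata)
# Expected (roughly): {'d': 3, 'p': 0.001, 'b': 'X', 'type': 'rotated_surface_memory'}
```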
By default, sinter writes the collected statistics to stdout as CSV data.
One particularly important option that changes this behavior is `--save_resume_filepath`,
which allows the command to be interrupted and restarted without losing data.
Any data already at the file specified by `--save_resume_filepath` will count towards the
amount of statistics asked to be collected, and sinter will append new statistics to this file
instead of overwriting it.
```bash
sinter collect \
    --processes auto \
    --circuits circuits/*.stim \
    --metadata_func auto \
    --decoders pymatching \
    --max_shots 1_000_000 \
    --max_errors 1000 \
    --save_resume_filepath stats.csv
```
Beware that if you SIGKILL or SIGTERM sinter, instead of just using SIGINT, it's possible
(though unlikely) that you kill it just as it is writing a row of CSV data. This truncates
the data, which requires manual intervention on your part to fix (e.g. by deleting the partial row
using a text editor).
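If you suspect this happened, one simple way to check and repair the file from the shell (assuming GNU sed, and that only the final row is damaged):
```bash
# A truncated final row will be missing columns; inspect it first.
tail -n 1 stats.csv

# If it is damaged, drop just that last line (GNU sed, edits the file in place).
sed -i '$ d' stats.csv
```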
# combine
Note that the CSV data written by sinter will contain multiple rows for each case, because
sinter starts by running small batches to see roughly what the error rate is before moving
to larger batch sizes.
You can get a single-row-per-case CSV file by using `sinter combine`:
```bash
sinter combine stats.csv
```
```
shots, errors, discards, seconds,decoder,strong_id,json_metadata
58591, 1067, 0, 5.50,pymatching,bb46c8fca4d9fd9d4d27a5039686332ac5e24011a7f2aea5a65f6040445567c0,"{""b"":""X"",""d"":3,""p"":0.005,""type"":""rotated_surface_memory""}"
1000000, 901, 0, 73.4,pymatching,4c0780830fe1747ab22767b69d1178f803943c83dd4afa6d241acf02e6dfa71f,"{""b"":""X"",""d"":3,""p"":0.001,""type"":""rotated_surface_memory""}"
16315, 1026, 0, 2.39,pymatching,64d81b177ef1a455644ac3e03f374394cd8ad385ba2ee0ac147b2405107564fc,"{""b"":""X"",""d"":3,""p"":0.01,""type"":""rotated_surface_memory""}"
1000000, 157, 0, 116.5,pymatching,100855c078af0936d098cecbd8bfb7591c0951ae69527c002c9c5f4c79bde129,"{""b"":""X"",""d"":5,""p"":0.001,""type"":""rotated_surface_memory""}"
61677, 1005, 0, 21.2,pymatching,6d7b8b312a5460c7fe08119d3c7a040daa25bd34d524611160e4aac6196293fe,"{""b"":""X"",""d"":5,""p"":0.005,""type"":""rotated_surface_memory""}"
10891, 1021, 0, 7.43,pymatching,477252e968f0f22f64ccb058c0e1e9c77b765f60f74df8b6707de7ec65ed13b7,"{""b"":""X"",""d"":5,""p"":0.01,""type"":""rotated_surface_memory""}"
```
# plot
You can use `sinter plot` to view the results you've collected.
This command takes a CSV file, an argument `--group_func` indicating how to
group the statistics into curves, an argument `--x_func` indicating how to
pick the X coordinate of each point, and various other arguments. Each `*_func`
argument takes a string that will be evaluated as a python expression, with
various useful values in scope such as a `metadata` value containing the
json metadata for the various points being evaluated. There is also a special
`m` value where `m.key` is shorthand for `metadata.get('key', None)`.
Here is an example of a `sinter plot` command:
```bash
sinter plot \
    --in stats.csv \
    --group_func "f'''Rotated Surface Code d={m.d}'''" \
    --x_func m.p \
    --xaxis "[log]Physical Error Rate" \
    --fig_size 1024 1024 \
    --out surface_code_figure.png \
    --show
```
This will save a png image of the plot, and also open a window showing a plot like this one:

<a name="csv_format"></a>
# The csv format for sample statistics
Sinter saves samples as a table using a Comma Separated Value format.
For example:
```
shots,errors,discards,seconds,decoder,strong_id,json_metadata
1000000, 837, 0, 36.6,pymatching,9f7e20c54fec45b6aef7491b774dd5c0a3b9a005aa82faf5b9c051d6e40d60a9,"{""d"":3,""p"":0.001}"
53498, 1099, 0, 6.52,pymatching,3f40432443a99b933fb548b831fb54e7e245d9d73a35c03ea5a2fb2ce270f8c8,"{""d"":3,""p"":0.005}"
16269, 1023, 0, 3.23,pymatching,17b2e0c99560d20307204494ac50e31b33e50721b4ebae99d9e3577ae7248874,"{""d"":3,""p"":0.01}"
1000000, 151, 0, 77.3,pymatching,e179a18739201250371ffaae0197d8fa19d26b58dfc2942f9f1c85568645387a,"{""d"":5,""p"":0.001}"
11363, 1068, 0, 12.5,pymatching,a4dec28934a033215ff1389651a26114ecc22016a6e122008830cf7dd04ba5ad,"{""d"":5,""p"":0.01}"
61569, 1001, 0, 24.5,pymatching,2fefcc356752482fb4c6d912c228f6d18762f5752796c668b6abeb7775f5de92,"{""d"":5,""p"":0.005}"
```
The columns are:
- `shots` (unsigned int): How many times the circuit was sampled.
- `errors` (unsigned int): How many times the decoder failed to correctly predict at least one logical observable.
- `discards` (unsigned int): How many times a shot was discarded because a postselected detector fired or because the decoder incorrectly predicted the value of a postselected observable. Discarded shots never count as errors.
- `seconds` (non-negative float): How many CPU core seconds it took to simulate and decode these shots.
- `decoder` (str): Which decoder was used.
- `strong_id` (str):
Hex representation of a cryptographic hash of the problem
being sampled from.
The hashed data includes the exact circuit that was simulated,
the decoder that was used,
the exact detector error model that was given to the decoder,
the postselection rules that were applied,
and the metadata associated with the circuit.
The purpose of the strong id is to make it impossible to accidentally combine
shots that were from separate circuits or separate versions of a circuit.
- `json_metadata` (json): A free form field that can store any value representable in
[JavaScript Object Notation](https://json.org). For example, this could be a
dictionary with helpful keys like "noise_level" or "circuit_name". The json
value is serialized into JSON and then escaped so that it can be put into the
CSV data (e.g. quotes get doubled up).
- `custom_counts` (json[Dict[str, int]]): An optional field that can store a
dictionary from string keys to integer counts represented in
[JavaScript Object Notation](https://json.org).
The counts can be a huge variety of things, ranging from per-observable error
counts to detection event counts. In general, any value that should be added
when merging rows could be in these counters.
Note that shots may be spread across multiple rows.
For example, this data:
```
shots,errors,discards,seconds,decoder,strong_id,json_metadata
500000, 437, 0, 20.5,pymatching,9f7e20c54fec45b6aef7491b774dd5c0a3b9a005aa82faf5b9c051d6e40d60a9,"{""d"":3,""p"":0.001}"
500000, 400, 0, 16.1,pymatching,9f7e20c54fec45b6aef7491b774dd5c0a3b9a005aa82faf5b9c051d6e40d60a9,"{""d"":3,""p"":0.001}"
```
has the same total statistics as this data:
```
shots,errors,discards,seconds,decoder,strong_id,json_metadata
1000000, 837, 0, 36.6,pymatching,9f7e20c54fec45b6aef7491b774dd5c0a3b9a005aa82faf5b9c051d6e40d60a9,"{""d"":3,""p"":0.001}"
```
just split over two rows instead of combined into one.
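To make the merging semantics concrete, here is a tiny illustrative check (plain Python, not using sinter itself) that the two split rows above add up to the combined row:
```python
# The two split rows from the first example.
row_a = {'shots': 500_000, 'errors': 437, 'discards': 0, 'seconds': 20.5}
row_b = {'shots': 500_000, 'errors': 400, 'discards': 0, 'seconds': 16.1}

# Merging rows with the same strong_id just adds the count/time columns.
merged = {k: row_a[k] + row_b[k] for k in row_a}
print(merged)
# totals: 1000000 shots, 837 errors, 0 discards, ~36.6 seconds
```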