ataraxis-data-structures


Name: ataraxis-data-structures
Version: 1.1.4
home_page: None
Summary: Provides classes and structures for storing, manipulating, and sharing data between Python processes.
upload_time: 2024-11-18 19:49:40
maintainer: None
docs_url: None
author: None
requires_python: >=3.10
license: GNU General Public License, Version 3, 29 June 2007 (GPL-3.0); full text at <https://www.gnu.org/licenses/gpl-3.0.html>
keywords ataraxis data-manipulation data-structures nested-dictionary shared-memory
VCS
bugtrack_url
requirements No requirements were recorded.
Travis-CI No Travis.
coveralls test coverage No coveralls.
# ataraxis-data-structures

Provides classes and structures for storing, manipulating, and sharing data between Python processes.

![PyPI - Version](https://img.shields.io/pypi/v/ataraxis-data-structures)
![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ataraxis-data-structures)
[![uv](https://tinyurl.com/uvbadge)](https://github.com/astral-sh/uv)
[![Ruff](https://tinyurl.com/ruffbadge)](https://github.com/astral-sh/ruff)
![type-checked: mypy](https://img.shields.io/badge/type--checked-mypy-blue?style=flat-square&logo=python)
![PyPI - License](https://img.shields.io/pypi/l/ataraxis-data-structures)
![PyPI - Status](https://img.shields.io/pypi/status/ataraxis-data-structures)
![PyPI - Wheel](https://img.shields.io/pypi/wheel/ataraxis-data-structures)
___

## Detailed Description

This library aggregates classes and methods that help with common data-handling tasks. This includes 
classes to manipulate data, share (move) data between different Python processes, and save and load 
data from storage. 

Generally, these classes either implement novel functionality not available through other popular libraries or extend 
existing functionality to match the specific needs of other Ataraxis modules. That said, the library is written 
so that it can be used as a standalone module with minimal dependency on other Ataraxis modules.
___

## Features

- Supports Windows, Linux, and macOS.
- Provides a Process- and Thread-safe way of sharing data between Python processes through a NumPy array structure.
- Provides tools for working with complex nested dictionaries using a path-like API.
- Provides a set of classes for converting between a wide range of Python and NumPy scalar and iterable datatypes.
- Extends standard Python dataclass to enable it to save and load itself to / from YAML files.
- Pure-python API.
- Provides a massively-scalable data logger optimized for saving byte-serialized data from multiple input Processes.
- GPL 3 License.

___

## Table of Contents

- [Dependencies](#dependencies)
- [Installation](#installation)
- [Usage](#usage)
- [API Documentation](#api-documentation)
- [Developers](#developers)
- [Versioning](#versioning)
- [Authors](#authors)
- [License](#license)
- [Acknowledgments](#acknowledgments)
___

## Dependencies

For users, all library dependencies are installed automatically for all supported installation methods 
(see [Installation](#installation) section). For developers, see the [Developers](#developers) section for 
information on installing additional development dependencies.
___

## Installation

### Source

1. Download this repository to your local machine using your preferred method, such as git-cloning. Optionally, use one
   of the stable releases that include precompiled binary wheels in addition to source code.
2. ```cd``` to the root directory of the project using your command line interface of choice.
3. Run ```python -m pip install .``` to install the project. Alternatively, if using a distribution with precompiled
   binaries, use ```python -m pip install WHEEL_PATH```, replacing 'WHEEL_PATH' with the path to the wheel file.

### PIP

Use the following command to install the library using PIP: ```pip install ataraxis-data-structures```

### Conda / Mamba

**_Note. Because the conda-forge contribution process is more involved than pip uploads, conda versions may lag behind
pip and source code distributions._**

Use the following command to install the library using Conda or Mamba: ```conda install ataraxis-data-structures```
___

## Usage

This section is broken into subsections, one for each exposed utility class or module. Each subsection progresses from a 
minimalistic 'quickstart' example to detailed notes on nuanced class functionality 
(where the class has such functionality).

### Data Converters
Generally, Data Converters are designed to loosely mimic the functionality of the
[pydantic](https://docs.pydantic.dev/latest/) project. Unlike pydantic, which is primarily a data validator, 
our Converters are designed specifically for flexible data conversion. While pydantic provides a fairly 
rigid 'coercion' mechanism to cast input data to desired types, Converter classes offer a flexible and 
nuanced mechanism for casting Python variables between different types.

#### Base Converters
To assist with converting to specific Python scalar types, we provide four 'Base' converters: NumericConverter, 
BooleanConverter, StringConverter, and NoneConverter. After initial configuration, each converter takes in any input 
and conditionally converts it to its specific Python scalar datatype using the __validate_value()__ class method.

__NumericConverter:__ Converts inputs to integers, floats, or both:
```
from ataraxis_data_structures.data_converters import NumericConverter

# NumericConverter is used to convert inputs into integers, floats or both. By default, it is configured to return
# both types. Depending on configuration, the class can be constrained to one type of outputs:
num_converter = NumericConverter(allow_integer_output=False, allow_float_output=True)
assert num_converter.validate_value(3) == 3.0

# When converting floats to integers, the class will only carry out the conversion if doing so does not require
# rounding or otherwise altering the value.
num_converter = NumericConverter(allow_integer_output=True, allow_float_output=False)
assert num_converter.validate_value(3.0) == 3

# The class can convert number-equivalents to numeric types depending on configuration. When possible, it prefers
# floating-point numbers over integers:
num_converter = NumericConverter(allow_integer_output=True, allow_float_output=True, parse_number_strings=True)
assert num_converter.validate_value('3.0') == 3.0

# NumericConverter can also filter input values based on a specified range. If the value fails validation, the method 
# returns None.
num_converter = NumericConverter(number_lower_limit=1, number_upper_limit=2, allow_float_output=False)
assert num_converter.validate_value('3.0') is None
```
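
The lossless float-to-integer rule described above can be sketched in plain Python (an illustrative sketch of the rule, not the library's implementation):

```python
def coerce_to_int(value):
    """Illustrative sketch of the 'no rounding' conversion rule described above."""
    if isinstance(value, bool):  # Booleans are not treated as numbers in this sketch
        return None
    if isinstance(value, int):
        return value
    if isinstance(value, float) and value.is_integer():
        return int(value)  # 3.0 -> 3: no information is lost
    return None  # 3.5 would require rounding, so validation fails
```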

__BooleanConverter:__ Converts inputs to booleans:
```
from ataraxis_data_structures.data_converters import BooleanConverter

# Boolean converter only has one additional parameter: whether to convert boolean-equivalents.
bool_converter = BooleanConverter(parse_boolean_equivalents=True)

assert bool_converter.validate_value(1) is True
assert bool_converter.validate_value(True) is True
assert bool_converter.validate_value('true') is True

assert bool_converter.validate_value(0) is False
assert bool_converter.validate_value(False) is False
assert bool_converter.validate_value('false') is False

# If validation fails for any input, the method returns None
bool_converter = BooleanConverter(parse_boolean_equivalents=False)
assert bool_converter.validate_value(1) is None
```
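
The boolean-equivalent parsing shown above can be sketched in plain Python (an illustrative sketch; the exact set of recognized equivalents is an assumption, not the library's definition):

```python
def parse_bool(value, parse_equivalents=True):
    """Illustrative sketch of boolean-equivalent parsing (equivalent sets are assumptions)."""
    if isinstance(value, bool):
        return value  # Genuine booleans always pass validation
    if parse_equivalents:
        if value in (1, '1', 'true', 'True'):
            return True
        if value in (0, '0', 'false', 'False'):
            return False
    return None  # Validation failed
```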

__NoneConverter:__ Converts inputs to None:
```
from ataraxis_data_structures.data_converters import NoneConverter

# None converter only has one additional parameter: whether to convert None equivalents.
none_converter = NoneConverter(parse_none_equivalents=True)

assert none_converter.validate_value('Null') is None
assert none_converter.validate_value(None) is None
assert none_converter.validate_value('none') is None

# If the method is not able to convert or validate the input, it returns the string "None":
assert none_converter.validate_value("Not an equivalent") == 'None'
```

__StringConverter:__ Converts inputs to strings. Since most Python scalar types are string-convertible, the default 
class configuration is to NOT convert inputs (to validate them without a conversion):
```
from ataraxis_data_structures.data_converters import StringConverter

# By default, string converter is configured to only validate, but not convert inputs:
str_converter = StringConverter()
assert str_converter.validate_value("True") == 'True'
assert str_converter.validate_value(1) is None  # Conversion failed

# To enable conversion, set the appropriate class initialization argument:
str_converter = StringConverter(allow_string_conversion=True)
assert str_converter.validate_value(1) == '1'

# Additionally, the class can be used to filter inputs against a predefined list of options and to force strings to
# be lower-case. Note, the option filtering is NOT case-sensitive:
str_converter = StringConverter(allow_string_conversion=True, string_force_lower=True, string_options=['1', 'ok'])
assert str_converter.validate_value(1) == '1'
assert str_converter.validate_value('OK') == 'ok'  # Valid option, converted to the lower case
assert str_converter.validate_value('2') is None  # Not a valid option
```

#### PythonDataConverter
The PythonDataConverter class expands upon the functionality of the 'Base' Converter classes. To do so, it accepts 
pre-configured instances of the 'Base' Converter classes and applies them to inputs via its __validate_value()__ 
method.

__PythonDataConverter__ extends converter functionality to __one-dimensional iterable inputs and outputs__ by applying 
a 'Base' converter to each element of the iterable. It also works with scalars:
```
from ataraxis_data_structures.data_converters import NumericConverter, PythonDataConverter

# Each input converter has to be preconfigured
numeric_converter = NumericConverter(allow_integer_output=True, allow_float_output=False, parse_number_strings=True)

# PythonDataConverter has arguments that allow providing the class with an instance for each of the 'Base' converters.
# By default, all 'Converter' arguments are set to None, indicating they are not in use. The class requires at least one
# converter to work.
python_converter = PythonDataConverter(numeric_converter=numeric_converter)

# PythonDataConverter handles scalar inputs just like the wrapped 'Base' converter:
assert python_converter.validate_value("33") == 33

# Iterable inputs default to tuple outputs. Unlike 'Base' Converters, the class uses the
# 'Validation/ConversionError' string to denote elements that failed conversion:
assert python_converter.validate_value(["33", 11, 14.0, 3.32]) == (33, 11, 14, "Validation/ConversionError")

# Optionally, the class can be configured to filter 'failed' iterable elements out and return a list instead of a tuple
python_converter = PythonDataConverter(
    numeric_converter=numeric_converter, filter_failed_elements=True, iterable_output_type="list"
)
assert python_converter.validate_value(["33", 11, 14.0, 3.32]) == [33, 11, 14]
```

__PythonDataConverter__ also allows combining __multiple 'Base' converters__ to allow multiple output types. 
*__Note:__* The outputs are preferentially converted in this order float > integer > boolean > None > string:
```
from ataraxis_data_structures.data_converters import (
    NumericConverter,
    BooleanConverter,
    StringConverter,
    PythonDataConverter,
)

# Configured converters to be combined through PythonDataConverter
numeric_converter = NumericConverter(allow_integer_output=True, allow_float_output=False, parse_number_strings=True)
bool_converter = BooleanConverter(parse_boolean_equivalents=True)
string_converter = StringConverter(allow_string_conversion=True)

# When provided with multiple converters, they are applied in this order: Numeric > Boolean > None > String
python_converter = PythonDataConverter(
    numeric_converter=numeric_converter, boolean_converter=bool_converter, string_converter=string_converter
)

# Output depends on the application hierarchy and the configuration of each 'Base' converter. If at least one converter
# 'validates' the value successfully, the 'highest' success value is returned.
assert python_converter.validate_value('33') == 33  # Parses integer-convertible string as integer

assert python_converter.validate_value('True') is True  # Parses boolean-equivalent string as boolean

# Since numeric converter cannot output floats and the input is not boolean-equivalent, it is processed by
# string-converter as a string
assert python_converter.validate_value(14.123) == '14.123'

# The principles showcased above are iteratively applied to each element of iterable inputs:
assert python_converter.validate_value(["22", False, 11.0, 3.32]) == (22, False, 11, '3.32')
```

__PythonDataConverter__ can be configured to raise exceptions instead of returning string error types:
```
from ataraxis_data_structures.data_converters import (
    NumericConverter,
    BooleanConverter,
    StringConverter,
    PythonDataConverter,
)

# Configures base converters to make sure input floating values will fail validation.
numeric_converter = NumericConverter(allow_float_output=False)
bool_converter = BooleanConverter(parse_boolean_equivalents=False)
string_converter = StringConverter(allow_string_conversion=False)

# By default, PythonDataConverter is configured to return 'Validation/ConversionError' string for any input(s) that
# fails conversion:
python_converter = PythonDataConverter(
    numeric_converter=numeric_converter, boolean_converter=bool_converter, string_converter=string_converter
)
assert python_converter.validate_value([3.124, 1.213]) == ("Validation/ConversionError", "Validation/ConversionError")

# However, the class can be configured to raise errors instead:
python_converter = PythonDataConverter(
    numeric_converter=numeric_converter,
    boolean_converter=bool_converter,
    string_converter=string_converter,
    raise_errors=True,
)
try:
    python_converter.validate_value([3.124, 1.213])  # This raises value error
except ValueError as e:
    print(f'Encountered error: {e}')
```

#### NumpyDataConverter
The NumpyDataConverter class extends the functionality of the PythonDataConverter class to support converting to and
from NumPy datatypes. The fundamental difference between Python and NumPy data is that NumPy uses C extensions and, 
therefore, requires input and output data to be strictly typed before it is processed. In the context of data 
conversion, this typically means that there is a single NumPy datatype into which one or more Python types need to 
be 'funneled'.

*__Note!__* At this time, NumpyDataConverter only supports integer, floating-point, and boolean conversion. Support 
for strings may be added in the future, but currently it is not planned.

__NumpyDataConverter__ works by wrapping an instance of the PythonDataConverter class configured to output a single 
Python datatype. After initial configuration, use the __convert_value_to_numpy()__ method to convert input 
Python values to NumPy values.
```
from ataraxis_data_structures.data_converters import (
    NumericConverter,
    PythonDataConverter,
    NumpyDataConverter
)
import numpy as np

# NumpyDataConverter requires a PythonDataConverter instance configured to return a single type:
numeric_converter = NumericConverter(allow_float_output=False, allow_integer_output=True)  # Only integers are allowed

# PythonDataConverter has to use only one 'Base' converter to satisfy the condition mentioned above. Additionally, the
# class has to be configured to raise errors instead of returning error-strings:
python_converter = PythonDataConverter(numeric_converter=numeric_converter, raise_errors=True)

numpy_converter = NumpyDataConverter(python_converter=python_converter)

# By default, NumpyDataConverter prefers signed integers to unsigned integers and automatically uses the smallest
# bit-width sufficient to represent the data. This is in contrast to the 'standard' numpy behavior that defaults 
# to 32 or 64 bit-widths depending on the output type.
assert numpy_converter.convert_value_to_numpy('3') == np.int8(3)
assert isinstance(numpy_converter.convert_value_to_numpy('3'), np.int8)
```
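
The 'smallest sufficient bit-width' selection can be sketched without NumPy (an illustrative sketch of the rule described above, not the library's code):

```python
def smallest_width(value, signed=True):
    """Returns the smallest standard bit-width (8/16/32/64) that can represent the integer."""
    for bits in (8, 16, 32, 64):
        low = -(2 ** (bits - 1)) if signed else 0
        high = (2 ** (bits - 1)) - 1 if signed else (2 ** bits) - 1
        if low <= value <= high:
            return bits
    raise OverflowError(f"{value} does not fit into 64 bits")
```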

__NumpyDataConverter__ can be additionally configured to produce outputs of specific bit-widths and, for integers,
signed or unsigned type:
```
from ataraxis_data_structures.data_converters import (
    NumericConverter,
    PythonDataConverter,
    NumpyDataConverter
)
import numpy as np

# Specifically, configures the converter to produce unsigned integers using 64 bit-widths.
numeric_converter = NumericConverter(allow_float_output=False, allow_integer_output=True)
python_converter = PythonDataConverter(numeric_converter=numeric_converter, raise_errors=True)
numpy_converter = NumpyDataConverter(python_converter=python_converter, output_bit_width=64, signed=False)

# Although the number would have automatically been converted to an 8-bit signed integer, our configuration ensures
# it is a 64-bit unsigned integer.
assert numpy_converter.convert_value_to_numpy('11') == np.uint64(11)
assert isinstance(numpy_converter.convert_value_to_numpy('11'), np.uint64)

# This works for iterables as well:
output = numpy_converter.convert_value_to_numpy([11, 341, 67481])
expected = np.array([11, 341, 67481], dtype=np.uint64)
assert np.array_equal(output, expected)
assert output.dtype == np.uint64
```

__NumpyDataConverter__ can also convert NumPy datatypes back to Python types using the __convert_value_from_numpy()__
method:
```
from ataraxis_data_structures.data_converters import (
    NumericConverter,
    PythonDataConverter,
    NumpyDataConverter
)
import numpy as np

# Configures the converter to work with floating-point numbers
numeric_converter = NumericConverter(allow_float_output=True, allow_integer_output=False)
python_converter = PythonDataConverter(numeric_converter=numeric_converter, raise_errors=True)
numpy_converter = NumpyDataConverter(python_converter=python_converter)

# Converts scalar floating types to python types
assert numpy_converter.convert_value_from_numpy(np.float64(1.23456789)) == 1.23456789
assert isinstance(numpy_converter.convert_value_from_numpy(np.float64(1.23456789)), float)

# Also works for iterables
input_array = np.array([1.234, 5.671, 6.978], dtype=np.float16)
output = numpy_converter.convert_value_from_numpy(input_array)
assert np.allclose(output, (1.234, 5.671, 6.978), atol=0.01, rtol=0)  # Fuzzy comparison due to rounding
assert isinstance(output, tuple)
```

### NestedDictionary
The NestedDictionary class wraps and manages a Python dictionary object. It exposes methods for evaluating the layout 
of the wrapped dictionary and manipulating values and sub-dictionaries in the hierarchy using a path-like API.

#### Reading and Writing values
The class contains two principal methods likely to be helpful for most users: __write_nested_value()__ and 
__read_nested_value()__, which can be used together with a path-like API to work with dictionary values:
```
from ataraxis_data_structures import NestedDictionary

# By default, the class initializes as an empty dictionary object
nested_dictionary = NestedDictionary()

# The class is designed to work with nested paths, which are one-dimensional iterables of keys. The class always
# crawls the dictionary from the highest hierarchy, sequentially indexing sublevels of the dictionary using the
# provided keys. Note! Key datatypes are important, the class respects input key datatype where possible.
path = ['level1', 'sublevel2', 'value1']  # This is the same as nested_dictionary['level1']['sublevel2']['value1']

# To write into the dictionary, you can use a path-like API:
nested_dictionary.write_nested_value(variable_path=path, value=111)

# To read from the nested dictionary, you can use the same path-like API:
assert nested_dictionary.read_nested_value(variable_path=path) == 111

# Both methods can be used to read and write individual values and whole dictionary sections:
path = ['level2']
nested_dictionary.write_nested_value(variable_path=path, value={'sublevel2': {'subsublevel1': {'value': 3}}})
assert nested_dictionary.read_nested_value(variable_path=path) == {'sublevel2': {'subsublevel1': {'value': 3}}}
```
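
The path-crawling write behavior boils down to a few lines of plain Python (an illustrative sketch of the crawling idea, not the library's implementation):

```python
def write_nested(dictionary, path, value):
    """Crawls the dictionary along the path, creating missing sections, and writes the value."""
    *section_keys, terminal_key = path
    section = dictionary
    for key in section_keys:
        section = section.setdefault(key, {})  # Creates intermediate sections as needed
    section[terminal_key] = value
```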

#### Wrapping existing dictionaries
The class can wrap pre-created dictionaries to extend class functionality to almost any Python dictionary object:
```
from ataraxis_data_structures import NestedDictionary

# The class can be initialized with a pre-created dictionary to manage that dictionary
seed_dict = {'key1': {'key2': {'key3': 10}}, 12: 'value1'}
nested_dictionary = NestedDictionary(seed_dict)

assert nested_dictionary.read_nested_value(['key1', 'key2', 'key3']) == 10
assert nested_dictionary.read_nested_value([12]) == 'value1'
```

#### Path API
The class generally supports two formats used to specify paths to desired values and sub-dictionaries: an iterable of
keys and a delimited string.
```
from ataraxis_data_structures import NestedDictionary

# Python dictionaries are very flexible with the datatypes that can be used for dictionary keys.
seed_dict = {11: {'11': {True: False}}}
nested_dictionary = NestedDictionary(seed_dict)

# When working with dictionaries that mix multiple different types for keys, you have to use the 'iterable' path format.
# This is the only format that reliably preserves and accounts for key datatypes:
assert nested_dictionary.read_nested_value([11, '11', True]) is False

# However, when all dictionary keys are of the same datatype, you can use the second format of delimiter-delimited
# strings. This format does not preserve key datatype information, but it is more human-friendly and mimics the
# path API commonly used in file systems:
seed_dict = {'11': {'11': {'True': False}}}
nested_dictionary = NestedDictionary(seed_dict, path_delimiter='/')

assert nested_dictionary.read_nested_value('11/11/True') is False

# You can always modify the 'delimiter' character via set_path_delimiter() method:
nested_dictionary.set_path_delimiter('.')
assert nested_dictionary.read_nested_value('11.11.True') is False
```
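
Internally, delimited string paths can be reduced to the iterable format. A minimal sketch of that normalization (an assumption about the mechanism, not the library's code):

```python
def normalize_path(path, delimiter='/'):
    """Converts a delimited string into a tuple of string keys; iterables pass through as tuples."""
    if isinstance(path, str):
        return tuple(path.split(delimiter))  # Key datatype information is lost: all keys become strings
    return tuple(path)  # Iterable paths preserve the original key datatypes
```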

#### Key datatype methods
The class comes with a set of methods that can be used to discover and potentially modify dictionary key datatypes.
Primarily, these methods are designed to convert the dictionary to use the same datatype for all keys, where possible, 
to enable using the 'delimited string' path API.
```
from ataraxis_data_structures import NestedDictionary

# Instantiates a dictionary with mixed datatypes.
seed_dict = {11: {'11': {True: False}}}
nested_dictionary = NestedDictionary(seed_dict)

# If you do not know the datatypes of your dictionary, you can access them via the 'key_datatypes' property, which
# returns them as a sorted tuple of strings. The property is updated during class initialization and when using methods
# that modify the dictionary, but it references a static set under-the-hood and will NOT reflect any manual changes to
# the dictionary.
assert nested_dictionary.key_datatypes == ('bool', 'int', 'str')

# You can use the convert_all_keys_to_datatype method to convert all keys to the desired type. By default, the method
# modifies the wrapped dictionary in-place, but it can be optionally configured to return a new NestedDictionary class
# instance that wraps the modified dictionary
new_nested_dict = nested_dictionary.convert_all_keys_to_datatype(datatype='str', modify_class_dictionary=False)
assert new_nested_dict.key_datatypes == ('str',)  # All keys have been converted to strings
assert nested_dictionary.key_datatypes == ('bool', 'int', 'str')  # Conversion did not affect original dictionary

# This showcases the default behavior of in-place conversion
nested_dictionary.convert_all_keys_to_datatype(datatype='int')
assert nested_dictionary.key_datatypes == ('int',)  # All keys have been converted to integers
```
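
The recursive key conversion can be sketched as follows (a minimal sketch of the idea; note that a real conversion must also handle key collisions, which this sketch ignores):

```python
def convert_keys(dictionary, caster):
    """Recursively casts every key in the nested dictionary using the provided caster."""
    return {
        caster(key): convert_keys(value, caster) if isinstance(value, dict) else value
        for key, value in dictionary.items()
    }
```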

#### Extracting variable paths
The class is equipped with methods for mapping dictionaries with unknown topologies. Specifically, the class
can find the paths to all terminal values, or search for a specific key among terminal (value), intermediate 
(sub-dictionary), or all dictionary elements:
```
from ataraxis_data_structures import NestedDictionary

# Instantiates a dictionary with uniform key datatypes and complex nesting
seed_dict = {"11": {"11": {"11": False}}, "key2": {"key2": 123}}
nested_dictionary = NestedDictionary(seed_dict)

# Extracts the paths to all values stored in the dictionary and returns them using iterable path API format (internally,
# it is referred to as 'raw').
value_paths = nested_dictionary.extract_nested_variable_paths(return_raw=True)

# The method has extracted the path to the two terminal values in the dictionary
assert len(value_paths) == 2
assert value_paths[0] == ("11", "11", "11")
assert value_paths[1] == ("key2", "key2")

# If you need to find the path to a specific variable or section, you can use the find_nested_variable_path() to search
# for the desired path:

# The search can be customized to only evaluate dictionary section keys (intermediate_only), which allows searching for
# specific sections:
intermediate_paths = nested_dictionary.find_nested_variable_path(
    target_key="key2", search_mode="intermediate_only", return_raw=True
)

# There is only one 'section' key2 in the dictionary, and this key is found inside the highest scope of the dictionary:
assert intermediate_paths == ('key2',)

# Alternatively, you can search for terminal keys (value keys) only:
terminal_paths = nested_dictionary.find_nested_variable_path(
    target_key="11", search_mode="terminal_only", return_raw=True
)

# There is exactly one path that satisfies those search requirements
assert terminal_paths == ("11", "11", "11")

# Finally, you can evaluate all keys: terminal and intermediate.
all_paths = nested_dictionary.find_nested_variable_path(
    target_key="11", search_mode="all", return_raw=True
)

# Here, 3 tuples are returned as a tuple of tuples. In the examples above, the algorithm automatically optimized
# returned data by returning it as a single tuple, since each search discovered a single path.
assert len(all_paths) == 3
assert all_paths[0] == ("11",)
assert all_paths[1] == ("11", "11",)
assert all_paths[2] == ("11", "11", "11")
```
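
The terminal-path extraction shown above boils down to a depth-first crawl, which can be sketched in plain Python (illustrative only, not the library's implementation):

```python
def extract_paths(dictionary, prefix=()):
    """Depth-first crawl that collects the full key path to every terminal value."""
    paths = []
    for key, value in dictionary.items():
        if isinstance(value, dict) and value:  # Non-empty sub-dictionaries are intermediate sections
            paths.extend(extract_paths(value, prefix + (key,)))
        else:  # Anything else (or an empty section) is treated as a terminal element
            paths.append(prefix + (key,))
    return paths
```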

#### Overwriting and deleting values
In addition to reading and adding new values to the dictionary, the class offers methods for overwriting and removing
existing dictionary sections and values. These methods can be flexibly configured to carry out a wide range of 
potentially destructive dictionary operations:
```
from ataraxis_data_structures import NestedDictionary

# Instantiates a dictionary with uniform key datatypes and complex nesting
seed_dict = {"11": {"11": {"11": False}}, "key2": {"key2": 123}}
nested_dictionary = NestedDictionary(seed_dict)

# By default, the write function is configured to allow overwriting dictionary values
value_path = "11.11.11"
modified_dictionary = nested_dictionary.write_nested_value(
    value_path, value=True, allow_terminal_overwrite=True, modify_class_dictionary=False
)

# Ensures that False is overwritten with True in the modified dictionary
assert modified_dictionary.read_nested_value(value_path) is True
assert nested_dictionary.read_nested_value(value_path) is False

# You can also overwrite dictionary sections, which is not enabled by default:
value_path = "11.11"
modified_dictionary = nested_dictionary.write_nested_value(
    value_path, value={"12": "not bool"}, allow_intermediate_overwrite=True, modify_class_dictionary=False
)

# This time, the whole intermediate section has been overwritten with the provided dictionary
assert modified_dictionary.read_nested_value(value_path) == {"12": "not bool"}
assert nested_dictionary.read_nested_value(value_path) == {"11": False}

# Similarly, you can also delete dictionary values and sections by using the dedicated deletion method. By default, it
# is designed to remove all dictionary sections that are empty after the deletion has been carried out
value_path = "11.11.11"
modified_dictionary = nested_dictionary.delete_nested_value(
    variable_path=value_path, modify_class_dictionary=False, delete_empty_sections=True
)

# Ensures the whole branch of '11' keys has been removed from the dictionary
assert '11.11.11' not in modified_dictionary.extract_nested_variable_paths()

# When empty section deletion is disabled, the branch should remain despite no longer having the deleted key:value pair
modified_dictionary = nested_dictionary.delete_nested_value(
    variable_path=value_path, modify_class_dictionary=False, delete_empty_sections=False,
)

# This path now points to an empty dictionary section, but it exists
assert '11.11' in modified_dictionary.extract_nested_variable_paths()
assert modified_dictionary.read_nested_value('11.11') == {}
```
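
The empty-section cleanup during deletion can be sketched recursively (an illustrative sketch of the pruning idea, without the class's error handling):

```python
def delete_nested(dictionary, path, delete_empty_sections=True):
    """Deletes the terminal value and, optionally, any sections left empty by the deletion."""
    key, *rest = path
    if rest:
        delete_nested(dictionary[key], rest, delete_empty_sections)
        if delete_empty_sections and not dictionary[key]:
            del dictionary[key]  # Prunes the now-empty section
    else:
        del dictionary[key]
```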

### YamlConfig
The YamlConfig class extends the functionality of standard Python dataclasses by bundling them with methods to save and
load class data to / from .yaml files. Primarily, this is helpful for dataclasses that store configuration data for other
runtimes, as it allows the data to persist between runtimes and to be edited by hand (.yaml is human-readable).

#### Saving and loading config data
This class is intentionally kept as minimalistic as possible. It does not do any input data validation and relies on the
user manually implementing that functionality, if necessary. The class is designed to be used as a parent for custom
dataclasses. 

All class 'yaml' functionality is realized through the to_yaml() and from_yaml() methods:
```
from ataraxis_data_structures import YamlConfig
from dataclasses import dataclass
from pathlib import Path
import tempfile

# First, the class needs to be subclassed as a custom dataclass
@dataclass
class MyConfig(YamlConfig):
    # Note the default field values. They differ from the custom values used below, which makes it easy to verify
    # that the example data is actually loaded from the .yaml file rather than taken from the defaults.
    integer: int = 0
    string: str = 'random'


# Instantiates the class using custom values
config = MyConfig(integer=123, string='hello')

# Uses a temporary directory to generate the path that will be used to store the file
temp_dir = tempfile.mkdtemp()
out_path = Path(temp_dir).joinpath("my_config.yaml")

# Saves the class as a .yaml file. If you want to see / edit the file manually, replace the example 'temporary'
# directory with a custom directory
config.to_yaml(config_path=out_path)

# Ensures the file has been written
assert out_path.exists()

# Loads and re-instantiates the config as a dataclass using the data inside the .yaml file
loaded_config = MyConfig.from_yaml(config_path=out_path)

# Ensures that the loaded config data matches the original config
assert loaded_config.integer == config.integer
assert loaded_config.string == config.string
```
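
The same save / load round-trip can be sketched with only the standard library (json is used here purely to keep the sketch dependency-free; the class itself produces .yaml files, and SketchConfig is a hypothetical stand-in, not part of the library):

```python
import json
import tempfile
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class SketchConfig:
    """Hypothetical stand-in for a YamlConfig subclass (json used instead of yaml)."""
    integer: int = 0
    string: str = 'random'

    def to_file(self, path: Path) -> None:
        # Serializes the dataclass fields to a human-readable text file
        path.write_text(json.dumps(asdict(self)))

    @classmethod
    def from_file(cls, path: Path) -> 'SketchConfig':
        # Re-instantiates the dataclass from the stored field values
        return cls(**json.loads(path.read_text()))

config = SketchConfig(integer=123, string='hello')
out_path = Path(tempfile.mkdtemp()).joinpath('config.json')
config.to_file(out_path)
loaded = SketchConfig.from_file(out_path)
```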

### SharedMemoryArray
The SharedMemoryArray class allows sharing data between multiple Python processes in a thread- and process-safe way.
It is designed to complement other common data-sharing methods, such as the multiprocessing and multithreading Queue 
classes. The class implements a shared one-dimensional NumPy array, allowing different processes to dynamically write 
and read any elements of the array, independent of order and without the mandatory 'consumption' of manipulated elements.

#### Array creation
The SharedMemoryArray only needs to be initialized __once__, by the highest-scope process. That is, only the parent 
process should create the SharedMemoryArray instance and provide it as an argument to all child processes during
their instantiation. The initialization process uses the input prototype NumPy array and a unique buffer name to 
generate a shared memory buffer and fill it with the input array data. 

*__Note!__* The array dimensions and datatype cannot be changed after initialization: the resultant SharedMemoryArray
will always use the same shape and datatype.
```
from ataraxis_data_structures import SharedMemoryArray
import numpy as np

# The prototype array and buffer name determine the layout of the SharedMemoryArray for its entire lifetime:
prototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)
buffer_name = 'unique_buffer'

# To initialize the array, use create_array() method. DO NOT use class initialization method directly!
sma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)

# The instantiated SharedMemoryArray object wraps an array with the same dimensions and data type as the prototype
# and uses the unique buffer name to identify the shared memory buffer to connect from different processes.
assert sma.name == buffer_name
assert sma.shape == prototype.shape
assert sma.datatype == prototype.dtype
```

#### Array connection, disconnection and destruction
Each __child__ process has to use the __connect()__ method to connect to the array before reading or writing data. 
The parent process that created the array connects to it automatically during creation and does not need to 
reconnect. At the end of each connected process's runtime, call the __disconnect()__ method to remove the reference 
to the shared buffer:
```
import numpy as np

from ataraxis_data_structures import SharedMemoryArray

# Initializes a SharedMemoryArray
prototype = np.zeros(shape=6, dtype=np.uint64)
buffer_name = "unique_buffer"
sma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)

# This method has to be called before any child process that received the array can manipulate its data. While the
# process that creates the array is connected automatically, calling the connect() method does not have negative
# consequences.
sma.connect()

# You can verify the connection status of the array by using is_connected property:
assert sma.is_connected

# This disconnects the array from shared buffer. On Windows platforms, when all instances are disconnected from the
# buffer, the buffer is automatically garbage-collected. Therefore, it is important to make sure the array has at least
# one connected instance at all times, unless you no longer intend to use the class. On Unix platforms, the buffer may
# persist even after being disconnected by all instances.
sma.disconnect()  # For each connect(), there has to be a matching disconnect() statement

assert not sma.is_connected

# On Unix platforms, you may need to manually destroy the array by calling the destroy() method. This has no effect on
# Windows (see above):
sma.destroy()  # While not strictly necessary, for each create_array(), there should be a matching destroy() call.
```

#### Reading array data
To read from the array wrapped by the class, use the __read_data()__ method. The method supports reading
individual values and array slices and can return the data as NumPy or Python values:
```
import numpy as np
from ataraxis_data_structures import SharedMemoryArray

# Initializes a SharedMemoryArray
prototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)
buffer_name = "unique_buffer"
sma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)
sma.connect()

# The method can be used to read individual elements from the array. By default, the data is read as the numpy datatype
# used by the array
output = sma.read_data(index=2)
assert output == np.uint64(3)
assert isinstance(output, np.uint64)

# You can use the 'convert_output' flag to force the method to use Python datatypes for the returned data:
output = sma.read_data(index=2, convert_output=True)
assert output == 3
assert isinstance(output, int)

# By default, the method acquires a Lock object before reading data, preventing multiple processes from working with
# the array at the same time. For some use cases this can be detrimental (for example, when you are using the array to
# share the data between multiple read-only processes). In this case, you can read the data without locking:
output = sma.read_data(index=2, convert_output=True, with_lock=False)
assert output == 3
assert isinstance(output, int)

# To read a slice of the array, provide a tuple of two indices (start and end) or a tuple of one index (start, which
# reads to the end of the array).
output = sma.read_data(index=(0,), convert_output=True, with_lock=False)
assert output == [1, 2, 3, 4, 5, 6]
assert isinstance(output, list)

# The end-index of a two-index range is excluded from the sliced data
output = sma.read_data(index=(1, 4), convert_output=False, with_lock=False)
assert np.array_equal(output, np.array([2, 3, 4], dtype=np.uint64))
assert isinstance(output, np.ndarray)
```
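The two-index form behaves like standard NumPy half-open slicing. The plain-NumPy sketch below (using a local array, not the shared one) makes the correspondence explicit:

```python
import numpy as np

arr = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)

# A (start, end) index tuple corresponds to arr[start:end]: the end index is excluded
assert np.array_equal(arr[1:4], np.array([2, 3, 4], dtype=np.uint64))

# A (start,) tuple corresponds to arr[start:]: reads from start to the end of the array
assert np.array_equal(arr[0:], arr)
```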

#### Writing array data
To write data to the array wrapped by the class, use the __write_data()__ method. Its API is deliberately kept very 
similar to the read method:
```
import numpy as np
from ataraxis_data_structures import SharedMemoryArray

# Initializes a SharedMemoryArray
prototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)
buffer_name = "unique_buffer"
sma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)
sma.connect()

# The data-writing method has a similar API to the data-reading method. It can write scalars and slices to the shared
# memory array and automatically converts the input into the datatype used by the array as needed:
sma.write_data(index=1, data=7, with_lock=True)
assert sma.read_data(index=1, convert_output=True) == 7

# Numpy inputs are automatically converted to the correct datatype if possible
sma.write_data(index=1, data=np.uint8(9), with_lock=True)
assert sma.read_data(index=1, convert_output=False) == np.uint8(9)

# Writing by slice is also supported
sma.write_data(index=(1, 3), data=[10, 11], with_lock=False)
assert sma.read_data(index=(0,), convert_output=True) == [1, 10, 11, 4, 5, 6]
```
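The automatic type conversion mirrors how NumPy coerces values assigned into a typed array. The local-array sketch below shows the analogous behavior (an illustration, not the library's internal implementation):

```python
import numpy as np

arr = np.zeros(6, dtype=np.uint64)

# Scalars and lists assigned to a typed array are coerced to the array's dtype
arr[1] = 7
arr[1:3] = [10, 11]

assert arr.dtype == np.uint64
assert arr[:3].tolist() == [0, 10, 11]
```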

#### Using the array from multiple processes
While all methods showcased above run from the same process, the main advantage of the class is that its methods work
just as well when called from different Python processes:
```
import numpy as np
from ataraxis_data_structures import SharedMemoryArray
from multiprocessing import Process


def concurrent_worker(shared_memory_object: SharedMemoryArray, index: int):
    """This worker will run in a different process.

    It increments a shared memory array variable by 1 if the variable is even. Since each increment will
    shift it to be odd, to work as intended, this process has to work together with a different process that
    increments odd values. The process shuts down once the value reaches 200.

    Args:
        shared_memory_object: The SharedMemoryArray instance to work with.
        index: The index inside the array to increment
    """
    # Connects to the array
    shared_memory_object.connect()

    # Runs until the value becomes 200
    while shared_memory_object.read_data(index) < 200:
        # Reads data from the input index
        shared_value = shared_memory_object.read_data(index)

        # Checks if the value is even and below 200
        if shared_value % 2 == 0 and shared_value < 200:
            # Increments the value by one and writes it back to the array
            shared_memory_object.write_data(index, shared_value + 1)

    # Disconnects and terminates the process
    shared_memory_object.disconnect()


if __name__ == "__main__":
    # Initializes a SharedMemoryArray
    sma = SharedMemoryArray.create_array("test_concurrent", np.zeros(5, dtype=np.int32))

    # Generates multiple processes and uses each to repeatedly write and read data from different indices of the same
    # array.
    processes = [Process(target=concurrent_worker, args=(sma, i)) for i in range(5)]
    for p in processes:
        p.start()

    # For each of the array indices, increments the value of the index if it is odd. Child processes increment even
    # values and ignore odd ones, so the only way for this code to finish is if children and parent process take turns
    # incrementing shared values until they reach 200
    while np.any(sma.read_data((0, 5)) < 200):  # Runs as long as any value is below 200
        # Loops over addressable indices
        for i in range(5):
            value = sma.read_data(i)
            if value % 2 != 0 and value < 200:  # If the value is odd and below 200, increments the value by 1
                sma.write_data(i, value + 1)

    # Waits for the processes to join
    for p in processes:
        p.join()

    # Verifies that all processes ran as expected and incremented their respective variable
    assert np.all(sma.read_data((0, 5)) == 200)

    # Cleans up the shared memory array after all processes are terminated
    sma.disconnect()
    sma.destroy()
```
### DataLogger
The DataLogger class sets up data logger instances running on isolated cores (Processes) and exposes a shared Queue 
object for buffering and piping data from any other Process to the logger cores. Currently, the logger is only intended 
for saving serialized byte arrays used by other Ataraxis libraries (notably: ataraxis-video-system and 
ataraxis-transport-layer).
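The 'serialized byte arrays' the logger consumes are one-dimensional uint8 NumPy arrays. The round-trip below sketches how arbitrary byte payloads map to and from such arrays (an illustration of the concept, not the logger's internal format):

```python
import numpy as np

data = np.array([1, 2, 3, 4, 5], dtype=np.uint8)

# tobytes() yields the raw byte payload; frombuffer() reconstructs the array view
payload = data.tobytes()
restored = np.frombuffer(payload, dtype=np.uint8)

assert len(payload) == 5
assert np.array_equal(data, restored)
```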

#### Logger creation and use
Currently, only a single DataLogger instance can exist at a time. Initializing a second instance before the first is
garbage-collected raises an error due to the internal binding of the [SharedMemoryArray](#sharedmemoryarray) class.
```
from ataraxis_data_structures import DataLogger, LogPackage
import numpy as np
import tempfile
import time as tm
from pathlib import Path

# Due to the internal use of Process classes, the logger has to be protected by the __main__ guard.
if __name__ == '__main__':
    # The Logger only needs to be provided with the path to the output directory to be used. However, it can be further
    # customized to control the number of processes and threads used to log the data. See class docstrings for details.
    tempdir = tempfile.TemporaryDirectory()  # A temporary directory for illustration purposes
    logger = DataLogger(output_directory=Path(tempdir.name))  # The logger will create a new folder: 'tempdir/data_log'

    # Before the logger starts saving data, its saver processes need to be initialized.
    logger.start()

    # To submit data to the logger, access its input_queue property and share it with all other Processes that need to
    # log byte-serialized data.
    logger_queue = logger.input_queue

    # Creates and submits example data to be logged. Note, the data has to be packaged into a LogPackage dataclass.
    source_id = 1
    timestamp = tm.perf_counter_ns()  # timestamp has to be an integer
    data = np.array([1, 2, 3, 4, 5], dtype=np.uint8)
    package = LogPackage(source_id, timestamp, data)
    logger_queue.put(package)

    # The timer has to be precise enough to resolve two consecutive datapoints (timestamp has to differ for the two
    # datapoints, so nanosecond or microsecond timers are best).
    timestamp = tm.perf_counter_ns()
    data = np.array([6, 7, 8, 9, 10], dtype=np.uint8)
    # Same source id
    package = LogPackage(source_id, timestamp, data)
    logger_queue.put(package)

    # Shutdown ensures all buffered data is saved before the logger is terminated. At the end of this runtime, there
    # should be 2 .npy files: 1_0000000000000000001.npy and 1_0000000000000000002.npy.
    logger.shutdown()

    # Verifies two .npy files were created
    assert len(list(Path(tempdir.name).glob('**/*.npy'))) == 2

    # The logger also provides a method for compressing all .npy files into .npz archives. This method is intended to
    # be called after the 'online' runtime is over to reduce the space occupied by the data.
    logger.compress_logs(remove_sources=True)  # Ensures .npy files are deleted once they are compressed into .npz file

    # The compression creates a single .npz file named after the source_id: 1_data_log.npz
    assert len(list(Path(tempdir.name).glob('**/*.npy'))) == 0
    assert len(list(Path(tempdir.name).glob('**/*.npz'))) == 1
```
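The compressed `.npz` archives can be inspected with plain NumPy. The entry names inside a real archive are set by the logger, so the sketch below builds a stand-in archive with a hypothetical `entry_0` name purely to show the enumeration pattern:

```python
import tempfile
from pathlib import Path

import numpy as np

# Builds a stand-in archive; a real '1_data_log.npz' would come from compress_logs()
archive_path = Path(tempfile.mkdtemp()).joinpath("1_data_log.npz")
np.savez_compressed(archive_path, entry_0=np.array([1, 2, 3], dtype=np.uint8))

# np.load() on a .npz file returns a lazy NpzFile; .files lists the stored entry names
with np.load(archive_path) as archive:
    entries = {name: archive[name] for name in archive.files}

assert set(entries) == {"entry_0"}
assert entries["entry_0"].tolist() == [1, 2, 3]
```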
___

## API Documentation

See the [API documentation](https://ataraxis-data-structures-api-docs.netlify.app/) for the
detailed description of the methods and classes exposed by components of this library.
___

## Developers

This section provides installation, dependency, and build-system instructions for developers who want to
modify the source code of this library. Additionally, it contains instructions for recreating the conda environments
that were used during development from the included .yml files.

### Installing the library

1. Download this repository to your local machine using your preferred method, such as git-cloning.
2. ```cd``` to the root directory of the project using your command line interface of choice.
3. Install development dependencies. You have multiple options of satisfying this requirement:
    1. **_Preferred Method:_** Use conda or pip to install
       [tox](https://tox.wiki/en/latest/user_guide.html) or use an environment that has it installed and
       call ```tox -e import``` to automatically import the os-specific development environment included with the
       source code in your local conda distribution. Alternatively, you can use ```tox -e create``` to create the 
       environment from scratch and automatically install the necessary dependencies using pyproject.toml file. See 
       [environments](#environments) section for other environment installation methods.
    2. Run ```python -m pip install .'[dev]'``` command to install development dependencies and the library using 
       pip. On some systems, you may need to use a slightly modified version of this command: 
       ```python -m pip install .[dev]```.
    3. As long as you have an environment with [tox](https://tox.wiki/en/latest/user_guide.html) installed
       and do not intend to run any code outside the predefined project automation pipelines, tox will automatically
       install all required dependencies for each task.

**Note:** When using tox automation, having a local version of the library may interfere with tox tasks that attempt
to build the library using an isolated environment. While the problem is rare, our 'tox' pipelines automatically 
install and uninstall the project from its conda environment. This relies on a static tox configuration and will only 
target the project-specific environment, so it is advised to always create the project environment via 
```tox -e import``` or ```tox -e create``` before running other tox commands.

### Additional Dependencies

In addition to installing the required python packages, separately install the following dependencies:

1. [Python](https://www.python.org/downloads/) distributions, one for each version that you intend to support. 
  Currently, this library supports versions 3.10 and above. The easiest way to get tox to work as intended is to have 
  separate Python distributions, but using [pyenv](https://github.com/pyenv/pyenv) is a good alternative. 
  This is needed for the 'test' task to work as intended.

### Development Automation

This project comes with a fully configured set of automation pipelines implemented using 
[tox](https://tox.wiki/en/latest/user_guide.html). Check [tox.ini file](tox.ini) for details about 
available pipelines and their implementation. Alternatively, call ```tox list``` from the root directory of the project
to see the list of available tasks.

**Note!** All commits to this project have to successfully complete the ```tox``` task before being pushed to GitHub. 
To minimize the runtime for this task, use ```tox --parallel```.

For more information, you can also see the 'Usage' section of the 
[ataraxis-automation project](https://github.com/Sun-Lab-NBB/ataraxis-automation) documentation.

### Environments

All environments used during development are exported as .yml files and as spec.txt files to the [envs](envs) folder.
The environment snapshots were taken on each of the three explicitly supported OS families: Windows 11, OSx (M1) 14.5
and Linux Ubuntu 22.04 LTS.

**Note!** Since the OSx environment was built for an M1 (Apple Silicon) platform, it may not work on Intel-based 
Apple devices.

To install the development environment for your OS:

1. Download this repository to your local machine using your preferred method, such as git-cloning.
2. ```cd``` into the [envs](envs) folder.
3. Use one of the installation methods below:
    1. **_Preferred Method_**: Install [tox](https://tox.wiki/en/latest/user_guide.html) or use another
       environment with already installed tox and call ```tox -e import```.
    2. **_Alternative Method_**: Run ```conda env create -f ENVNAME.yml``` or ```mamba env create -f ENVNAME.yml```. 
       Replace 'ENVNAME.yml' with the name of the environment you want to install (axbu_dev_osx for OSx, 
       axbu_dev_win for Windows, and axbu_dev_lin for Linux).

**Hint:** while only the platforms mentioned above were explicitly evaluated, this project is likely to work on any 
common OS, but may require additional configuration steps.

Since the release of [ataraxis-automation](https://github.com/Sun-Lab-NBB/ataraxis-automation) version 2.0.0, you can 
also create the development environment from scratch via the pyproject.toml dependencies. To do this, call 
```tox -e create``` from the project root directory.

### Automation Troubleshooting

Many packages used in 'tox' automation pipelines (uv, mypy, ruff) and 'tox' itself are prone to various failures. In 
most cases, this is related to their caching behavior. Despite a considerable effort to disable the caching known to 
be problematic, in some cases it cannot or should not be eliminated. If you run into an unintelligible error with 
any of the automation components, deleting the corresponding cache folder (.tox, .ruff_cache, .mypy_cache, etc.) 
manually or via a CLI command is very likely to fix the issue.
___

## Versioning

We use [semantic versioning](https://semver.org/) for this project. For the versions available, see the 
[tags on this repository](https://github.com/Sun-Lab-NBB/ataraxis-data-structures/tags).

---

## Authors

- Ivan Kondratyev ([Inkaros](https://github.com/Inkaros))
- Edwin Chen

___

## License

This project is licensed under the GPL3 License: see the [LICENSE](LICENSE) file for details.
___

## Acknowledgments

- All Sun lab [members](https://neuroai.github.io/sunlab/people) for providing the inspiration and comments during the
  development of this library.
- [numpy](https://github.com/numpy/numpy) project for providing low-level functionality for many of the 
  classes exposed through this library.
- [dacite](https://github.com/konradhalas/dacite) and [pyyaml](https://github.com/yaml/pyyaml/) for jointly providing
  the low-level functionality to read and write dataclasses to / from .yaml files.
- The creators of all other projects used in our development automation pipelines [see pyproject.toml](pyproject.toml).

---

            

Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "ataraxis-data-structures",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": "Ivan Kondratyev <ik278@cornell.edu>",
    "keywords": "ataraxis, data-manipulation, data-structures, nested-dictionary, shared-memory",
    "author": null,
    "author_email": "Ivan Kondratyev <ik278@cornell.edu>, Edwin Chen <ec769@cornell.edu>",
    "download_url": "https://files.pythonhosted.org/packages/e5/32/ceb3cde2f38e09e477def2472b8708e76724634e7d52532ab3a32f1e3129/ataraxis_data_structures-1.1.4.tar.gz",
    "platform": null,
    "description": "# ataraxis-data-structures\n\nProvides classes and structures for storing, manipulating, and sharing data between Python processes.\n\n![PyPI - Version](https://img.shields.io/pypi/v/ataraxis-data-structures)\n![PyPI - Python Version](https://img.shields.io/pypi/pyversions/ataraxis-data-structures)\n[![uv](https://tinyurl.com/uvbadge)](https://github.com/astral-sh/uv)\n[![Ruff](https://tinyurl.com/ruffbadge)](https://github.com/astral-sh/ruff)\n![type-checked: mypy](https://img.shields.io/badge/type--checked-mypy-blue?style=flat-square&logo=python)\n![PyPI - License](https://img.shields.io/pypi/l/ataraxis-data-structures)\n![PyPI - Status](https://img.shields.io/pypi/status/ataraxis-data-structures)\n![PyPI - Wheel](https://img.shields.io/pypi/wheel/ataraxis-data-structures)\n___\n\n## Detailed Description\n\nThis library aggregates the classes and methods that broadly help working with data. This includes \nclasses to manipulate the data, share (move) the data between different Python processes and save and load the \ndata from storage. \n\nGenerally, these classes either implement novel functionality not available through other popular libraries or extend \nexisting functionality to match specific needs of other project Ataraxis modules. 
That said, the library is written \nin a way that it can be used as a standalone module with minimum dependency on other Ataraxis modules.\n___\n\n## Features\n\n- Supports Windows, Linux, and macOS.\n- Provides a Process- and Thread-safe way of sharing data between Python processes through a NumPy array structure.\n- Provides tools for working with complex nested dictionaries using a path-like API.\n- Provides a set of classes for converting between a wide range of Python and NumPy scalar and iterable datatypes.\n- Extends standard Python dataclass to enable it to save and load itself to / from YAML files.\n- Pure-python API.\n- Provides a massively-scalable data logger optimized for saving byte-serialized data from multiple input Processes.\n- GPL 3 License.\n\n___\n\n## Table of Contents\n\n- [Dependencies](#dependencies)\n- [Installation](#installation)\n- [Usage](#usage)\n- [API Documentation](#api-documentation)\n- [Developers](#developers)\n- [Versioning](#versioning)\n- [Authors](#authors)\n- [License](#license)\n- [Acknowledgements](#Acknowledgments)\n___\n\n## Dependencies\n\nFor users, all library dependencies are installed automatically for all supported installation methods \n(see [Installation](#installation) section). For developers, see the [Developers](#developers) section for \ninformation on installing additional development dependencies.\n___\n\n## Installation\n\n### Source\n\n1. Download this repository to your local machine using your preferred method, such as git-cloning. Optionally, use one\n   of the stable releases that include precompiled binary wheels in addition to source code.\n2. ```cd``` to the root directory of the project using your command line interface of choice.\n3. Run ```python -m pip install .``` to install the project. 
Alternatively, if using a distribution with precompiled\n   binaries, use ```python -m pip install WHEEL_PATH```, replacing 'WHEEL_PATH' with the path to the wheel file.\n\n### PIP\n\nUse the following command to install the library using PIP: ```pip install ataraxis-data-structures```\n\n### Conda / Mamba\n\n**_Note. Due to conda-forge contributing process being more nuanced than pip uploads, conda versions may lag behind\npip and source code distributions._**\n\nUse the following command to install the library using Conda or Mamba: ```conda install ataraxis-data-structures```\n___\n\n## Usage\n\nThis section is broken into subsections for each exposed utility class or module. For each, it progresses from a \nminimalistic example and / or 'quickstart' to detailed notes on nuanced class functionality \n(if the class has such functionality).\n\n### Data Converters\nGenerally, Data Converters are designed to in some way mimic the functionality of the\n[pydantic](https://docs.pydantic.dev/latest/) project. Unlike pydantic, which is primarily a data validator, \nour Converters are designed specifically for flexible data conversion. While pydantic provides a fairly \ninflexible 'coercion' mechanism to cast input data to desired types, Converter classes offer a flexible and \nnuanced mechanism for casting Python variables between different types.\n\n#### Base Converters\nTo assist converting to specific Python scalar types, we provide 4 'Base' converters: NumericConverter, \nBooleanConverter, StringConverter, and NoneConverter. After initial configuration, each converter takes in any input \nand conditionally converts it to the specific Python scalar datatype using __validate_value()__ class method.\n\n__NumericConverter:__ Converts inputs to integers, floats, or both:\n```\nfrom ataraxis_data_structures.data_converters import NumericConverter\n\n# NumericConverter is used to convert inputs into integers, floats or both. 
By default, it is configured to return\n# both types. Depending on configuration, the class can be constrained to one type of outputs:\nnum_converter = NumericConverter(allow_integer_output=False, allow_float_output=True)\nassert num_converter.validate_value(3) == 3.0\n\n# When converting floats to integers, the class will only carry out the conversion if doing so does not require\n# rounding or otherwise altering the value.\nnum_converter = NumericConverter(allow_integer_output=True, allow_float_output=False)\nassert num_converter.validate_value(3.0) == 3\n\n# The class can convert number-equivalents to numeric types depending on configuration. When possible, it prefers\n# floating-point numbers over integers:\nnum_converter = NumericConverter(allow_integer_output=True, allow_float_output=True, parse_number_strings=True)\nassert num_converter.validate_value('3.0') == 3.0\n\n# NumericConverter can also filter input values based on a specified range. If the value fails validation, the method \n# returns None.\nnum_converter = NumericConverter(number_lower_limit=1, number_upper_limit=2, allow_float_output=False)\nassert num_converter.validate_value('3.0') is None\n```\n\n__BooleanConverter:__ Converts inputs to booleans:\n```\nfrom ataraxis_data_structures.data_converters import BooleanConverter\n\n# Boolean converter only has one additional parameter: whether to convert boolean-equivalents.\nbool_converter = BooleanConverter(parse_boolean_equivalents=True)\n\nassert bool_converter.validate_value(1) is True\nassert bool_converter.validate_value(True) is True\nassert bool_converter.validate_value('true') is True\n\nassert bool_converter.validate_value(0) is False\nassert bool_converter.validate_value(False) is False\nassert bool_converter.validate_value('false') is False\n\n# If valdiation fails for any input, the emthod returns None\nbool_converter = BooleanConverter(parse_boolean_equivalents=False)\nassert bool_converter.validate_value(1) is 
None\n```\n\n__NoneConverter:__ Converts inputs to None:\n```\nfrom ataraxis_data_structures.data_converters import NoneConverter\n\n# None converter only has one additional parameter: whether to convert None equivalents.\nbool_converter = NoneConverter(parse_none_equivalents=True)\n\nassert bool_converter.validate_value('Null') is None\nassert bool_converter.validate_value(None) is None\nassert bool_converter.validate_value('none') is None\n\n# If the method is not able to convert or validate the input, it returns string \"None\":\nassert bool_converter.validate_value(\"Not an equivalent\") == 'None'\n```\n\n__StringConverter:__ Converts inputs to strings. Since most Python scalar types are string-convertible, the default \nclass configuration is to NOT convert inputs (to validate them without a conversion):\n```\nfrom ataraxis_data_structures.data_converters import StringConverter\n\n# By default, string converter is configured to only validate, but not convert inputs:\nstr_converter = StringConverter()\nassert str_converter.validate_value(\"True\") == 'True'\nassert str_converter.validate_value(1) is None  # Conversion failed\n\n# To enable conversion, set the appropriate class initialization argument:\nstr_converter = StringConverter(allow_string_conversion=True)\nassert str_converter.validate_value(1) == '1'\n\n# Additionally, the class can be sued to filter inputs based on a predefined list and force strings to be lower-case.\n# Note, filtering is NOT case-sensitive:\nstr_converter = StringConverter(allow_string_conversion=True, string_force_lower=True, string_options=['1', 'ok'])\nassert str_converter.validate_value(1) == '1'\nassert str_converter.validate_value('OK') == 'ok'  # Valid option, converted to the lower case\nassert str_converter.validate_value('2') is None  # Not a valid option\n```\n\n#### PythonDataConverter\nThe PythonDataConverter class expands upon the functionality of the 'Base' Converter classes. 
To do so, it accepts \npre-configured instances of the 'Base' Converter classes and applies them to inputs via its' __validate_value()__ \nmethod.\n\n__PythonDataConverter__ extends converter functionality to __one-dimensional iterable inputs and outputs__ by applying \na 'Base' converter to each element of the iterable. It also works with scalars:\n```\nfrom ataraxis_data_structures.data_converters import NumericConverter, PythonDataConverter\n\n# Each input converter has to be preconfigured\nnumeric_converter = NumericConverter(allow_integer_output=True, allow_float_output=False, parse_number_strings=True)\n\n# PythonDataConverter has arguments that allow providing the class with an instance for each of the 'Base' converters.\n# By default, all 'Converter' arguments are set to None, indicating they are not in use. The class requires at least one\n# converter to work.\npython_converter = PythonDataConverter(numeric_converter=numeric_converter)\n\n# PythonDataConverter class extends wrapped 'Base' converter functionality to iterables:\nassert python_converter.validate_value(\"33\") == 33\n\n# Defaults to tuple outputs. Unlike 'Base' Converters, the class uses a long 'Validation/ConversionError' string to\n# denote outputs that failed to be converted\nassert python_converter.validate_value([\"33\", 11, 14.0, 3.32]) == (33, 11, 14, \"Validation/ConversionError\")\n\n# Optionally, the class can be configured to filter 'failed' iterable elements out and return a list instead of a tuple\npython_converter = PythonDataConverter(\n    numeric_converter=numeric_converter, filter_failed_elements=True, iterable_output_type=\"list\"\n)\nassert python_converter.validate_value([\"33\", 11, 14.0, 3.32]) == [33, 11, 14]\n```\n\n__PythonDataConverter__ also allows combining __multiple 'Base' converters__ to allow multiple output types. 
\n*__Note:__* The outputs are preferentially converted in this order float > integer > boolean > None > string:\n```\nfrom ataraxis_data_structures.data_converters import (\n    NumericConverter,\n    BooleanConverter,\n    StringConverter,\n    PythonDataConverter,\n)\n\n# Configured converters to be combined through PythonDataConverter\nnumeric_converter = NumericConverter(allow_integer_output=True, allow_float_output=False, parse_number_strings=True)\nbool_converter = BooleanConverter(parse_boolean_equivalents=True)\nstring_converter = StringConverter(allow_string_conversion=True)\n\n# When provided with multiple converters, they are applied in this order: Numeric > Boolean > None > String\npython_converter = PythonDataConverter(\n    numeric_converter=numeric_converter, boolean_converter=bool_converter, string_converter=string_converter\n)\n\n# Output depends on the application hierarchy and the configuration of each 'Base' converter. If at least one converter\n# 'validates' the value successfully, the 'highest' success value is returned.\nassert python_converter.validate_value('33') == 33  # Parses integer-convertible string as integer\n\nassert python_converter.validate_value('True') is True  # Parses boolean-equivalent string as boolean\n\n# Since numeric converter cannot output floats and the input is not boolean-equivalent, it is processed by\n# string-converter as a string\nassert python_converter.validate_value(14.123) == '14.123'\n\n# The principles showcased above are iteratively applied to each element of iterable inputs:\nassert python_converter.validate_value([\"22\", False, 11.0, 3.32]) == (22, False, 11, '3.32')\n```\n\n__PythonDataConverter__ can be configured to raise exceptions instead of returning string error types:\n```\nfrom ataraxis_data_structures.data_converters import (\n    NumericConverter,\n    BooleanConverter,\n    StringConverter,\n    PythonDataConverter,\n)\n\n# Configures base converters to make sure input floating values will 
fail validation.\nnumeric_converter = NumericConverter(allow_float_output=False)\nbool_converter = BooleanConverter(parse_boolean_equivalents=False)\nstring_converter = StringConverter(allow_string_conversion=False)\n\n# By default, PythonDataConverter is configured to return the 'Validation/ConversionError' string for any input that\n# fails conversion:\npython_converter = PythonDataConverter(\n    numeric_converter=numeric_converter, boolean_converter=bool_converter, string_converter=string_converter\n)\nassert python_converter.validate_value([3.124, 1.213]) == (\"Validation/ConversionError\", \"Validation/ConversionError\")\n\n# However, the class can be configured to raise errors instead:\npython_converter = PythonDataConverter(\n    numeric_converter=numeric_converter,\n    boolean_converter=bool_converter,\n    string_converter=string_converter,\n    raise_errors=True,\n)\ntry:\n    python_converter.validate_value([3.124, 1.213])  # This raises a ValueError\nexcept ValueError as e:\n    print(f'Encountered error: {e}')\n```\n\n#### NumpyDataConverter\nThe NumpyDataConverter class extends the functionality of the PythonDataConverter class to support converting to and\nfrom NumPy datatypes. The fundamental difference between Python and NumPy data is that NumPy uses C-extensions and,\ntherefore, requires input and output data to be strictly typed before it is processed. In the context of data\nconversion, this typically means that there is a single NumPy datatype into which we need to 'funnel' one or more\nPython types.\n\n*__Note!__* At this time, NumpyDataConverter only supports integer, floating-point, and boolean conversion. Support\nfor strings may be added in the future, but currently it is not planned.\n\n__NumpyDataConverter__ works by wrapping an instance of the PythonDataConverter class configured so that it outputs\na single Python datatype. 
After initial configuration, use the __convert_value_to_numpy()__ method to convert input\nPython values to NumPy values.\n```\nfrom ataraxis_data_structures.data_converters import (\n    NumericConverter,\n    PythonDataConverter,\n    NumpyDataConverter\n)\nimport numpy as np\n\n# NumpyDataConverter requires a PythonDataConverter instance configured to return a single type:\nnumeric_converter = NumericConverter(allow_float_output=False, allow_integer_output=True)  # Only integers are allowed\n\n# PythonDataConverter has to use only one Base converter to satisfy the conditions mentioned above. Additionally, the\n# class has to be configured to raise errors instead of returning error-strings:\npython_converter = PythonDataConverter(numeric_converter=numeric_converter, raise_errors=True)\n\nnumpy_converter = NumpyDataConverter(python_converter=python_converter)\n\n# By default, NumpyDataConverter prefers signed integers to unsigned integers and automatically uses the smallest\n# bit-width sufficient to represent the data. 
This is in contrast to the 'standard' numpy behavior that defaults\n# to 32- or 64-bit widths depending on the output type.\nassert numpy_converter.convert_value_to_numpy('3') == np.int8(3)\nassert isinstance(numpy_converter.convert_value_to_numpy('3'), np.int8)\n```\n\n__NumpyDataConverter__ can additionally be configured to produce outputs of a specific bit-width and, for integers,\nof signed or unsigned type:\n```\nfrom ataraxis_data_structures.data_converters import (\n    NumericConverter,\n    PythonDataConverter,\n    NumpyDataConverter\n)\nimport numpy as np\n\n# Specifically, this configures the converter to produce unsigned integers with a 64-bit width.\nnumeric_converter = NumericConverter(allow_float_output=False, allow_integer_output=True)\npython_converter = PythonDataConverter(numeric_converter=numeric_converter, raise_errors=True)\nnumpy_converter = NumpyDataConverter(python_converter=python_converter, output_bit_width=64, signed=False)\n\n# Although the number would have automatically been converted to an 8-bit signed integer, our configuration ensures\n# it is a 64-bit unsigned integer.\nassert numpy_converter.convert_value_to_numpy('11') == np.uint64(11)\nassert isinstance(numpy_converter.convert_value_to_numpy('11'), np.uint64)\n\n# This works for iterables as well:\noutput = numpy_converter.convert_value_to_numpy([11, 341, 67481])\nexpected = np.array([11, 341, 67481], dtype=np.uint64)\nassert np.array_equal(output, expected)\nassert output.dtype == np.uint64\n```\n\n__NumpyDataConverter__ can be used to convert NumPy datatypes back to Python types using the\n__convert_value_from_numpy()__ method:\n```\nfrom ataraxis_data_structures.data_converters import (\n    NumericConverter,\n    PythonDataConverter,\n    NumpyDataConverter\n)\nimport numpy as np\n\n# Configures the converter to work with floating-point numbers\nnumeric_converter = NumericConverter(allow_float_output=True, allow_integer_output=False)\npython_converter = 
PythonDataConverter(numeric_converter=numeric_converter, raise_errors=True)\nnumpy_converter = NumpyDataConverter(python_converter=python_converter)\n\n# Converts scalar floating-point types to Python types\nassert numpy_converter.convert_value_from_numpy(np.float64(1.23456789)) == 1.23456789\nassert isinstance(numpy_converter.convert_value_from_numpy(np.float64(1.23456789)), float)\n\n# Also works for iterables\ninput_array = np.array([1.234, 5.671, 6.978], dtype=np.float16)\noutput = numpy_converter.convert_value_from_numpy(input_array)\nassert np.allclose(output, (1.234, 5.671, 6.978), atol=0.01, rtol=0)  # Fuzzy comparison due to rounding\nassert isinstance(output, tuple)\n```\n\n### NestedDictionary\nThe NestedDictionary class wraps and manages a Python dictionary object. It exposes methods for evaluating the layout\nof the wrapped dictionary and for manipulating values and sub-dictionaries in the hierarchy using a path-like API.\n\n#### Reading and Writing values\nThe class contains two principal methods likely to be helpful for most users: __write_nested_value()__ and\n__read_nested_value()__, which can be used together with a path-like API to work with dictionary values:\n```\nfrom ataraxis_data_structures import NestedDictionary\n\n# By default, the class initializes as an empty dictionary object\nnested_dictionary = NestedDictionary()\n\n# The class is designed to work with nested paths, which are one-dimensional iterables of keys. The class always\n# crawls the dictionary from the highest hierarchy level, sequentially indexing sublevels of the dictionary using the\n# provided keys. Note! 
Key datatypes are important: the class respects input key datatypes where possible.\npath = ['level1', 'sublevel2', 'value1']  # This is the same as nested_dict['level1']['sublevel2']['value1']\n\n# To write into the dictionary, you can use a path-like API:\nnested_dictionary.write_nested_value(variable_path=path, value=111)\n\n# To read from the nested dictionary, you can use the same path-like API:\nassert nested_dictionary.read_nested_value(variable_path=path) == 111\n\n# Both methods can be used to read and write individual values and whole dictionary sections:\npath = ['level2']\nnested_dictionary.write_nested_value(variable_path=path, value={'sublevel2': {'subsublevel1': {'value': 3}}})\nassert nested_dictionary.read_nested_value(variable_path=path) == {'sublevel2': {'subsublevel1': {'value': 3}}}\n```\n\n#### Wrapping existing dictionaries\nThe class can wrap pre-created dictionaries, extending its functionality to almost any Python dictionary object:\n```\nfrom ataraxis_data_structures import NestedDictionary\n\n# The class can be initialized with a pre-created dictionary to manage that dictionary\nseed_dict = {'key1': {'key2': {'key3': 10}}, 12: 'value1'}\nnested_dictionary = NestedDictionary(seed_dict)\n\nassert nested_dictionary.read_nested_value(['key1', 'key2', 'key3']) == 10\nassert nested_dictionary.read_nested_value([12]) == 'value1'\n```\n\n#### Path API\nThe class generally supports two formats for specifying paths to desired values and sub-dictionaries: an iterable of\nkeys and a delimited string.\n```\nfrom ataraxis_data_structures import NestedDictionary\n\n# Python dictionaries are very flexible with the datatypes that can be used for dictionary keys.\nseed_dict = {11: {'11': {True: False}}}\nnested_dictionary = NestedDictionary(seed_dict)\n\n# When working with dictionaries that mix multiple different types for keys, you have to use the 'iterable' path format.\n# This is the only format that reliably preserves and accounts for key 
datatypes:\nassert nested_dictionary.read_nested_value([11, '11', True]) is False\n\n# However, when all dictionary keys are of the same datatype, you can use the second format: delimiter-separated\n# strings. This format does not preserve key datatype information, but it is more human-friendly and mimics the\n# path API commonly used in file systems:\nseed_dict = {'11': {'11': {'True': False}}}\nnested_dictionary = NestedDictionary(seed_dict, path_delimiter='/')\n\nassert nested_dictionary.read_nested_value('11/11/True') is False\n\n# You can always modify the 'delimiter' character via the set_path_delimiter() method:\nnested_dictionary.set_path_delimiter('.')\nassert nested_dictionary.read_nested_value('11.11.True') is False\n```\n\n#### Key datatype methods\nThe class comes with a set of methods that can be used to discover and potentially modify dictionary key datatypes.\nPrimarily, these methods are designed to convert the dictionary to use the same datatype for all keys, where possible,\nto enable using the 'delimited string' path API.\n```\nfrom ataraxis_data_structures import NestedDictionary\n\n# Instantiates a dictionary with mixed key datatypes.\nseed_dict = {11: {'11': {True: False}}}\nnested_dictionary = NestedDictionary(seed_dict)\n\n# If you do not know the datatypes of your dictionary, you can access them via the 'key_datatypes' property, which\n# returns them as a sorted tuple of strings. The property is updated during class initialization and when using methods\n# that modify the dictionary, but it references a static set under the hood and will NOT reflect any manual changes to\n# the dictionary.\nassert nested_dictionary.key_datatypes == ('bool', 'int', 'str')\n\n# You can use the convert_all_keys_to_datatype method to convert all keys to the desired type. 
By default, the method\n# modifies the wrapped dictionary in-place, but it can optionally be configured to return a new NestedDictionary class\n# instance that wraps the modified dictionary.\nnew_nested_dict = nested_dictionary.convert_all_keys_to_datatype(datatype='str', modify_class_dictionary=False)\nassert new_nested_dict.key_datatypes == ('str',)  # All keys have been converted to strings\nassert nested_dictionary.key_datatypes == ('bool', 'int', 'str')  # Conversion did not affect the original dictionary\n\n# This showcases the default behavior of in-place conversion\nnested_dictionary.convert_all_keys_to_datatype(datatype='int')\nassert nested_dictionary.key_datatypes == ('int',)  # All keys have been converted to integers\n```\n\n#### Extracting variable paths\nThe class is equipped with methods for mapping dictionaries with unknown topologies. Specifically, the class\ncan find the paths to all terminal values or to specific terminal (value), intermediate (sub-dictionary), or both\n(all) dictionary elements:\n```\nfrom ataraxis_data_structures import NestedDictionary\n\n# Instantiates a dictionary with mixed key datatypes and complex nesting\nseed_dict = {\"11\": {\"11\": {\"11\": False}}, \"key2\": {\"key2\": 123}}\nnested_dictionary = NestedDictionary(seed_dict)\n\n# Extracts the paths to all values stored in the dictionary and returns them using the iterable path API format\n# (internally, it is referred to as 'raw').\nvalue_paths = nested_dictionary.extract_nested_variable_paths(return_raw=True)\n\n# The method has extracted the paths to the two terminal values in the dictionary\nassert len(value_paths) == 2\nassert value_paths[0] == (\"11\", \"11\", \"11\")\nassert value_paths[1] == (\"key2\", \"key2\")\n\n# If you need to find the path to a specific variable or section, you can use the find_nested_variable_path() method\n# to search for the desired path:\n\n# The search can be customized to only evaluate dictionary section keys (intermediate_only), which allows searching 
for\n# specific sections:\nintermediate_paths = nested_dictionary.find_nested_variable_path(\n    target_key=\"key2\", search_mode=\"intermediate_only\", return_raw=True\n)\n\n# There is only one 'section' key2 in the dictionary, and this key is found inside the highest scope of the dictionary:\nassert intermediate_paths == ('key2',)\n\n# Alternatively, you can search for terminal keys (value keys) only:\nterminal_paths = nested_dictionary.find_nested_variable_path(\n    target_key=\"11\", search_mode=\"terminal_only\", return_raw=True\n)\n\n# There is exactly one path that satisfies those search requirements\nassert terminal_paths == (\"11\", \"11\", \"11\")\n\n# Finally, you can evaluate all keys: terminal and intermediate.\nall_paths = nested_dictionary.find_nested_variable_path(\n    target_key=\"11\", search_mode=\"all\", return_raw=True\n)\n\n# Here, 3 paths are returned as a tuple of tuples. In the examples above, the algorithm automatically simplified the\n# returned data to a single tuple, since each search discovered a single path.\nassert len(all_paths) == 3\nassert all_paths[0] == (\"11\",)\nassert all_paths[1] == (\"11\", \"11\")\nassert all_paths[2] == (\"11\", \"11\", \"11\")\n```\n\n#### Overwriting and deleting values\nIn addition to reading and adding new values to the dictionary, the class offers methods for overwriting and removing\nexisting dictionary sections and values. 
These methods can be flexibly configured to carry out a wide range of\npotentially destructive dictionary operations:\n```\nfrom ataraxis_data_structures import NestedDictionary\n\n# Instantiates a dictionary with mixed key datatypes and complex nesting\nseed_dict = {\"11\": {\"11\": {\"11\": False}}, \"key2\": {\"key2\": 123}}\nnested_dictionary = NestedDictionary(seed_dict)\n\n# By default, the write function is configured to allow overwriting dictionary values\nvalue_path = \"11.11.11\"\nmodified_dictionary = nested_dictionary.write_nested_value(\n    value_path, value=True, allow_terminal_overwrite=True, modify_class_dictionary=False\n)\n\n# Ensures that 'False' is overwritten with 'True' in the modified dictionary\nassert modified_dictionary.read_nested_value(value_path) is True\nassert nested_dictionary.read_nested_value(value_path) is False\n\n# You can also overwrite dictionary sections, which is not enabled by default:\nvalue_path = \"11.11\"\nmodified_dictionary = nested_dictionary.write_nested_value(\n    value_path, value={\"12\": \"not bool\"}, allow_intermediate_overwrite=True, modify_class_dictionary=False\n)\n\n# This time, the whole intermediate section has been overwritten with the provided dictionary\nassert modified_dictionary.read_nested_value(value_path) == {\"12\": \"not bool\"}\nassert nested_dictionary.read_nested_value(value_path) == {\"11\": False}\n\n# Similarly, you can also delete dictionary values and sections by using the dedicated deletion method. 
By default, it\n# is designed to remove all dictionary sections that are empty after the deletion has been carried out.\nvalue_path = \"11.11.11\"\nmodified_dictionary = nested_dictionary.delete_nested_value(\n    variable_path=value_path, modify_class_dictionary=False, delete_empty_sections=True\n)\n\n# Ensures the whole branch of '11' keys has been removed from the dictionary\nassert '11.11.11' not in modified_dictionary.extract_nested_variable_paths()\n\n# When empty section deletion is disabled, the branch should remain despite no longer having the deleted key:value pair\nmodified_dictionary = nested_dictionary.delete_nested_value(\n    variable_path=value_path, modify_class_dictionary=False, delete_empty_sections=False,\n)\n\n# This path now points to an empty dictionary section, but it exists\nassert '11.11' in modified_dictionary.extract_nested_variable_paths()\nassert modified_dictionary.read_nested_value('11.11') == {}\n```\n\n### YamlConfig\nThe YamlConfig class extends the functionality of standard Python dataclasses by bundling them with methods to save and\nload class data to / from .yaml files. Primarily, this is helpful for classes that store configuration data for other\nruntimes: the data persists between runtimes and can be edited by hand (.yaml is human-readable).\n\n#### Saving and loading config data\nThis class is intentionally kept as minimalistic as possible. It does not do any input data validation and relies on\nthe user manually implementing that functionality, if necessary. The class is designed to be used as a parent for\ncustom dataclasses.\n\nAll class 'yaml' functionality is realized through the to_yaml() and from_yaml() methods:\n```\nfrom ataraxis_data_structures import YamlConfig\nfrom dataclasses import dataclass\nfrom pathlib import Path\nimport tempfile\n\n# First, the class needs to be subclassed as a custom dataclass\n@dataclass\nclass MyConfig(YamlConfig):\n    # Note the 'base' class initialization values. 
These defaults differ from the values used below, so the assertions at the end\n    # of this example only pass if the data is actually loaded from the .yaml file.\n    integer: int = 0\n    string: str = 'random'\n\n\n# Instantiates the class using custom values\nconfig = MyConfig(integer=123, string='hello')\n\n# Uses a temporary directory to generate the path that will be used to store the file\ntemp_dir = tempfile.mkdtemp()\nout_path = Path(temp_dir).joinpath(\"my_config.yaml\")\n\n# Saves the class as a .yaml file. If you want to see / edit the file manually, replace the example 'temporary'\n# directory with a custom directory\nconfig.to_yaml(config_path=out_path)\n\n# Ensures the file has been written\nassert out_path.exists()\n\n# Loads and re-instantiates the config as a dataclass using the data inside the .yaml file\nloaded_config = MyConfig.from_yaml(config_path=out_path)\n\n# Ensures that the loaded config data matches the original config\nassert loaded_config.integer == config.integer\nassert loaded_config.string == config.string\n```\n\n### SharedMemoryArray\nThe SharedMemoryArray class allows sharing data between multiple Python processes in a thread- and process-safe way.\nIt is designed to complement other common data-sharing methods, such as the multiprocessing and multithreading Queue\nclasses. The class implements a shared one-dimensional numpy array, allowing different processes to dynamically write\nand read any elements of the array independently of order and without mandatory 'consumption' of manipulated elements.\n\n#### Array creation\nThe SharedMemoryArray only needs to be initialized __once__, by the highest-scope process. That is, only the parent\nprocess should create the SharedMemoryArray instance and provide it as an argument to all child processes during\ntheir instantiation. The initialization process uses the input prototype numpy array and a unique buffer name to\ngenerate a shared memory buffer and fill it with the input array data. 
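The named-buffer mechanism this class builds on can be illustrated with Python's standard multiprocessing.shared_memory module. The sketch below is a simplified illustration of the general pattern, not the library's actual implementation; the buffer name 'demo_buffer' is an arbitrary example:

```python
# Simplified sketch of the shared-buffer pattern: a numpy array viewing a named
# shared memory block (illustrative only, not SharedMemoryArray's implementation).
import numpy as np
from multiprocessing.shared_memory import SharedMemory

prototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)

# Allocates a named buffer large enough to hold the prototype's data.
shm = SharedMemory(name="demo_buffer", create=True, size=prototype.nbytes)

# Creates a numpy view over the shared buffer and copies the prototype data in.
shared_view = np.ndarray(prototype.shape, dtype=prototype.dtype, buffer=shm.buf)
shared_view[:] = prototype[:]

# A second handle attached by name (as a child process would do) sees the same data.
attached = SharedMemory(name="demo_buffer")
mirror = np.ndarray(prototype.shape, dtype=prototype.dtype, buffer=attached.buf)
mirror_copy = mirror.copy()  # Copies the data out before detaching

# Numpy views must be released before the buffers are closed and destroyed.
del shared_view, mirror
attached.close()
shm.close()
shm.unlink()  # Destroys the buffer (required on Unix platforms)

assert np.array_equal(mirror_copy, prototype)
```

This also illustrates why the library warns about disconnection and destruction below: the underlying shared memory block outlives individual handles on Unix until it is explicitly unlinked.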
\n\n*__Note!__* The array dimensions and datatype cannot be changed after initialization: the resultant SharedMemoryArray\nwill always use the same shape and datatype.\n```\nfrom ataraxis_data_structures import SharedMemoryArray\nimport numpy as np\n\n# The prototype array and buffer name determine the layout of the SharedMemoryArray for its entire lifetime:\nprototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)\nbuffer_name = 'unique_buffer'\n\n# To initialize the array, use the create_array() method. DO NOT use the class initialization method directly!\nsma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)\n\n# The instantiated SharedMemoryArray object wraps an array with the same dimensions and data type as the prototype\n# and uses the unique buffer name to identify the shared memory buffer to connect to from different processes.\nassert sma.name == buffer_name\nassert sma.shape == prototype.shape\nassert sma.datatype == prototype.dtype\n```\n\n#### Array connection, disconnection and destruction\nEach __child__ process has to use the __connect()__ method to connect to the array before reading or writing data.\nThe parent process that created the array connects to it automatically during creation and does not need to\nreconnect. At the end of each connected process's runtime, you need to call the __disconnect()__ method to remove\nthe reference to the shared buffer:\n```\nimport numpy as np\n\nfrom ataraxis_data_structures import SharedMemoryArray\n\n# Initializes a SharedMemoryArray\nprototype = np.zeros(shape=6, dtype=np.uint64)\nbuffer_name = \"unique_buffer\"\nsma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)\n\n# This method has to be called before any child process that received the array can manipulate its data. 
While the\n# process that creates the array is connected automatically, calling the connect() method does not have negative\n# consequences.\nsma.connect()\n\n# You can verify the connection status of the array by using the is_connected property:\nassert sma.is_connected\n\n# This disconnects the array from the shared buffer. On Windows platforms, when all instances are disconnected from\n# the buffer, the buffer is automatically garbage-collected. Therefore, it is important to make sure the array has at\n# least one connected instance at all times, unless you no longer intend to use the class. On Unix platforms, the\n# buffer may persist even after being disconnected by all instances.\nsma.disconnect()  # For each connect(), there has to be a matching disconnect() statement\n\nassert not sma.is_connected\n\n# On Unix platforms, you may need to manually destroy the array by calling the destroy() method. This has no effect on\n# Windows (see above):\nsma.destroy()  # While not strictly necessary, for each create_array(), there should be a matching destroy() call.\n```\n\n#### Reading array data\nTo read from the array wrapped by the class, you can use the __read_data()__ method. The method can read\nindividual values and array slices and can return the data as NumPy or Python values:\n```\nimport numpy as np\nfrom ataraxis_data_structures import SharedMemoryArray\n\n# Initializes a SharedMemoryArray\nprototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)\nbuffer_name = \"unique_buffer\"\nsma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)\nsma.connect()\n\n# The method can be used to read individual elements from the array. 
By default, the data is read using the numpy datatype\n# of the array\noutput = sma.read_data(index=2)\nassert output == np.uint64(3)\nassert isinstance(output, np.uint64)\n\n# You can use the 'convert_output' flag to force the method to use Python datatypes for the returned data:\noutput = sma.read_data(index=2, convert_output=True)\nassert output == 3\nassert isinstance(output, int)\n\n# By default, the method acquires a Lock object before reading data, preventing multiple processes from working with\n# the array at the same time. For some use cases this can be detrimental (for example, when you are using the array to\n# share the data between multiple read-only processes). In this case, you can read the data without locking:\noutput = sma.read_data(index=2, convert_output=True, with_lock=False)\nassert output == 3\nassert isinstance(output, int)\n\n# To read a slice of the array, provide a tuple of two indices (for a closed range) or a tuple of one index (start of\n# an open range).\noutput = sma.read_data(index=(0,), convert_output=True, with_lock=False)\nassert output == [1, 2, 3, 4, 5, 6]\nassert isinstance(output, list)\n\n# The closed-range end-index is excluded from the sliced data\noutput = sma.read_data(index=(1, 4), convert_output=False, with_lock=False)\nassert np.array_equal(output, np.array([2, 3, 4], dtype=np.uint64))\nassert isinstance(output, np.ndarray)\n```\n\n#### Writing array data\nTo write data to the array wrapped by the class, use the __write_data()__ method. Its API is deliberately kept very\nsimilar to the read method:\n```\nimport numpy as np\nfrom ataraxis_data_structures import SharedMemoryArray\n\n# Initializes a SharedMemoryArray\nprototype = np.array([1, 2, 3, 4, 5, 6], dtype=np.uint64)\nbuffer_name = \"unique_buffer\"\nsma = SharedMemoryArray.create_array(name=buffer_name, prototype=prototype)\nsma.connect()\n\n# The data writing method has a similar API to the data reading method. It can write scalars and slices to the shared\n# memory array. 
It tries to automatically convert the input into the datatype used by the array as needed:\nsma.write_data(index=1, data=7, with_lock=True)\nassert sma.read_data(index=1, convert_output=True) == 7\n\n# Numpy inputs are automatically converted to the correct datatype if possible\nsma.write_data(index=1, data=np.uint8(9), with_lock=True)\nassert sma.read_data(index=1, convert_output=False) == np.uint8(9)\n\n# Writing by slice is also supported\nsma.write_data(index=(1, 3), data=[10, 11], with_lock=False)\nassert sma.read_data(index=(0,), convert_output=True) == [1, 10, 11, 4, 5, 6]\n```\n\n#### Using the array from multiple processes\nWhile all methods showcased above run from the same process, the main advantage of the class is that its methods work\njust as well when used from different Python processes:\n```\nimport numpy as np\nfrom ataraxis_data_structures import SharedMemoryArray\nfrom multiprocessing import Process\n\n\ndef concurrent_worker(shared_memory_object: SharedMemoryArray, index: int):\n    \"\"\"This worker will run in a different process.\n\n    It increments a shared memory array variable by 1 if the variable is even. Since each increment makes the value\n    odd, this process has to work together with a different process that\n    increments odd values. 
The process shuts down once the value reaches 200.\n\n    Args:\n        shared_memory_object: The SharedMemoryArray instance to work with.\n        index: The index inside the array to increment\n    \"\"\"\n    # Connects to the array\n    shared_memory_object.connect()\n\n    # Runs until the value becomes 200\n    while shared_memory_object.read_data(index) < 200:\n        # Reads data from the input index\n        shared_value = shared_memory_object.read_data(index)\n\n        # Checks if the value is even and below 200\n        if shared_value % 2 == 0 and shared_value < 200:\n            # Increments the value by one and writes it back to the array\n            shared_memory_object.write_data(index, shared_value + 1)\n\n    # Disconnects and terminates the process\n    shared_memory_object.disconnect()\n\n\nif __name__ == \"__main__\":\n    # Initializes a SharedMemoryArray\n    sma = SharedMemoryArray.create_array(\"test_concurrent\", np.zeros(5, dtype=np.int32))\n\n    # Generates multiple processes and uses each to repeatedly write and read data from different indices of the same\n    # array.\n    processes = [Process(target=concurrent_worker, args=(sma, i)) for i in range(5)]\n    for p in processes:\n        p.start()\n\n    # For each of the array indices, increments the value of the index if it is odd. 
Child processes increment even\n    # values and ignore odd ones, so the only way for this code to finish is if the child and parent processes take\n    # turns incrementing shared values until they reach 200\n    while np.any(sma.read_data((0, 5)) < 200):  # Runs as long as any value is below 200\n        # Loops over addressable indices\n        for i in range(5):\n            value = sma.read_data(i)\n            if value % 2 != 0 and value < 200:  # If the value is odd and below 200, increments the value by 1\n                sma.write_data(i, value + 1)\n\n    # Waits for the processes to join\n    for p in processes:\n        p.join()\n\n    # Verifies that all processes ran as expected and incremented their respective variables\n    assert np.all(sma.read_data((0, 5)) == 200)\n\n    # Cleans up the shared memory array after all processes are terminated\n    sma.disconnect()\n    sma.destroy()\n```\n### DataLogger\nThe DataLogger class sets up data logger instances running on isolated cores (Processes) and exposes a shared Queue\nobject for buffering and piping data from any other Process to the logger cores. Currently, the logger is only intended\nfor saving serialized byte arrays used by other Ataraxis libraries (notably: ataraxis-video-system and\nataraxis-transport-layer).\n\n#### Logger creation and use\nCurrently, only a single DataLogger can be initialized at a time. Initializing a second instance before the first\ninstance is garbage-collected will raise an error due to the internal binding of the\n[SharedMemoryArray](#sharedmemoryarray) class.\n```\nfrom ataraxis_data_structures import DataLogger, LogPackage\nimport numpy as np\nimport tempfile\nimport time as tm\nfrom pathlib import Path\n\n# Due to the internal use of Process classes, the logger has to be protected by the __main__ guard.\nif __name__ == '__main__':\n    # The Logger only needs to be provided with the path to the output directory to be used. 
However, it can be further\n    # customized to control the number of processes and threads used to log the data. See class docstrings for details.\n    tempdir = tempfile.TemporaryDirectory()  # A temporary directory for illustration purposes\n    logger = DataLogger(output_directory=Path(tempdir.name))  # The logger will create a new folder: 'tempdir/data_log'\n\n    # Before the logger starts saving data, its saver processes need to be initialized.\n    logger.start()\n\n    # To submit data to the logger, access its input_queue property and share it with all other Processes that need to\n    # log byte-serialized data.\n    logger_queue = logger.input_queue\n\n    # Creates and submits example data to be logged. Note, the data has to be packaged into a LogPackage dataclass.\n    source_id = 1\n    timestamp = tm.perf_counter_ns()  # The timestamp has to be an integer\n    data = np.array([1, 2, 3, 4, 5], dtype=np.uint8)\n    package = LogPackage(source_id, timestamp, data)\n    logger_queue.put(package)\n\n    # The timer has to be precise enough to resolve two consecutive datapoints (the timestamp has to differ for the two\n    # datapoints, so nanosecond or microsecond timers are best).\n    timestamp = tm.perf_counter_ns()\n    data = np.array([6, 7, 8, 9, 10], dtype=np.uint8)\n    # Same source id\n    package = LogPackage(source_id, timestamp, data)\n    logger_queue.put(package)\n\n    # Shutdown ensures all buffered data is saved before the logger is terminated. At the end of this runtime, there\n    # should be 2 .npy files: 1_0000000000000000001.npy and 1_0000000000000000002.npy.\n    logger.shutdown()\n\n    # Verifies two .npy files were created\n    assert len(list(Path(tempdir.name).glob('**/*.npy'))) == 2\n\n    # The logger also provides a method for compressing all .npy files into .npz archives. 
This method is intended to be\n    # called after the 'online' runtime is over to reduce the disk space occupied by the data.\n    logger.compress_logs(remove_sources=True)  # Ensures .npy files are deleted once compressed into the .npz archive\n\n    # The compression creates a single .npz file named after the source_id: 1_data_log.npz\n    assert len(list(Path(tempdir.name).glob('**/*.npy'))) == 0\n    assert len(list(Path(tempdir.name).glob('**/*.npz'))) == 1\n```\n___\n\n## API Documentation\n\nSee the [API documentation](https://ataraxis-data-structures-api-docs.netlify.app/) for a\ndetailed description of the methods and classes exposed by components of this library.\n___\n\n## Developers\n\nThis section provides installation, dependency, and build-system instructions for developers who want to\nmodify the source code of this library. Additionally, it contains instructions for recreating the conda environments\nthat were used during development from the included .yml files.\n\n### Installing the library\n\n1. Download this repository to your local machine using your preferred method, such as git-cloning.\n2. ```cd``` to the root directory of the project using your command line interface of choice.\n3. Install development dependencies. You have multiple options for satisfying this requirement:\n    1. **_Preferred Method:_** Use conda or pip to install\n       [tox](https://tox.wiki/en/latest/user_guide.html) or use an environment that has it installed and\n       call ```tox -e import``` to automatically import the OS-specific development environment included with the\n       source code into your local conda distribution. Alternatively, you can use ```tox -e create``` to create the\n       environment from scratch and automatically install the necessary dependencies using the pyproject.toml file.\n       See the [environments](#environments) section for other environment installation methods.\n    2. 
Run the ```python -m pip install .'[dev]'``` command to install development dependencies and the library using \n       pip. On some systems, you may need to use a slightly modified version of this command: \n       ```python -m pip install .[dev]```.\n    3. As long as you have an environment with [tox](https://tox.wiki/en/latest/user_guide.html) installed\n       and do not intend to run any code outside the predefined project automation pipelines, tox will automatically\n       install all required dependencies for each task.\n\n**Note:** When using tox automation, having a local version of the library may interfere with tox tasks that attempt\nto build the library using an isolated environment. While the problem is rare, our 'tox' pipelines automatically \ninstall and uninstall the project from its conda environment. This relies on a static tox configuration and will only \ntarget the project-specific environment, so it is advised to always create the \nproject environment via ```tox -e import``` or ```tox -e create``` before running other tox commands.\n\n### Additional Dependencies\n\nIn addition to installing the required Python packages, separately install the following dependencies:\n\n1. [Python](https://www.python.org/downloads/) distributions, one for each version that you intend to support. \n  Currently, this library supports versions 3.10 and above. The easiest way to get tox to work as intended is to have \n  separate Python distributions, but using [pyenv](https://github.com/pyenv/pyenv) is a good alternative too. \n  This is needed for the 'test' task to work as intended.\n\n### Development Automation\n\nThis project comes with a fully configured set of automation pipelines implemented using \n[tox](https://tox.wiki/en/latest/user_guide.html). Check the [tox.ini file](tox.ini) for details about \navailable pipelines and their implementation. 
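\n\nThe tox commands referenced in this section form a typical workflow. This is a minimal sketch, assuming tox is already installed and the commands are run from the project root; all commands come from the surrounding text:\n\n```shell\n# Import the OS-specific development environment shipped with the source code\ntox -e import\n\n# List all automation tasks defined in tox.ini\ntox list\n\n# Run the full pre-push checkout pipeline, parallelizing independent tasks\ntox --parallel\n```\n\nIndividual tasks can also be run on their own, e.g. ```tox -e create``` to build the environment from pyproject.toml dependencies.\n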
Alternatively, call ```tox list``` from the root directory of the project\nto see the list of available tasks.\n\n**Note!** All commits to this project have to successfully complete the ```tox``` task before being pushed to GitHub. \nTo minimize the runtime for this task, use ```tox --parallel```.\n\nFor more information, you can also see the 'Usage' section of the \n[ataraxis-automation project](https://github.com/Sun-Lab-NBB/ataraxis-automation) documentation.\n\n### Environments\n\nAll environments used during development are exported as .yml files and as spec.txt files to the [envs](envs) folder.\nThe environment snapshots were taken on each of the three explicitly supported OS families: Windows 11, OSx (M1) 14.5,\nand Linux Ubuntu 22.04 LTS.\n\n**Note!** Since the OSx environment was built for an M1 (Apple Silicon) platform, it may not work on Intel-based \nApple devices.\n\nTo install the development environment for your OS:\n\n1. Download this repository to your local machine using your preferred method, such as git-cloning.\n2. ```cd``` into the [envs](envs) folder.\n3. Use one of the installation methods below:\n    1. **_Preferred Method_**: Install [tox](https://tox.wiki/en/latest/user_guide.html) or use another\n       environment that already has tox installed and call ```tox -e import```.\n    2. **_Alternative Method_**: Run ```conda env create -f ENVNAME.yml``` or ```mamba env create -f ENVNAME.yml```. \n       Replace 'ENVNAME.yml' with the name of the environment you want to install (axbu_dev_osx for OSx, \n       axbu_dev_win for Windows, and axbu_dev_lin for Linux).\n\n**Hint:** While only the platforms mentioned above were explicitly evaluated, this project is likely to work on any \ncommon OS, but may require additional configuration steps.\n\nSince the release of [ataraxis-automation](https://github.com/Sun-Lab-NBB/ataraxis-automation) version 2.0.0, you can \nalso create the development environment from scratch via pyproject.toml dependencies. 
To do this, use \n```tox -e create``` from the project root directory.\n\n### Automation Troubleshooting\n\nMany packages used in 'tox' automation pipelines (uv, mypy, ruff) and 'tox' itself are prone to various failures. In \nmost cases, this is related to their caching behavior. Despite a considerable effort to disable caching behavior known \nto be problematic, in some cases it cannot or should not be eliminated. If you run into an unintelligible error with \nany of the automation components, deleting the corresponding cache directory (.tox, .ruff_cache, .mypy_cache, etc.) \nmanually or via a CLI command is very likely to fix the issue.\n___\n\n## Versioning\n\nWe use [semantic versioning](https://semver.org/) for this project. For the versions available, see the \n[tags on this repository](https://github.com/Sun-Lab-NBB/ataraxis-data-structures/tags).\n\n---\n\n## Authors\n\n- Ivan Kondratyev ([Inkaros](https://github.com/Inkaros))\n- Edwin Chen\n\n___\n\n## License\n\nThis project is licensed under the GPL3 License; see the [LICENSE](LICENSE) file for details.\n___\n\n## Acknowledgments\n\n- All Sun lab [members](https://neuroai.github.io/sunlab/people) for providing the inspiration and comments during the\n  development of this library.\n- The [numpy](https://github.com/numpy/numpy) project for providing low-level functionality for many of the \n  classes exposed through this library.\n- [dacite](https://github.com/konradhalas/dacite) and [pyyaml](https://github.com/yaml/pyyaml/) for jointly providing\n  the low-level functionality to read and write dataclasses to / from .yaml files.\n- The creators of all other projects used in our development automation pipelines [see pyproject.toml](pyproject.toml).\n\n---\n",
    "bugtrack_url": null,
    "license": "GNU GENERAL PUBLIC LICENSE Version 3, 29 June 2007  Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/> Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.  Preamble  The GNU General Public License is a free, copyleft license for software and other kinds of works.  The licenses for most software and other practical works are designed to take away your freedom to share and change the works.  By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users.  We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors.  You can apply it to your programs, too.  When we speak of free software, we are referring to freedom, not price.  Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.  To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights.  Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.  For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received.  You must make sure that they, too, receive or can get the source code.  And you must show them these terms so they know their rights.  
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it.  For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software.  For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions.  Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so.  This is fundamentally incompatible with the aim of protecting users' freedom to change the software.  The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable.  Therefore, we have designed this version of the GPL to prohibit the practice for those products.  If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users.  Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary.  To prevent this, the GPL assures that patents cannot be used to render the program non-free.  The precise terms and conditions for copying, distribution and modification follow.  TERMS AND CONDITIONS  0. Definitions.  \"This License\" refers to version 3 of the GNU General Public License.  \"Copyright\" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.  \"The Program\" refers to any copyrightable work licensed under this License.  
Each licensee is addressed as \"you\".  \"Licensees\" and \"recipients\" may be individuals or organizations.  To \"modify\" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy.  The resulting work is called a \"modified version\" of the earlier work or a work \"based on\" the earlier work.  A \"covered work\" means either the unmodified Program or a work based on the Program.  To \"propagate\" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy.  Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well.  To \"convey\" a work means any kind of propagation that enables other parties to make or receive copies.  Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying.  An interactive user interface displays \"Appropriate Legal Notices\" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License.  If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion.  1. Source Code.  The \"source code\" for a work means the preferred form of the work for making modifications to it.  \"Object code\" means any non-source form of a work.  
A \"Standard Interface\" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language.  The \"System Libraries\" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form.  A \"Major Component\", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.  The \"Corresponding Source\" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities.  However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work.  For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work.  The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source.  The Corresponding Source for a work in source code form is that same work.  2. Basic Permissions.  
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met.  This License explicitly affirms your unlimited permission to run the unmodified Program.  The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work.  This License acknowledges your rights of fair use or other equivalent, as provided by copyright law.  You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force.  You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright.  Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.  Conveying under any other circumstances is permitted solely under the conditions stated below.  Sublicensing is not allowed; section 10 makes it unnecessary.  3. Protecting Users' Legal Rights From Anti-Circumvention Law.  No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures.  
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures.  4. Conveying Verbatim Copies.  You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.  You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee.  5. Conveying Modified Source Versions.  You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions:  a) The work must carry prominent notices stating that you modified it, and giving a relevant date.  b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7.  This requirement modifies the requirement in section 4 to \"keep intact all notices\".  c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy.  This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged.  
This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it.  d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.  A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an \"aggregate\" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit.  Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.  6. Conveying Non-Source Forms.  You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:  a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange.  
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge.  c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source.  This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b.  d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge.  You need not require recipients to copy the Corresponding Source along with the object code.  If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source.  Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements.  e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.  
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work.  A \"User Product\" is either (1) a \"consumer product\", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling.  In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage.  For a particular product received by a particular user, \"normally used\" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product.  A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product.  \"Installation Information\" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source.  The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made.  If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information.  
But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM).  The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed.  Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network.  Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying.  7. Additional Terms.  \"Additional permissions\" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law.  If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions.  When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it.  (Additional permissions may be written to require their own removal in certain cases when you modify the work.)  You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission.  
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms:  a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or  b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or  c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or  d) Limiting the use for publicity purposes of names of licensors or authors of the material; or  e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or  f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors.  All other non-permissive additional terms are considered \"further restrictions\" within the meaning of section 10.  If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term.  If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying.  
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.  Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way.  8. Termination.  You may not propagate or modify a covered work except as expressly provided under this License.  Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11).  However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.  Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.  Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License.  If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10.  9. Acceptance Not Required for Having Copies.  You are not required to accept this License in order to receive or run a copy of the Program.  
Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance.  However, nothing other than this License grants you permission to propagate or modify any covered work.  These actions infringe copyright if you do not accept this License.  Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so.  10. Automatic Licensing of Downstream Recipients.  Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License.  You are not responsible for enforcing compliance by third parties with this License.  An \"entity transaction\" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations.  If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts.  You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License.  For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.  11. Patents.  A \"contributor\" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based.  
The work thus licensed is called the contributor's \"contributor version\".  A contributor's \"essential patent claims\" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version.  For purposes of this definition, \"control\" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License.  Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version.  In the following three paragraphs, a \"patent license\" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement).  To \"grant\" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party.  If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients.  
\"Knowingly relying\" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid.  If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it.  A patent license is \"discriminatory\" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License.  You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007.  Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law.  12. No Surrender of Others' Freedom.  
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License.  If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all.  For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program.  13. Use with the GNU Affero General Public License.  Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work.  The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such.  14. Revised Versions of this License.  The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time.  Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.  Each version is given a distinguishing version number.  If the Program specifies that a certain numbered version of the GNU General Public License \"or any later version\" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation.  If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. 
 If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program.  Later license versions may give you additional or different permissions.  However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version.  15. Disclaimer of Warranty.  THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.  EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM \"AS IS\" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU.  SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.  16. Limitation of Liability.  IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.  17. Interpretation of Sections 15 and 16.  
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee.  END OF TERMS AND CONDITIONS  How to Apply These Terms to Your New Programs  If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms.  To do so, attach the following notices to the program.  It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the \"copyright\" line and a pointer to where the full notice is found.  <one line to give the program's name and a brief idea of what it does.> Copyright (C) <year>  <name of author>  This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.  This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.  You should have received a copy of the GNU General Public License along with this program.  If not, see <https://www.gnu.org/licenses/>.  Also add information on how to contact you by electronic and paper mail.  If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:  <program>  Copyright (C) <year>  <name of author> This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. 
This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.  The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License.  Of course, your program's commands might be different; for a GUI interface, you would use an \"about box\".  You should also get your employer (if you work as a programmer) or school, if any, to sign a \"copyright disclaimer\" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <https://www.gnu.org/licenses/>.  The GNU General Public License does not permit incorporating your program into proprietary programs.  If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library.  If this is what you want to do, use the GNU Lesser General Public License instead of this License.  But first, please read <https://www.gnu.org/licenses/why-not-lgpl.html>.",
    "summary": "Provides classes and structures for storing, manipulating, and sharing data between Python processes.",
    "version": "1.1.4",
    "project_urls": {
        "Documentation": "https://ataraxis-data-structures-api-docs.netlify.app/",
        "Homepage": "https://github.com/Sun-Lab-NBB/ataraxis-data-structures"
    },
    "split_keywords": [
        "ataraxis",
        " data-manipulation",
        " data-structures",
        " nested-dictionary",
        " shared-memory"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "295038af1e4a31e96fed957ddea1de86c6a662a9d27f518a7487f8cc3fb6e11b",
                "md5": "d4c9bb86f86522b64a6cd3c2c434d642",
                "sha256": "388351229614415b762ddc3380b40999691f7af9457e6946c171d097f4f6c506"
            },
            "downloads": -1,
            "filename": "ataraxis_data_structures-1.1.4-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "d4c9bb86f86522b64a6cd3c2c434d642",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 115660,
            "upload_time": "2024-11-18T19:49:38",
            "upload_time_iso_8601": "2024-11-18T19:49:38.903890Z",
            "url": "https://files.pythonhosted.org/packages/29/50/38af1e4a31e96fed957ddea1de86c6a662a9d27f518a7487f8cc3fb6e11b/ataraxis_data_structures-1.1.4-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "e532ceb3cde2f38e09e477def2472b8708e76724634e7d52532ab3a32f1e3129",
                "md5": "8d96597a2aef0dc75a40ead9d3215313",
                "sha256": "b01c62aa7fed0451d1f087ca4a052287694906201195558d3a9bed465f761f49"
            },
            "downloads": -1,
            "filename": "ataraxis_data_structures-1.1.4.tar.gz",
            "has_sig": false,
            "md5_digest": "8d96597a2aef0dc75a40ead9d3215313",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 152225,
            "upload_time": "2024-11-18T19:49:40",
            "upload_time_iso_8601": "2024-11-18T19:49:40.809606Z",
            "url": "https://files.pythonhosted.org/packages/e5/32/ceb3cde2f38e09e477def2472b8708e76724634e7d52532ab3a32f1e3129/ataraxis_data_structures-1.1.4.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-11-18 19:49:40",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Sun-Lab-NBB",
    "github_project": "ataraxis-data-structures",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "tox": true,
    "lcname": "ataraxis-data-structures"
}
        