| Name | pydbase |
| Version | 1.5.2 |
| Summary | High speed database with key / data in python. |
| Home page | https://github.com/pglen/pydbase |
| Author | Peter Glen |
| Maintainer | None |
| Docs URL | None |
| Requires Python | >=3 |
| License | None |
| Upload time | 2024-04-10 07:28:17 |
# pydbase
## High speed database with key / data
#### see: blockchain functions at the end
The motivation was to create a no-frills way of saving / retrieving data.
It is fast; the timing test shows it is an order of magnitude faster than
most mainstream databases. This is due to the engine's simplicity: it avoids
expensive computations in favor of quickly saving data.

### Fast data save / retrieve

Mostly ready for production. All tests pass. Please use caution, as this is new.
The command line tester can drive most aspects of this API, and it is fairly
complete. It is also a good way to see the API / module in action.
## API
The module 'twincore' uses two data files and a lock file. The file
names are generated from the base name of the data file:
name.pydb for the data, name.pidx for the index, and name.lock for the lock file.
In case of a frozen process, the lock file times out after xx seconds
and the lock is broken. If the locking process (whose id is stored in the
lock file) no longer exists, the lock breaks immediately.
Example DB creation:

    core = twincore.TwinCore(datafile_name)

Some basic ops:

    dbsize = core.getdbsize()

    core.save_data(keyx, datax)
    rec_arr = core.retrieve(keyx, ncount)
    print("rec_arr", rec_arr)
Example chain DB creation:

    core = twinchain.TwinChain(datafile_name)
    core.append(keyx, datax)
    recnum = core.getdbsize()
    rec = core.get_payload(recnum)
    print(recnum, rec)
### Setting verbosity and debug level:

    twincore.core_quiet = quiet
    twincore.core_verbose = verbose
    twincore.core_pgdebug = pgdebug

(Setting these before creating the database object will also display messages from the constructor.)
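A minimal sketch of this, assuming integer levels as the CLI flags suggest; the specific values and the file name are illustrative only:

    import twincore

    # Set module-level flags before constructing the core, so the
    # constructor's own messages are shown as well.
    twincore.core_verbose = 1      # more detailed printing
    twincore.core_pgdebug = 2      # debug level (the CLI uses 0-10)

    core = twincore.TwinCore("customers.pydb")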
### Structure of the data:

    32 byte header, starting with FILESIG

    4 bytes     4 bytes            4 bytes           Variable
    ------------------------------------------------------------
    RECSIG      Hash_of_key        Len_of_key        DATA_for_key
    RECSEP      Hash_of_payload    Len_of_payload    DATA_for_payload
        .
        .
    RECSIG      Hash_of_key        Len_of_key        DATA_for_key
    RECSEP      Hash_of_payload    Len_of_payload    DATA_for_payload

    where:

    RECSIG="RECB"   (record begins here)
    RECSEP="RECS"   (record separator)
    RECDEL="RECX"   (record deleted)

Deleted records are marked by mutating the RECSIG from RECB to RECX.
Vacuum will remove the deleted records. Make sure your database has no
pending operations, and no non-atomic operations in flight, when vacuuming
(like: find keys, then delete keys, in two separate steps).
New data is appended to the end; no duplicate filtering is done.
Retrieval searches in reverse, so the latest record with a given key
is found first. Most of the time this is the behavior we want, and it
also means the record history is kept, which is equally desirable.
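As an illustration of the layout above, the sketch below walks the records in a data file. It is only a reading aid, not the library's own reader: the byte order of the hash and length fields and the handling of the 32-byte header are assumptions here, so treat the decoding as approximate.

    import struct

    def walk_records(path):
        """Yield (sig, key, payload, deleted) tuples from a pydbase data file.
        Assumes little-endian 4-byte hash/length fields (not confirmed by the docs)."""
        with open(path, "rb") as fh:
            fh.seek(32)                        # skip the 32-byte file header (FILESIG ...)
            while True:
                head = fh.read(12)             # RECSIG + Hash_of_key + Len_of_key
                if len(head) < 12:
                    break
                sig, key_hash, key_len = struct.unpack("<4sII", head)
                key = fh.read(key_len)
                sep_head = fh.read(12)         # RECSEP + Hash_of_payload + Len_of_payload
                if len(sep_head) < 12:
                    break
                sep, pay_hash, pay_len = struct.unpack("<4sII", sep_head)
                payload = fh.read(pay_len)
                yield sig, key, payload, sig == b"RECX"

    for sig, key, payload, deleted in walk_records("pydbase.pydb"):
        print(sig, key, len(payload), "deleted" if deleted else "")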
## Usage:
### The DB exerciser
The file dbaseadm.py exercises most of the twincore functionality. It also
provides examples of how to drive it.
The command line utility's help response:

    Usage: dbaseadm.py [options] [arg_key arg_data]
      -h          Help (this screen)        -|-  -E          Replace record in place
      -V          Print version             -|-  -q          Quiet on, less printing
      -d          Debug level (0-10)        -|-  -v          Increment verbosity level
      -r          Randomize data            -|-  -w          Write random record(s)
      -z          Dump backwards(s)         -|-  -i          Show deleted record(s)
      -U          Vacuum DB                 -|-  -R          Re-index / recover DB
      -I          DB Integrity check        -|-  -c          Set check integrity flag
      -s          Skip to count recs        -|-  -K          List keys only
      -S          Print num recs            -|-  -m          Dump data to console
      -o offs     Get data from offset      -|-  -G num      Get record by number
      -F subkey   Find by sub str           -|-  -g num      Get number of recs.
      -k keyval   Key to save               -|-  -a str      Data to save
      -y keyval   Find by key               -|-  -D keyval   Delete by key
      -n num      Number of records         -|-  -t keyval   Retrieve by key
      -p num      Skip number of recs       -|-  -u recnum   Delete at recnum
      -l lim      Limit get records         -|-  -e offs     Delete at offset
      -Z keyval   Get record position       -|-  -X max      Limit recs on delete
      -x max      Limit max number of records to get (default: 1)
      -f file     Input or output file (default: 'pydbase.pydb')
The verbosity / debug level influences the amount of data presented.
Use quotes for multi-word arguments.
### The chain adm utility:

    Usage: chainadm.py [options]
      Options: -a data     append data to the end of chain
               -g recnum   get record
               -k reckey   get record by key/header
               -r recnum   get record header
               -d level    debug level
               -n          append / show number of records
               -e          override header
               -t          print record's UUID date
               -s          skip count
               -x max      record count to list
               -m          dump chain data
               -c          check data integrity
               -i          check link integrity
               -S          get db size
               -v          increase verbosity
               -h          help (this screen)
### Comparison to other databases:
This comparison shows the time it takes to write 500 records.
In the tests the record size is about the same (Hello, 1 /vs/ "Hello", 1).
Please see sqlite_test.sql for the details of the data output.
The test can be repeated by running the 'time.sh' script file.
Please note that time.sh clears all files in test_data/* for a fair test.

    dbaseadm time test, writing 500 records ...
    real    0m0.108s
    user    0m0.068s
    sys     0m0.040s
    chainadm time test, writing 500 records ...
    real    0m0.225s
    user    0m0.154s
    sys     0m0.071s
    sqlite time test, writing 500 records ...
    real    0m1.465s
    user    0m0.130s
    sys     0m0.292s
Please note that the sqlite engine has to do a lot of parsing, which we
skip; that is why pydbase is more than an order of magnitude faster,
even with all the hashing for the data integrity check.
### Saving more complex data
The database saves a key / value pair. However, the key can be mutated
to carry more sophisticated data, for example by adding a prefix string
(like CUST_ for customer data / details). The key can also be made
unique by adding a UUID to it, or by using pyvpacker to construct it (see below).
The data may consist of any text / binary. The library pyvpacker can pack
any data into a string; it is installed as a dependency, and a copy of
pyvpacker can be obtained from pip or GitHub.
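A minimal sketch of this idea, using the save / retrieve calls shown earlier; the CUST_ prefix, the JSON encoding of the value, and the file name are illustrative choices, not something the library mandates (pyvpacker could be used instead of json):

    import json
    import uuid

    import twincore

    core = twincore.TwinCore("customers.pydb")

    # Prefix the key to mark the record type, and append a UUID to make it unique.
    key = "CUST_" + str(uuid.uuid4())

    # Pack a more complex value into a string (json here; pyvpacker works too).
    data = json.dumps({"name": "Jane Doe", "balance": 125})

    core.save_data(key, data)

    # The latest record saved under this key is found first.
    rec_arr = core.retrieve(key, 1)
    print(rec_arr)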
## The pyvpacker.py module:
This module can pack arbitrary Python data into a string, which can then be
used to store anything in pydbase's key / data sections. Note that the
data types are limited to Python's native data types and compounds thereof.

Types: (int, real, str, array, hash)
Example from running testpacker.py:

    org:      (1, 2, 'aa', ['bb', b'dd'])
    packed:   pg s4 'iisa' i4 1 i4 2 s2 'aa' a29 'pg s2 'sb' s2 'bb' b4 'ZGQ=' '
    unpacked: [1, 2, 'aa', ['bb', b'dd']]
    rec_arr:  pg s4 'iisa' i4 1 i4 2 s2 'aa' a29 'pg s2 'sb' s2 'bb' b4 'ZGQ=' '
    rec_arr_upacked: [1, 2, 'aa', ['bb', b'dd']]

(Note: the decode returns an array of data; use data[0] to get the original.)
There is also the option of using pyvpacker on the key itself. Because the key
is identified by its hash, there is no speed penalty. Note that the hash is a
32-bit one, so collisions are possible, although unlikely. To compensate,
make sure you compare the actual key with the returned key.
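A small sketch of that comparison, assuming each returned record is a (key, data) pair as in the in-place update example further below; depending on how the core returns keys, a bytes/str conversion may be needed:

    import twincore

    core = twincore.TwinCore("customers.pydb")   # same illustrative file as above

    wanted = "CUST_0001"                         # illustrative key
    rec_arr = core.retrieve(wanted, 10)

    # Guard against 32-bit key-hash collisions: keep only records whose
    # stored key really matches the key we asked for.
    matches = [rec for rec in rec_arr if rec[0] == wanted]
    print(matches)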
## Maintenance
The DB can rebuild its index and purge (vacuum) all deleted records. In the
test utility the options are:

    ./dbaseadm.py -U       for vacuum (add -v for verbosity)

The database is rebuilt, the deleted entries are purged, and the damaged data
(if any) is saved into a separate file, created with the same base name as the
database, with the '.perr' extension.

    ./dbaseadm.py -R       for re-index

The index is recreated from the current file contents. This is useful if
the index is lost (for example after copying the data file only).
If there is a data file without an index, re-indexing is invoked
automatically. In case of a deleted data file, pydbase will recognize
the dangling index and nuke it by renaming it to
orgfilename.pidx.dangle (Tue 07.Feb.2023: now it is just deleted).

The database grows with every record added to it. It does not check whether
a particular record already exists; it adds the new copy of the record at
the end.
Retrieving starts from the end, so the data retrieved (for a particular key)
is the last record saved. All the other records for this key are also there,
in chronological (save) order, so record history is archived by default.

To clean up the old record history, one may delete all the records with
the same key, except the last one.
## Blockchain implementation
The database is extended with a blockchain implementation. The new class
is called twinchain, and it is derived from twincore.
To drive the blockchain, just use the append method. The database calculates
all the hashes and integrates the new item into the existing chain, with the
new item getting a backlink field. This field is calculated from the previous
record's hash and the previous record's frozen date. This ensures that
identical data will still have a different hash, so data cannot be anticipated
based upon its hash alone. The hash is 256 bits, and assumed to be very secure.
To drive it:

    core = twinchain.TwinChain()    # Takes an optional file name
    core.append("The payload")      # Arbitrary data
Blockchain layer on top of twincore:

    prev          curr
                  record
    | Time Now | | Time Now | | Time Now |
    | hash256  | | hash256  | | hash256  |
    | Header   | | Header   | | Header   |
    | Payload  | | Payload  | | Payload  |
    | Backlink | | Backlink | | Backlink |
    |---->-----| |---->-----| |------ ...

The hashed sum of these fields is saved into the next record's backlink.
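A conceptual sketch of the backlink idea follows. It mirrors the description above but is not the library's exact byte layout or field encoding; the field concatenation and timestamp format are assumptions:

    import hashlib
    from datetime import datetime, timezone

    # Each new record's backlink is derived from the previous record's hash
    # and its frozen timestamp, so identical payloads still produce different
    # hashes along the chain.
    def make_backlink(prev_hash_hex: str, prev_frozen_date: str) -> str:
        return hashlib.sha256((prev_hash_hex + prev_frozen_date).encode()).hexdigest()

    def record_hash(payload: bytes, backlink: str) -> str:
        return hashlib.sha256(payload + backlink.encode()).hexdigest()

    prev_hash = "00" * 32                      # genesis placeholder
    prev_date = datetime.now(timezone.utc).isoformat()
    for payload in (b"one", b"two", b"one"):   # identical payloads hash differently
        backlink = make_backlink(prev_hash, prev_date)
        prev_hash = record_hash(payload, backlink)
        prev_date = datetime.now(timezone.utc).isoformat()
        print(payload, prev_hash[:16], backlink[:16])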
## Integrity check
There are two levels: level one checks whether the record checksums are
correct; level two checks whether the linkage is correct.
## The in-place update
The save operation has a flag for in-place update. This is useful for updating
without extending the data storage, for example for counts and timers. The
in-place update operates as a record overwrite, and the new data has to be
equal in length to the existing record. If shorter, the record is padded to
the original data's length by appending spaces. Below is an example that
updates a counter in the database; it executes in the microsecond range.

    dbcore = twinchain.TwinCore(filename)
    rec = dbcore.get_rec(xxx)                   # rec[0] is the key, rec[1] the data
    # Increment count:
    arr = self.packer.decode_data(rec[1])[0]    # self.packer: a pyvpacker packer (see above)
    arr[0] = "%05d" % (int(arr[0]) + 1)         # fixed width keeps the record length equal
    strx = str(self.packer.encode_data("", arr))
    ret = dbcore.save_data(rec[0], strx, True)  # True selects the in-place update
If the new data is longer than the in-place data, a new record is created,
just like in a normal save. This new, longer record then accommodates
subsequent in-place requests.
It is recommended to produce a fixed record size for consistent results
(see the sprintf-style Python % operator in the example above).
## PyTest
The pytest suite passes with no errors.
The following (and more) tests are created / executed:
### Test results:

    ============================= test session starts ==============================
    platform linux -- Python 3.10.12, pytest-7.4.3, pluggy-1.0.0
    rootdir: /home/peterglen/pgpygtk/pydbase
    collected 44 items

    test_acreate.py ...        [  6%]
    test_bigdata.py .          [  9%]
    test_bindata.py .          [ 11%]
    test_chain.py .            [ 13%]
    test_chain_data.py .       [ 15%]
    test_chain_link.py ..      [ 20%]
    test_del.py .              [ 22%]
    test_dump.py .             [ 25%]
    test_find.py ..            [ 29%]
    test_findrec.py ..         [ 34%]
    test_getoffs.py ...        [ 40%]
    test_getrec.py .           [ 43%]
    test_handles.py .....      [ 54%]
    test_inplace.py ...        [ 61%]
    test_integrity.py .        [ 63%]
    test_list.py ..            [ 68%]
    test_lockrel.py .          [ 70%]
    test_multi.py ..           [ 75%]
    test_packer.py ......      [ 88%]
    test_reindex.py .          [ 90%]
    test_search.py ...         [ 97%]
    test_vacuum.py .           [100%]

    ============================== 44 passed in 0.57s ==============================
## History

    1.1    Tue 20.Feb.2024    Initial release
    1.2.0  Mon 26.Feb.2024    Moved pip home to pydbase/
    1.4.0  Tue 27.Feb.2024    Added pgdebug
    1.4.2  Wed 28.Feb.2024    Fixed multiple instances
    1.4.3  Wed 28.Feb.2024    ChainAdm added
    1.4.4  Fri 01.Mar.2024    Tests for chain functions
    1.4.5  Fri 01.Mar.2024    Misc fixes
    1.4.6  Mon 04.Mar.2024    Vacuum count on vacuumed records
    1.4.7  Tue 05.Mar.2024    In place record update
    1.4.8  Sat 09.Mar.2024    Added new locking mechanism
    1.4.9  Mon 01.Apr.2024    Updated to run on MSYS2, new locking
    1.5.0  Tue 02.Apr.2024    Cleaned, pip upload
    1.5.1  Wed 10.Apr.2024    Dangling lock .. fixed
// EOF