# pysteve

- **Version:** 0.0.18
- **Summary:** Helper and setup functions for people named Steve
- **Author:** Stephen Hilton
- **License:** MIT License, Copyright (c) 2023 Stephen Hilton
- **Requires Python:** >=3.10
- **Keywords:** helper, setup, boilerplate, data
- **Uploaded:** 2024-04-06 18:24:27
- **Requirements:** No requirements were recorded.

# File: pySteve.py
pySteve is a mish-mash collection of useful functions, rather than an application. It is particularly useful to people named Steve.

## Functions:
### chunk_lines
**Breaks a list of string lines into a list of lists of string lines, based on supplied markers.**

Accepts a list of lines (say, from reading a file) and separates those lines into separate lists based on boundaries discovered by running each line against a list of marker functions (usually lambdas). Each marker function is passed the line in sequence, and must return True or False to indicate whether the line is the beginning of a new section. If ANY of the marker functions return True, the line is considered the first of a new chunk (list of string lines). At the end, the new list of lists of string lines is returned. For example: after opening and reading the lines of a python file, you could send that list of lines to this function along with the two lambda functions ` lambda line : str(line).startswith('def') ` and ` lambda line : str(line).startswith('class') ` to split the list of lines by python function.
#### Arguments:
- list_of_lines:list = []
- newchunck_marker_funcs:list = []
#### Argument Details:
- list_of_lines (list): List of string lines to match markers against and group.
- newchunck_marker_funcs (list): List of functions, applied to each line to mark new chunk boundaries.
#### Returns:
- list: A list of lists of string lines.
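The chunking behavior described above can be sketched as follows (a minimal re-implementation for illustration; the library's actual code may differ):

```python
def chunk_lines(list_of_lines, newchunck_marker_funcs):
    """Group lines into chunks; a new chunk starts when any marker returns True."""
    chunks, current = [], []
    for line in list_of_lines:
        if any(marker(line) for marker in newchunck_marker_funcs) and current:
            chunks.append(current)
            current = []
        current.append(line)
    if current:
        chunks.append(current)
    return chunks

lines = ["import os", "", "def foo():", "    pass", "def bar():", "    pass"]
markers = [lambda line: str(line).startswith("def"),
           lambda line: str(line).startswith("class")]
chunks = chunk_lines(lines, markers)
# → [['import os', ''], ['def foo():', '    pass'], ['def bar():', '    pass']]
```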
---
### db_safe_name
**Accepts a string and returns a DB-column-safe string, with special characters removed and reserved words substituted if a substitution list is provided.**

The **kwargs (called **reserved_words_subs) are added to the reserved-word substitution dictionary, allowing custom reserved-word overrides. For example, for Notion inserts you may want to translate the columns: parent='Parent_ID', object='Object_Type'
#### Arguments:
- original_column_name:str
- reserved_word_prefix:str=None
- **reserved_words
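One plausible sketch of this behavior, for illustration only — the substitution map, the regex, and the reserved-word handling here are assumptions, not the library's actual implementation:

```python
import re

# Example substitution map (hypothetical defaults for illustration)
RESERVED_WORDS = {"parent": "Parent_ID", "object": "Object_Type"}

def db_safe_name(original_column_name, reserved_word_prefix=None, **reserved_words_subs):
    """Strip special characters; substitute or prefix reserved words."""
    subs = {**RESERVED_WORDS, **reserved_words_subs}
    # Replace runs of non-alphanumeric characters with a single underscore
    safe = re.sub(r"[^0-9A-Za-z_]+", "_", original_column_name).strip("_")
    if safe.lower() in subs:
        return subs[safe.lower()]
    if reserved_word_prefix and safe.lower() in ("select", "from", "where"):
        return f"{reserved_word_prefix}{safe}"
    return safe

db_safe_name("Created At!")   # → 'Created_At'
db_safe_name("parent")        # → 'Parent_ID'
```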
---
### dict_soft_update
**Simply adds one dictionary to another like a dict.update(), except it will NOT overwrite values if found in dict_main.**

#### Arguments:
- dict_main:dict
- dict_addifmissing:dict={}
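The described behavior maps directly onto dict.setdefault(); a minimal sketch:

```python
def dict_soft_update(dict_main, dict_addifmissing={}):
    """Like dict_main.update(), but keys already in dict_main are NOT overwritten."""
    for key, value in dict_addifmissing.items():
        dict_main.setdefault(key, value)
    return dict_main

settings = {"env": "prod"}
dict_soft_update(settings, {"env": "dev", "debug": False})
# settings is now {'env': 'prod', 'debug': False} — 'env' was preserved
```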
---
### envfile_load
**Returns a dictionary containing name/value pairs pulled from the supplied .env formatted shell (zsh) script.**

If load_path does not match a file exactly, it is treated as a pattern and matched using the supplied template logic (unless exact_match_only = True), with the matching file selected by the load_path_sorted logic (first or last). Several synonyms are accepted: [first | earliest] or [last | latest]
#### Arguments:
- load_path:Path='.'
- load_path_sorted:str = 'latest'
- exact_match_only:bool = False
#### Argument Details:
- load_path (Path): file to load from, with template logic allowed.
 - load_path_sorted (str): If load_path was a template, indicates which file to select, based on filename sort.
 - exact_match_only (bool): Disallow filename templating, and require exact filename only.
#### Returns:
- dict: the dictionary name/value parsed from the supplied file.
---
### envfile_save
**Saves a dict as a shell (zsh) env-loadable file, adding folders, file iterations, and substitutions as needed.**

The save_path allows for substitution of any name in the dict (within curly brackets) with the value of that entry. For example, if dict = {'date':'2024-01-01'}, save_path = '~/some/path/file_{date}.sh' will become '~/some/path/file_2024-01-01.sh'. Additionally, if the full substituted file exists, the process will append an iterator (1,2,3) to the end to preserve uniqueness. The final save_path is returned from the function. Because the file needs to be compatible with shell .env files, special characters (spaces, newlines, etc) are removed from all 'names' in the save_dict, and any values with newlines will be wrapped in a docstring using the string saved in variable 'docstring_eom_marker'
#### Arguments:
- save_path:Path
- save_dict:dict = {}
- iteration_zero_pad:int = 6
#### Argument Details:
- save_path (Path): Where to save the file, with substitution logic.
 - save_dict (dict): Dictionary containing file content.
#### Returns:
- Path: Returns the final save_path used (after substitution and iteration).
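The save_path substitution and iteration logic described above can be sketched like this (an illustrative re-implementation of just the path handling, under the assumption that the iterator is appended before the file extension):

```python
import tempfile
from pathlib import Path

def iterate_save_path(save_path: str, save_dict: dict, iteration_zero_pad: int = 6) -> Path:
    """Substitute {name} placeholders from save_dict into save_path, then
    append a zero-padded iterator if the resulting file already exists."""
    path = Path(save_path.format(**save_dict)).expanduser()
    candidate, i = path, 0
    while candidate.exists():
        i += 1
        candidate = path.with_name(f"{path.stem}.{str(i).zfill(iteration_zero_pad)}{path.suffix}")
    return candidate

tmp = Path(tempfile.mkdtemp())
first = iterate_save_path(str(tmp / "file_{date}.sh"), {"date": "2024-01-01"})
first.touch()  # simulate the file being saved
second = iterate_save_path(str(tmp / "file_{date}.sh"), {"date": "2024-01-01"})
# first.name == 'file_2024-01-01.sh'; second.name == 'file_2024-01-01.000001.sh'
```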
---
### generate_markdown_doc
**Parses python files to automatically generate simple markdown documentation (generated this document).**

Parses the file at source_path, or if source_path is a folder, iterates alphabetically through all .py files, generating markdown content by introspecting all functions, classes, and methods, with a heavy focus on google-style docstrings. It always returns the markdown as a string, and if dest_filepath is specified, also saves to that filename. By default it replaces dest_filepath; set append=True to append the markdown to the end instead. This allows freedom to choose which files/folders to document, and to structure the markdown files however you'd like. It also allows for a simple way to auto-generate README.md files for small projects. Todo: add class support; change source_path to a list of files/folders.
#### Arguments:
- source_path:Path = './src'
- dest_filepath:Path = './README.md'
- append:bool = False
- include_dunders:bool = False
- py_indent_spaces:int = 4
#### Argument Details:
- source_path (Path): Path to a file or folder containing .py files with which to create markdown.
 - dest_filepath (Path): If specified, will save the markdown string to the file specified.
 - append (bool): Whether to append (True) or overwrite (False, default) the dest_filepath.
 - include_dunders (bool): Whether to include (True) or exclude (False, default) files beginning with double-underscores (dunders).
 - py_indent_spaces (int): Number of spaces which constitute a python indent. Defaults to 4.
#### Returns:
- str: Markdown text generated.
---
### infer_datatype
**Infers the primitive data type based on value characteristics, and returns a tuple of (type, typed_value). Currently supports float, int, str, and list (with typed elements via recursive calls).**

#### Arguments:
- value
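A hypothetical sketch of the inference order (int before float, lists handled recursively); the library's real precedence rules may differ:

```python
def infer_datatype(value):
    """Infer (type, typed_value) from a value's characteristics."""
    if isinstance(value, list):
        # Recursively type each element, keeping only the typed values
        return (list, [infer_datatype(v)[1] for v in value])
    text = str(value).strip()
    try:
        return (int, int(text))      # try the narrowest numeric type first
    except ValueError:
        pass
    try:
        return (float, float(text))
    except ValueError:
        pass
    return (str, text)               # fall back to string

infer_datatype("42")        # → (<class 'int'>, 42)
infer_datatype("3.14")      # → (<class 'float'>, 3.14)
infer_datatype(["1", "a"])  # → (<class 'list'>, [1, 'a'])
```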
---
### logger_setup
**Sets up a logger in a consistent, default way with both stream (console) and file handlers.**

#### Arguments:
- application_name:str=None
- console_level=logging.INFO
- filepath_level=logging.DEBUG
- filepath:Path='./logs/{datetime}--{application_name}.log'
- **kwargs
#### Argument Details:
- application_name (str): Name of the application logger. Leave None for root logger.
 - console_level (logging.level): Logging level for stream handler (debug, info, warning, error). Set None to disable.
 - filepath_level (logging.level): Logging level for file handler (debug, info, warning, error). Set None to disable.
 - filepath (Path): Path location for the file handler log files, supporting substitutions. Set None to disable.
 - kwargs: Any other keyword arguments are treated as {name} = value substitutions for the filepath, if enabled.
#### Returns:
- (logger): a reference to the logging object.
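The setup described above can be approximated with stdlib logging alone (a sketch, not the library's code; the filepath substitution format is an assumption):

```python
import logging
from datetime import datetime
from pathlib import Path

def logger_setup(application_name=None, console_level=logging.INFO,
                 filepath_level=logging.DEBUG,
                 filepath='./logs/{datetime}--{application_name}.log', **kwargs):
    """Configure a logger with optional console and file handlers."""
    logger = logging.getLogger(application_name)
    logger.setLevel(logging.DEBUG)
    if console_level is not None:
        console = logging.StreamHandler()
        console.setLevel(console_level)
        logger.addHandler(console)
    if filepath is not None and filepath_level is not None:
        subs = {'datetime': datetime.now().strftime('%Y%m%d_%H%M%S'),
                'application_name': application_name or 'root', **kwargs}
        logpath = Path(filepath.format(**subs))
        logpath.parent.mkdir(parents=True, exist_ok=True)  # create ./logs as needed
        fileh = logging.FileHandler(logpath)
        fileh.setLevel(filepath_level)
        logger.addHandler(fileh)
    return logger

log = logger_setup('myapp')
log.info('hello')
```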
---
### notion_get_api_key
**Sets and/or returns the Notion API Key, either directly or from an envfile.**

#### Arguments:
- api_key:str=None
- envfile='.'
#### Argument Details:
- api_key (str): Notion API Key for connecting account.
 - envfile (Path|str|dict): Either the path to an envfile, or a dictionary as produced by envfile_load()
#### Returns:
- str: The Notion API Key as provided, or parsed from env file.
---
### notion_translate_type
**Translates Notion datatypes into other systems' data types, such as python or DBs. Also indicates whether the Notion type is considered an object (needing further drill-downs) or a primitive type (like str or int).**

The returned dictionary has several entries: "db" for the database translation, "py" for python, "pk" for the Notion primary key field of that object type, and "obj", a boolean indicating whether the data type returns an object that requires additional drill-downs. For example, a Notion "page" is an object that can contain other types, whereas "text" is a primitive data type. The type map can be extended at runtime using the **kwargs.
#### Arguments:
- notion_type:str
- decimal_precision:int=2
- varchar_size:int=None
- **kwargs
#### Argument Details:
- notion_type (str): Notion dataset type (text, user, status, page, etc.).
 - decimal_precision (int): For types that translate into DB type Decimal, this sets the precision. Omitted if None.
 - varchar_size (int): For types that translate into DB type Varchar, this sets the size. Omitted if None.
 - kwargs: Added to the typemap, if adheres to the format name={"pk":"...", "db":"...", "py":"...", "obj":bool }
#### Returns:
- dict: Dictionary with various type translations for the notion type provided. The "py" return contains actual types.
---
### notion_translate_value
**Given a Notion object from the API source, return the value as a string as well as a list. Also returns a bool value indicating whether the list value contains multiple discrete items (say, a multi-select type) or just multiple parts of one discrete object (say, rich-text type).**

#### Arguments:
- proptype
- propobject
---
### notionapi_get_dataset
**Connect to Notion and retrieve a dataset (aka table, aka database) by NotionID.**

You must set up an Integration and add datasets; otherwise you'll get a "not authorized" error, even with a valid API Key. For more information, see: https://developers.notion.com/docs/create-a-notion-integration#create-your-integration-in-notion
#### Arguments:
- api_key:str
- notion_id:str
- row_limit:int=-1
- filter_json:dict = {}
- **headers
#### Argument Details:
- api_key (str): A valid Notion API Key.
 - notion_id (str): The Notion ID for the data table you're trying to access.
 - row_limit (int): The number of rows to return. To get all rows, set to -1 (default)
 - filter_json (dict): The API filter object. See: https://developers.notion.com/reference/post-database-query-filter
 - **headers (kwargs): Any name/value pairs provided will be added to the API request header.
#### Returns:
- tuple: ('Name of table', [{rows as tabular data}], [{rows as key/value pairs}], [{column definitions}] )
---
### notionapi_get_dataset_info
**Retrieves database and column information from a Notion dataset (table).**

You must set up an Integration and add datasets; otherwise you'll get a "not authorized" error, even with a valid API Key. For more information, see: https://developers.notion.com/docs/create-a-notion-integration#create-your-integration-in-notion
#### Arguments:
- api_key:str
- notion_id:str
- **headers
#### Argument Details:
- api_key (str): A valid Notion API Key.
 - notion_id (str): The Notion ID for the data table you're trying to access.
 - **headers (kwargs): Any name/value pairs provided will be added to the API request header.
#### Returns:
- str: Title of database (table)
 - dict: Column name mapping between Notion (key) and DB (value)
---
### notionapi_get_users
**Query Notion and return a set of all users in the organization, with optional ability to 'include' only certain users by name.**

#### Arguments:
- api_key:str
- include:list = ['all']
- full_json:bool = False
- **headers
---
### parse_filename_iterators
**Iterates through all files in the supplied folder, and returns a dictionary containing three lists:**

- iter_files, containing files that end in an iterator ( *.000 )
- base_files, containing files that do not end in an iterator (aka base files)
- all_files, sorted in order with the base file first per pattern, followed by any iterations

#### Arguments:
- folderpath:Path
---
### parse_placeholders
**Parses a list of placeholder values out of a given string, along with their positions.**

#### Arguments:
- value:str = ''
- wrappers:str = '{}'
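A minimal sketch of placeholder parsing with the stdlib re module — the returned dict shape ('name', 'start', 'end') is an assumption for illustration, not necessarily the library's return format:

```python
import re

def parse_placeholders(value='', wrappers='{}'):
    """Return each placeholder name found in value, with its (start, end) span."""
    left, right = re.escape(wrappers[0]), re.escape(wrappers[1])
    return [{'name': m.group(1), 'start': m.start(), 'end': m.end()}
            for m in re.finditer(f'{left}(\\w+){right}', value)]

parse_placeholders('./logs/{datetime}--{application_name}.log')
# → [{'name': 'datetime', 'start': 7, 'end': 17},
#    {'name': 'application_name', 'start': 19, 'end': 37}]
```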
---
### substitute_dict_in_string
**Substitutes dictionary '{name}' with 'value' in provided string.**

This performs a basic string substitution of any "{name}" from the supplied dictionary with the corresponding 'value' and returns the post-substitution string, essentially doing a basic text.format(**sub_dict). The reason for this function is the set of pre-loaded values also included, such as various filename-safe flavors of time. The time format always descends in granularity, i.e. Year, Month, Day, Hour, Minute, Second. Any of the pre-configured '{name}' values can be overridden by the supplied dictionary. For example, the preloaded substitution {now} turns into a timestamp, but if sub_dict contains {'now':'pizza'} then the preloaded substitution is ignored and {now} becomes pizza instead. Returns: str: the post-substitution string.
#### Arguments:
- text:str=''
- sub_dict:dict={}
- date_delim:str=''
- time_delim:str=''
- datetime_delim:str='_'
#### Argument Details:
- text (str): The string to perform the substitution around.
 - sub_dict (dict): Dictionary containing the name/value pairs to substitute.
 - date_delim (str): Character(s) used between year, month, and date.
 - time_delim (str): Character(s) used between hour, minute, and second.
 - datetime_delim (str): Character(s) used between the date and time component.
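The preload-then-override behavior can be sketched like this (the exact set of preloaded names is an assumption; only {now} is described in the text above):

```python
from datetime import datetime

def substitute_dict_in_string(text='', sub_dict={}, date_delim='',
                              time_delim='', datetime_delim='_'):
    """text.format(**subs), with preloaded (but overridable) time substitutions."""
    now = datetime.now()
    preloaded = {
        'date': now.strftime(f'%Y{date_delim}%m{date_delim}%d'),
        'time': now.strftime(f'%H{time_delim}%M{time_delim}%S'),
    }
    preloaded['now'] = f"{preloaded['date']}{datetime_delim}{preloaded['time']}"
    preloaded.update(sub_dict)  # user-supplied values win over preloaded ones
    return text.format(**preloaded)

substitute_dict_in_string('backup_{now}.sql')
# e.g. 'backup_20240101_120000.sql' (descending granularity: Y, M, D, H, M, S)
substitute_dict_in_string('backup_{now}.sql', {'now': 'pizza'})  # → 'backup_pizza.sql'
```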
---
### tokenize_quoted_strings
**Tokenizes all quoted segments found inside supplied string, and returns the string plus all tokens.**

Returns a tuple with the tokenized string and a dictionary of all tokens, for later replacement as needed. If return_quote_type is True, the return dict gains one more nested layer that includes the quote type, looking like: {"T0": {"text": "'string in quotes, including quotes'", "quote_type": "'"}, "T1": {...}}. If return_quote_type is False, a slightly flatter structure is returned: {"T0": "'string in quotes, including quotes'", "T1": ...}
#### Arguments:
- text:str=''
- return_quote_type:bool=False
#### Argument Details:
- text (str): String including quotes to tokenize.
 - return_quote_type (bool): If True, also returns the type of quote found; if False (default), just returns tokenized text in a flatter dictionary structure.
#### Returns:
- tuple (str, dict): the tokenized text as string, and tokens in a dict.
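A sketch of the tokenization with re.sub; the {T0}-style marker inserted into the text is an assumption (the docs above only specify the token keys, not the in-string marker format):

```python
import re

def tokenize_quoted_strings(text='', return_quote_type=False):
    """Replace quoted segments with {T0}, {T1}, ... markers and collect them."""
    tokens = {}
    def _tokenize(match):
        key = f'T{len(tokens)}'
        quoted = match.group(0)
        tokens[key] = ({'text': quoted, 'quote_type': quoted[0]}
                       if return_quote_type else quoted)
        return '{' + key + '}'
    # Match single- or double-quoted runs (no escaped-quote handling in this sketch)
    tokenized = re.sub(r'''('[^']*'|"[^"]*")''', _tokenize, text)
    return tokenized, tokens

tokenize_quoted_strings('say \'hello\' and "bye" now')
# → ('say {T0} and {T1} now', {'T0': "'hello'", 'T1': '"bye"'})
```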
---
## Raw data

            {
    "_id": null,
    "home_page": null,
    "name": "pysteve",
    "maintainer": null,
    "docs_url": null,
    "requires_python": ">=3.10",
    "maintainer_email": null,
    "keywords": "helper, setup, boilerplate, data",
    "author": null,
    "author_email": "Stephen Hilton <Stephen@FamilyHilton.com>",
    "download_url": "https://files.pythonhosted.org/packages/aa/72/8e5f79656c66bfd772376c926c55fee7c88d1d537175a2d904d8f1a742b3/pysteve-0.0.18.tar.gz",
    "platform": null,
    "description": "(full package README, reproduced above)",
    "bugtrack_url": null,
    "license": "MIT License. Copyright (c) 2023 Stephen Hilton.",
    "summary": "Helper and setup functions for people named Steve",
    "version": "0.0.18",
    "project_urls": {
        "Docs": "https://github.com/Stephen-Hilton/pysteve/blob/main/README.md",
        "Documentation": "https://github.com/Stephen-Hilton/pysteve/blob/main/README.md",
        "Github": "https://github.com/Stephen-Hilton/pysteve",
        "Homepage": "https://familyhilton.com"
    },
    "split_keywords": [
        "helper",
        " setup",
        " boilerplate",
        " data"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "0677f409d1831df37d1fc8c778ca442e55e1365f5029ec16024f6db0911dae53",
                "md5": "c9b5081d5f5debfde1f6c1f5e95cad0c",
                "sha256": "2d78bbd4ed76f5273c096937338f7550f117ef02ad02fc9e1212b9f6eae82886"
            },
            "downloads": -1,
            "filename": "pysteve-0.0.18-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "c9b5081d5f5debfde1f6c1f5e95cad0c",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.10",
            "size": 24620,
            "upload_time": "2024-04-06T18:24:24",
            "upload_time_iso_8601": "2024-04-06T18:24:24.849266Z",
            "url": "https://files.pythonhosted.org/packages/06/77/f409d1831df37d1fc8c778ca442e55e1365f5029ec16024f6db0911dae53/pysteve-0.0.18-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "aa728e5f79656c66bfd772376c926c55fee7c88d1d537175a2d904d8f1a742b3",
                "md5": "b35fbe79b533f9a0c4a2063f76cbb765",
                "sha256": "2d8e35897f78224500761688cf80745120ee60fe305f5af3973040f83ecd84ff"
            },
            "downloads": -1,
            "filename": "pysteve-0.0.18.tar.gz",
            "has_sig": false,
            "md5_digest": "b35fbe79b533f9a0c4a2063f76cbb765",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.10",
            "size": 33317,
            "upload_time": "2024-04-06T18:24:27",
            "upload_time_iso_8601": "2024-04-06T18:24:27.331732Z",
            "url": "https://files.pythonhosted.org/packages/aa/72/8e5f79656c66bfd772376c926c55fee7c88d1d537175a2d904d8f1a742b3/pysteve-0.0.18.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-06 18:24:27",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "Stephen-Hilton",
    "github_project": "pysteve",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [],
    "lcname": "pysteve"
}
        