| Name | dpkits |
| Version | 1.3.11 |
| Summary | A small package for data processing |
| Author | Hung Dao <hung.daotuan.1991@gmail.com> |
| Home page | https://github.com/HungDaoHD/packaging_dpkits |
| Bug tracker | https://github.com/HungDaoHD/packaging_dpkits/issues |
| Requires Python | >=3.9 |
| License | None |
| Upload time | 2024-10-15 11:09:29 |
| Requirements | No requirements were recorded. |
# Data processing Package

- Requirements
  - pandas
  - pyreadstat
  - numpy
  - zipfile
  - fastapi[UploadFile]
- Step 1: Import classes
```
# Convert data to pandas dataframe
from dpkits.ap_data_converter import APDataConverter
# Calculate LSM score
from dpkits.calculate_lsm import LSMCalculation
# Transpose data to stack and unstack
from dpkits.data_transpose import DataTranspose
# Create the tables from converted dataframe
from dpkits.table_generator import DataTableGenerator
# Format data tables
from dpkits.table_formater import TableFormatter
```
- Step 2: Convert data files to dataframe
  - class APDataConverter(files=None, file_name='', is_qme=True)
    - pass exactly one of files or file_name
    - files: list[UploadFile], default = None
    - file_name: str, default = ''
    - is_qme: bool, default = True
    - Returns:
      - df_data: pandas.DataFrame
      - df_info: pandas.DataFrame
```
# Define input/output files name
str_file_name = 'APDataTest'
str_tbl_file_name = f'{str_file_name}_Topline.xlsx'
converter = APDataConverter(file_name='APDataTesting.xlsx')
df_data, df_info = converter.convert_df_mc()
# Use 'converter.convert_df_md()' if you need md data
```
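The converter returns two frames: df_data holds the responses, df_info holds the variable metadata. A rough illustration of their shapes — the df_info columns follow the structure shown in Step 4, while the sample variables and values here are invented:

```python
import pandas as pd

# Toy stand-ins for the two frames returned by convert_df_mc().
# Sample data is hypothetical; only the df_info column layout
# (var_name, var_lbl, var_type, val_lbl) comes from the docs.
df_data = pd.DataFrame({
    'ResID': [1, 2],
    'Gender': [1, 2],
    'Q1_SP1': [5, 3],
})

df_info = pd.DataFrame(
    columns=['var_name', 'var_lbl', 'var_type', 'val_lbl'],
    data=[
        ['Gender', 'Gender of respondent', 'SA', {'1': 'Male', '2': 'Female'}],
        ['Q1_SP1', 'Overall liking - SP1', 'SA', {str(i): str(i) for i in range(1, 6)}],
    ],
)

# Look up the value labels of one variable
print(df_info.loc[df_info['var_name'] == 'Gender', 'val_lbl'].iloc[0])
```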
- Step 3: Calculate the LSM classification (LSM projects only)
  - class LSMCalculation.cal_lsm_6(df_data, df_info)
    - df_data: pandas.DataFrame
    - df_info: pandas.DataFrame
    - Returns:
      - df_data: pandas.DataFrame
      - df_info: pandas.DataFrame
```
df_data, df_info = LSMCalculation.cal_lsm_6(df_data, df_info)
# df_data and df_info will contain the columns CC1_Score to CC6_Score & LSM_Score
```
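cal_lsm_6 appends CC1_Score through CC6_Score and LSM_Score to the data. The aggregation below is purely illustrative — a simple row-wise sum over hypothetical component scores; the actual scoring rules live inside dpkits and may differ:

```python
import pandas as pd

# Hypothetical component scores, as cal_lsm_6 would append them.
df_data = pd.DataFrame({
    'CC1_Score': [1, 2],
    'CC2_Score': [0, 1],
    'CC3_Score': [2, 2],
    'CC4_Score': [1, 0],
    'CC5_Score': [0, 1],
    'CC6_Score': [1, 1],
})

# Illustrative only: total LSM_Score as the sum of the six components.
score_cols = [f'CC{i}_Score' for i in range(1, 7)]
df_data['LSM_Score'] = df_data[score_cols].sum(axis=1)

print(df_data['LSM_Score'].tolist())
```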
- Step 4: Data cleaning (if needed)
```
# Use pandas functions to clean/process data
import numpy as np
import pandas as pd

df_data['Gender_new'] = df_data['Gender']

df_data.replace({
    'Q1_SP1': {1: 5, 2: 4, 3: 3, 4: 2, 5: 1},
    'Q1_SP2': {1: 5, 2: 4, 3: 3, 4: 2, 5: 1},
}, inplace=True)

df_data.loc[(df_data['Gender_new'] == 2) & (df_data['Age'] == 5), ['Gender_new']] = [np.nan]
df_info.loc[df_info['var_name'] == 'Q1_SP1', ['val_lbl']] = [{'1': 'a', '2': 'b', '3': 'c', '4': 'd', '5': 'e'}]

df_info = pd.concat([df_info, pd.DataFrame(
    columns=['var_name', 'var_lbl', 'var_type', 'val_lbl'],
    data=[
        ['Gender_new', 'Please indicate your gender', 'SA', {'1': 'aaa', '2': 'bb', '3': 'cc'}]
    ]
)], ignore_index=True)
```
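The cleaning step above assumes numpy and pandas are imported as np and pd. On a self-contained toy frame, the same reverse-coding and conditional-blanking pattern looks like this:

```python
import numpy as np
import pandas as pd

# Toy data (Gender kept as float so it can hold NaN).
df = pd.DataFrame({
    'Q1_SP1': [1, 2, 5],
    'Gender': [1.0, 2.0, 2.0],
    'Age': [3, 5, 5],
})

# Reverse-code a 5-point scale (1<->5, 2<->4, 3 stays).
df.replace({'Q1_SP1': {1: 5, 2: 4, 3: 3, 4: 2, 5: 1}}, inplace=True)

# Blank out Gender for one specific Gender/Age combination.
df.loc[(df['Gender'] == 2) & (df['Age'] == 5), 'Gender'] = np.nan

print(df['Q1_SP1'].tolist())
```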
- Step 5: Transpose data (if needed)
  - class DataTranspose.to_stack(df_data, df_info, dict_stack_structure)
    - df_data: pandas.DataFrame
    - df_info: pandas.DataFrame
    - dict_stack_structure: dict
    - Returns:
      - df_data_stack: pandas.DataFrame
      - df_info_stack: pandas.DataFrame
```
dict_stack_structure = {
    'id_col': 'ResID',
    'sp_col': 'Ma_SP',
    'lst_scr': ['Gender', 'Age', 'City', 'HHI'],
    'dict_sp': {
        1: {
            'Ma_SP1': 'Ma_SP',
            'Q1_SP1': 'Q1',
            'Q2_SP1': 'Q2',
            'Q3_SP1': 'Q3',
        },
        2: {
            'Ma_SP2': 'Ma_SP',
            'Q1_SP2': 'Q1',
            'Q2_SP2': 'Q2',
            'Q3_SP2': 'Q3',
        },
    },
    'lst_fc': ['Awareness1', 'Frequency', 'Awareness2', 'Perception']
}

df_data_stack, df_info_stack = DataTranspose.to_stack(df_data, df_info, dict_stack_structure)
```
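dict_stack_structure maps per-product columns (Q1_SP1, Q1_SP2, ...) onto one stacked column set keyed by sp_col. The same reshape can be illustrated in plain pandas with wide_to_long — toy data, and unlike to_stack this sketch does not carry the label metadata along:

```python
import pandas as pd

df = pd.DataFrame({
    'ResID': [1, 2],
    'Gender': [1, 2],
    'Q1_SP1': [5, 4],
    'Q1_SP2': [3, 2],
})

# Wide -> stacked: one row per respondent x product, as to_stack() produces.
df_stack = pd.wide_to_long(df, stubnames='Q1_SP', i='ResID', j='Ma_SP')
df_stack = df_stack.rename(columns={'Q1_SP': 'Q1'}).reset_index()

print(df_stack.sort_values(['ResID', 'Ma_SP'])[['ResID', 'Ma_SP', 'Q1']].to_string(index=False))
```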
  - class DataTranspose.to_unstack(df_data_stack, df_info_stack, dict_unstack_structure)
    - df_data_stack: stacked pandas.DataFrame produced by to_stack
    - df_info_stack: stacked pandas.DataFrame produced by to_stack
    - dict_unstack_structure: dict
    - Returns:
      - df_data_unstack: pandas.DataFrame
      - df_info_unstack: pandas.DataFrame
```
dict_unstack_structure = {
    'id_col': 'ResID',
    'sp_col': 'Ma_SP',
    'lst_col_part_head': ['Gender', 'Age', 'City', 'HHI'],
    'lst_col_part_body': ['Q1', 'Q2', 'Q3'],
    'lst_col_part_tail': ['Awareness1', 'Frequency', 'Awareness2', 'Perception']
}

df_data_unstack, df_info_unstack = DataTranspose.to_unstack(df_data_stack, df_info_stack, dict_unstack_structure)
```
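to_unstack reverses the reshape: head columns stay per-respondent while body columns get their sp_col suffix back. A pandas pivot sketch of the same idea on toy data:

```python
import pandas as pd

df_stack = pd.DataFrame({
    'ResID': [1, 1, 2, 2],
    'Ma_SP': [1, 2, 1, 2],
    'Q1': [5, 3, 4, 2],
})

# Stacked -> wide: Q1 becomes Q1_SP1 / Q1_SP2 again, as to_unstack() produces.
df_wide = df_stack.pivot(index='ResID', columns='Ma_SP', values='Q1')
df_wide.columns = [f'Q1_SP{c}' for c in df_wide.columns]
df_wide = df_wide.reset_index()

print(df_wide.to_string(index=False))
```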
- Step 6: OE Running
```
```
- Step 7: Export *.sav & *.xlsx
  - class converter.generate_multiple_data_files(dict_dfs=dict_dfs, is_md=False, is_export_sav=True, is_export_xlsx=True, is_zip=True)
    - dict_dfs: dict
    - is_md: bool, default = False
    - is_export_sav: bool, default = True
    - is_export_xlsx: bool, default = True
    - is_zip: bool, default = True
    - Returns: None
```
dict_dfs = {
    1: {
        'data': df_data,
        'info': df_info,
        'tail_name': 'ByCode',
        'sheet_name': 'ByCode',
        'is_recode_to_lbl': False,
    },
    2: {
        'data': df_data,
        'info': df_info,
        'tail_name': 'ByLabel',
        'sheet_name': 'ByLabel',
        'is_recode_to_lbl': True,
    },
    3: {
        'data': df_data_stack,
        'info': df_info_stack,
        'tail_name': 'Stack',
        'sheet_name': 'Stack',
        'is_recode_to_lbl': False,
    },
    4: {
        'data': df_data_unstack,
        'info': df_info_unstack,
        'tail_name': 'Unstack',
        'sheet_name': 'Unstack',
        'is_recode_to_lbl': False,
    },
}

converter.generate_multiple_data_files(dict_dfs=dict_dfs, is_md=False, is_export_sav=True, is_export_xlsx=True, is_zip=True)
```
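With is_zip=True the exported .sav/.xlsx files end up in one archive. The last step can be sketched with the standard zipfile module (one of the listed requirements); the file names below are hypothetical placeholders for the converter's output:

```python
import zipfile
from pathlib import Path

# Hypothetical output files, standing in for what
# generate_multiple_data_files() would have written.
out_files = ['APDataTest_ByCode.sav', 'APDataTest_ByCode.xlsx']
for name in out_files:
    Path(name).write_text('placeholder')

# Bundle everything into a single compressed archive.
with zipfile.ZipFile('APDataTest.zip', 'w', zipfile.ZIP_DEFLATED) as zf:
    for name in out_files:
        zf.write(name)

print(zipfile.ZipFile('APDataTest.zip').namelist())
```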
- Step 8: Export data tables
  - init DataTableGenerator(df_data=df_data, df_info=df_info, xlsx_name=str_tbl_file_name)
    - df_data: pandas.DataFrame
    - df_info: pandas.DataFrame
    - xlsx_name: str
  - class DataTableGenerator.run_tables_by_js_files(lst_func_to_run)
    - lst_func_to_run: list
    - Returns: None
  - init TableFormatter(xlsx_name=str_tbl_file_name)
    - xlsx_name: str
  - class TableFormatter.format_sig_table()
    - Returns: None
```
lst_side_qres = [
    {"qre_name": "CC1", "sort": "des"},
    {"qre_name": "$CC3", "sort": "asc"},
    {"qre_name": "$CC4", "sort": "des"},
    {"qre_name": "$CC6"},
    {"qre_name": "$CC10"},
    {"qre_name": "LSM"},
    {"qre_name": "Gender"},
    {"qre_name": "Age"},
    {"qre_name": "City"},
    {"qre_name": "HHI"},

    # MA question with net/combine (can also apply to SA questions)
    {"qre_name": "$Q15", "cats": {
        'net_code': {
            '900001|combine|Group 1 + 2': {
                '1': 'Yellow/dull teeth',
                '3': 'Dental plaque',
                '5': 'Bad breath',
                '7': 'Aphthous ulcer',
                '2': 'Sensitive teeth',
                '4': 'Caries',
                '6': 'Gingivitis (bleeding, swollen gums)',
            },
            '900002|net|Group 1': {
                '1': 'Yellow/dull teeth',
                '3': 'Dental plaque',
                '5': 'Bad breath',
                '7': 'Aphthous ulcer',
            },
            '900003|net|Group 2': {
                '2': 'Sensitive teeth',
                '4': 'Caries',
                '6': 'Gingivitis (bleeding, swollen gums)',
            },
        },
        '8': 'Other (specify)',
        '9': 'No problem',
    }},

    # Scale question with full properties
    {
        "qre_name": "Perception",
        "cats": {
            '1': 'Totally disagree', '2': 'Disagree', '3': 'Neutral', '4': 'Agree', '5': 'Totally agree',
            'net_code': {
                '900001|combine|B2B': {'1': 'Totally disagree', '2': 'Disagree'},
                '900002|combine|Medium': {'3': 'Neutral'},
                '900003|combine|T2B': {'4': 'Agree', '5': 'Totally agree'},
            }
        },
        "mean": {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
    },
]
lst_header_qres = [
    [
        {
            "qre_name": "Age",
            "qre_lbl": "Age",
            "cats": {
                'TOTAL': 'TOTAL',
                '2': '18 - 24', '3': '25 - 30', '4': '31 - 39', '5': '40 - 50', '6': 'Over 50'
            }
        },
        {
            "qre_name": "@City2",
            "qre_lbl": "Location",
            "cats": {
                'City.isin([1, 5, 10, 11, 12])': 'All South',
                'City.isin([2, 4, 16, 17, 18])': 'All North',
            }
        },
    ],
]
lst_func_to_run = [
    {
        'func_name': 'run_standard_table_sig',
        'tables_to_run': [
            'Tbl_1_Pct',    # this table uses df_data & df_info
            'Tbl_1_Count',  # this table uses df_data & df_info
        ],
        'tables_format': {
            "Tbl_1_Pct": {
                "tbl_name": "Table 1 - Pct",
                "tbl_filter": "City > 0",
                "is_count": 0,
                "is_pct_sign": 1,
                "is_hide_oe_zero_cats": 1,
                "sig_test_info": {
                    "sig_type": "",  # ind / rel
                    "sig_cols": [],
                    "lst_sig_lvl": []
                },
                "lst_side_qres": lst_side_qres,
                "lst_header_qres": lst_header_qres
            },
            "Tbl_1_Count": {
                "tbl_name": "Table 1 - Count",
                "tbl_filter": "City > 0",
                "is_count": 1,
                "is_pct_sign": 0,
                "is_hide_oe_zero_cats": 1,
                "sig_test_info": {
                    "sig_type": "",
                    "sig_cols": [],
                    "lst_sig_lvl": []
                },
                "lst_side_qres": lst_side_qres,
                "lst_header_qres": lst_header_qres
            },
        },
    },
]
dtg = DataTableGenerator(df_data=df_data, df_info=df_info, xlsx_name=str_tbl_file_name)
dtg.run_tables_by_js_files(lst_func_to_run)
dtf = TableFormatter(xlsx_name=str_tbl_file_name)
dtf.format_sig_table()
```
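The table definitions above are consumed by dpkits' generator. The underlying computation for a simple percentage table (a side question against a banner column) can be illustrated with pandas.crosstab, including a T2B-style net row — toy data, not dpkits output:

```python
import pandas as pd

df = pd.DataFrame({
    'Perception': [1, 2, 4, 5, 5, 3],
    'Age': [2, 2, 3, 3, 2, 3],
})

# Column-percentage crosstab, analogous to an is_count=0 table.
tbl = pd.crosstab(df['Perception'], df['Age'], normalize='columns') * 100

# A T2B-style net row: codes 4 and 5 combined.
tbl.loc['T2B'] = tbl.loc[[4, 5]].sum()

print(tbl.round(1))
```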