# Sickness-screening library
## Instruction
sickness-screening is a module built on pandas, torch, and scikit-learn that lets users perform simple operations on the MIMIC dataset.
With just a few functions, you can train a model to predict whether patients have certain diseases.
By default, the module is set up to train models that predict sepsis.
The module also lets users customize the names of the tables from which data is aggregated.
### Installation
To install the module, use the following command:
```bash
pip install sickness-screening
```
or
```bash
pip3 install sickness-screening
```
### Usage
You can import functions from the module into your Python file to aggregate data from MIMIC,
fill missing values, compress each patient's data, and train your model.
### Examples
#### MIMIC Setup
In the examples, we will show how to use the sickness-screening module to train a model to predict sepsis on the MIMIC dataset.
MIMIC contains many tables, but for the example, we will need the following tables:
1. **chartevents.csv** -- contains patient monitoring data, such as body temperature and blood pressure.
2. **labevents.csv** -- contains laboratory test data, such as blood test results for each patient.
3. **diagnoses.csv** -- contains information about the diagnoses received by each patient.
4. **d_icd_diagnoses.csv** -- decodings of the ICD diagnosis codes.
5. **d_labitems.csv** -- decodings of the laboratory test codes.
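Before running the pipeline, it is worth checking that these files are readable and that the column names match what you expect. A quick pandas preview (file paths are assumptions; adjust them to where your MIMIC extract lives):
```python
import pandas as pd

# Print the column names of each table without loading the full files
for name in ['chartevents.csv', 'labevents.csv', 'diagnoses.csv',
             'd_icd_diagnoses.csv', 'd_labitems.csv']:
    print(name, pd.read_csv(name, nrows=5).columns.tolist())
```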
#### Aggregating patient diagnosis data
First, we will collect data on patient diagnoses:
```python
import sickness_screening as ss

ss.get_diagnoses_data(patient_diagnoses_csv='diagnoses.csv',
                      all_diagnoses_csv='d_icd_diagnoses.csv',
                      output_file_csv='gottenDiagnoses.csv')
```
Here, for each patient in **patient_diagnoses_csv**, we take the diagnosis codes and then, using **all_diagnoses_csv**,
produce **output_file_csv**, which stores the decoded diagnosis for each patient.
#### Obtaining data on whether a specific diagnosis is present in a patient
```python
import sickness_screening as ss

ss.get_diseas_info(diagnoses_csv='gottenDiagnoses.csv', title_column='long_title', diseas_str='sepsis',
                   diseas_column='has_sepsis', subject_id_column='subject_id', log_stats=True,
                   output_csv='sepsis_info.csv')
```
Here we use the table obtained in the previous example to build a table indicating, for each patient, whether the diagnosis text contains the substring "sepsis".
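Conceptually this is a substring match on the diagnosis titles. A minimal pandas sketch of the same idea (not the library's actual implementation; whether the match is case-sensitive is an assumption here):
```python
import pandas as pd

df = pd.read_csv('gottenDiagnoses.csv')
# Flag rows whose diagnosis title mentions sepsis, then aggregate per patient
df['has_sepsis'] = df['long_title'].str.contains('sepsis', case=False, na=False)
per_patient = df.groupby('subject_id')['has_sepsis'].any()
```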
#### Aggregating data needed to determine SIRS (systemic inflammatory response syndrome)
Now we will collect some data needed to determine SIRS:
```python
import sickness_screening as ss

ss.get_analyzes_data(analyzes_csv='chartevents.csv', subject_id_col='subject_id', itemid_col='itemid',
                     charttime_col='charttime', value_col='value', valuenum_col='valuenum',
                     itemids=[220045, 220210, 223762, 223761, 225651],
                     rest_columns=['Heart rate', 'Respiratory rate', 'Temperature Fahrenheit',
                                   'Temperature Celsius', 'Direct Bilirubin'],
                     output_csv='ssir.csv')
```
Here we pass the **analyzes_csv** table, **itemids** (the codes of the measurements we want to collect), and **rest_columns** (the columns we want to keep in the output table).
The function collects the measurements with the given **itemids** from **analyzes_csv** and writes them to **output_csv**, keeping only the columns listed in **rest_columns**.
**subject_id_col** and **itemid_col** name the columns holding the patient and measurement codes, respectively.
**charttime_col** names the measurement timestamp column, while **value_col** and **valuenum_col** name the raw and numeric value columns.
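In pandas terms, the core of this step is a filter of chartevents on the requested codes; a rough equivalent (a sketch under the column names used above, ignoring the column renaming the library performs):
```python
import pandas as pd

chartevents = pd.read_csv('chartevents.csv',
                          usecols=['subject_id', 'itemid', 'charttime', 'value', 'valuenum'])
itemids = [220045, 220210, 223762, 223761, 225651]
# Keep only the rows whose itemid is one of the requested measurement codes
chartevents[chartevents['itemid'].isin(itemids)].to_csv('ssir.csv', index=False)
```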
#### Combining diagnosis and SIRS data
Now we will combine the data from the previous two examples into one table:
```python
import sickness_screening as ss

ss.combine_data(first_data='gottenDiagnoses.csv',
                second_data='ssir.csv',
                output_file='diagnoses_and_ssir.csv')
```
#### Collecting and combining blood test data with diagnosis and SIRS data
We will collect patient blood test data and combine them into one table:
```python
import sickness_screening as ss

ss.merge_and_get_data(merge_with='diagnoses_and_ssir.csv',
                      blood_csv='labevents.csv',
                      get_data_from='chartevents.csv',
                      output_csv='merged_data.csv',
                      analyzes_names={
                          51222: "Hemoglobin",
                          51279: "Red Blood Cell",
                          51240: "Large Platelets",
                          50861: "Alanine Aminotransferase (ALT)",
                          50878: "Aspartate Aminotransferase (AST)",
                          225651: "Direct Bilirubin",
                          50867: "Amylase",
                          51301: "White Blood Cells"})
```
This function looks up the **analyzes_names** measurements for patients in the **blood_csv** and **get_data_from** tables
and merges them with **merge_with**. Note that the resulting table also carries each patient's disease data.
#### Balancing data within each patient
We will balance the data by the total number of rows for patients with and without sepsis.
```python
import sickness_screening as ss

ss.balance_on_patients(balancing_csv='merged_data.csv', disease_col='has_sepsis', subject_id_col='subject_id',
                       output_csv='balance.csv',
                       output_filtered_csv='balance_filtered.csv',
                       filtering_on=200,
                       number_of_patient_selected=50000,
                       log_stats=True)
```
#### Compressing data for each patient
If the dataset has gaps, each patient's gaps are filled with that patient's own recorded values, without resorting to statistical aggregates or constants:
```python
import sickness_screening as ss

ss.compress(df_to_compress='balance_filtered.csv',
            subject_id_col='subject_id',
            output_csv='compressed_data.csv')
```
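The behavior described above corresponds to a per-patient forward and backward fill; a minimal pandas sketch of the idea (assuming the library fills gaps this way):
```python
import pandas as pd

df = pd.read_csv('balance_filtered.csv')
# Propagate each patient's own known values into that patient's gaps,
# first forward and then backward; no global statistics are used
filled = df.groupby('subject_id', group_keys=False).apply(lambda g: g.ffill().bfill())
filled.to_csv('compressed_data.csv', index=False)
```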
#### Selecting the patients with the most complete data for final balancing
```python
import sickness_screening as ss

ss.choose(compressed_df_csv='compressed_data.csv',
          output_file='final_balanced_data.csv')
```
#### Filling missing values with the most frequent value
```python
import sickness_screening as ss

ss.fill_values(balanced_csv='final_balanced_data.csv',
               strategy='most_frequent',
               output_csv='filled_data.csv')
```
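The strategy name mirrors scikit-learn's SimpleImputer. For reference, the same imputation in plain scikit-learn:
```python
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.read_csv('final_balanced_data.csv')
# Replace missing values in each column with that column's most frequent value
imputer = SimpleImputer(strategy='most_frequent')
filled = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
```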
#### Training the model on the dataset
```python
import sickness_screening as ss
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler

model = ss.train_model(df_to_train_csv='filled_data.csv',
                       categorical_col=['Large Platelets'],
                       columns_to_train_on=['Amylase'],
                       model=RandomForestClassifier(),
                       single_cat_column='White Blood Cells',
                       has_disease_col='has_sepsis',
                       subject_id_col='subject_id',
                       valueuom_col='valueuom',
                       scaler=MinMaxScaler(),
                       random_state=42,
                       test_size=0.2)
```
Here we train a scikit-learn RandomForestClassifier on a dataset with one categorical column, one numeric column,
and one categorical column that can be converted to numeric, with scikit-learn's MinMaxScaler as the normalization method.
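Assuming train_model returns the fitted estimator (as the assignment above suggests), it can be persisted and reloaded like any scikit-learn model:
```python
import joblib

joblib.dump(model, 'sepsis_model.joblib')   # save the trained model
model = joblib.load('sepsis_model.joblib')  # restore it later
```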
#### Swapping in other models, for example CatBoostClassifier or SVC with different kernels
CatBoostClassifier:
```python
from catboost import CatBoostClassifier

class_weights = {0: 1, 1: 15}  # weight the minority (sepsis) class more heavily
clf = CatBoostClassifier(loss_function='MultiClassOneVsAll', class_weights=class_weights,
                         iterations=50, learning_rate=0.1, depth=5)
clf.fit(X_train, y_train)
```
SVC with a Gaussian radial basis function (RBF) kernel:
```python
from scipy.stats import reciprocal
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

class_weights = {0: 1, 1: 13}
param_dist = {
    'C': reciprocal(0.1, 100),
    'gamma': reciprocal(0.01, 10),
    'kernel': ['rbf']
}

svm_model = SVC(class_weight=class_weights, random_state=42)
random_search = RandomizedSearchCV(
    svm_model,
    param_distributions=param_dist,
    n_iter=10,
    cv=5,
    scoring=make_scorer(recall_score, pos_label=1),
    n_jobs=-1
)
random_search.fit(X_train, y_train)
```
## The Second Method (TabNet and DeepFM)
### Collecting features into a dataset
#### You can choose any features, but we will take the same four vital signs used in MEWS (Modified Early Warning Score) to predict sepsis in the first hours of a patient's hospital stay:
* Systolic blood pressure
* Heart rate
* Respiratory rate
* Temperature
```python
import pandas as pd
from tqdm import tqdm

# itemids for systolic blood pressure, heart rate, respiratory rate, and temperature
item_ids = [str(x) for x in [225309, 220045, 220210, 223762]]
item_ids_set = set(item_ids)

result = {}
with open(file_path) as f:  # file_path is expected to point at chartevents.csv
    headers = f.readline().replace('\n', '').split(',')  # consume the header row
    for line in tqdm(f):
        values = line.replace('\n', '').split(',')
        subject_id = values[0]
        item_id = values[6]
        valuenum = values[8]
        if item_id in item_ids_set:
            if subject_id not in result:
                result[subject_id] = {}
            result[subject_id][item_id] = valuenum

table = pd.DataFrame.from_dict(result, orient='index')
table['subject_id'] = table.index
```
#### Adding the target
```python
# Initialize the label, then mark patients whose DRG codes (870-872:
# septicemia or severe sepsis) indicate the target condition
merged_data['diagnosis'] = 0
target_subjects = drgcodes.loc[drgcodes['drg_code'].isin([870, 871, 872]), 'subject_id']
merged_data.loc[merged_data['subject_id'].isin(target_subjects), 'diagnosis'] = 1
```
#### Filling in gaps using the NoNa library. NoNa fills gaps using various machine learning methods; here we use StandardScaler, Ridge, and RandomForestClassifier
```python
from nona.nona import nona  # assumed import path for the NoNa library
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

nona(
    data=X,
    algreg=make_pipeline(StandardScaler(with_mean=False), Ridge(alpha=0.1)),
    algclass=RandomForestClassifier(max_depth=2, random_state=0)
)
```
#### Addressing class imbalance using SMOTE
```python
from imblearn.over_sampling import SMOTE

smote = SMOTE(random_state=random_state)
X_resampled, y_resampled = smote.fit_resample(X_train, y_train)
```
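A quick check that the resampling actually balanced the classes:
```python
from collections import Counter

print('before:', Counter(y_train))
print('after: ', Counter(y_resampled))
```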
#### Training the TabNet model. TabNet is built on PyTorch (the pytorch-tabnet package). First, we run self-supervised pretraining with TabNetPretrainer, then create and train a classification model with TabNetClassifier
```python
import torch
from pytorch_tabnet.pretraining import TabNetPretrainer
from pytorch_tabnet.tab_model import TabNetClassifier

unsupervised_model = TabNetPretrainer(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=pretraining_lr),
    mask_type=mask_type
)

unsupervised_model.fit(
    X_train=X_train.values,
    eval_set=[X_val.values],
    pretraining_ratio=pretraining_ratio,
)

clf = TabNetClassifier(
    optimizer_fn=torch.optim.Adam,
    optimizer_params=dict(lr=training_lr),
    scheduler_params=scheduler_params,
    scheduler_fn=torch.optim.lr_scheduler.StepLR,
    mask_type=mask_type
)

clf.fit(
    X_train=X_train.values, y_train=y_train.values,
    eval_set=[(X_val.values, y_val.values)],
    eval_metric=['auc'],
    max_epochs=max_epochs,
    patience=patience,
    from_unsupervised=unsupervised_model
)
```
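The metrics snippet below refers to a loaded_clf. With pytorch-tabnet, the trained classifier can be saved and restored as follows (the file name is an assumption):
```python
# save_model writes a zip archive and returns its path
saved_path = clf.save_model('tabnet_sepsis')

loaded_clf = TabNetClassifier()
loaded_clf.load_model(saved_path)
```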
#### Training the DeepFM model
```python
from libreco.algorithms import DeepFM  # from the LibRecommender package

deepfm = DeepFM("ranking", data_info, embed_size=16, n_epochs=2,
                lr=1e-4, lr_decay=False, reg=None, batch_size=1,
                num_neg=1, use_bn=False, dropout_rate=None,
                hidden_units="128,64,32", tf_sess_config=None)

deepfm.fit(train_data, verbose=2, shuffle=True, eval_data=eval_data,
           metrics=["loss", "balanced_accuracy", "roc_auc", "pr_auc",
                    "precision", "recall", "map", "ndcg"])
```
#### Viewing the obtained metrics
```python
from sklearn.metrics import f1_score, precision_score, recall_score

result = loaded_clf.predict(X_test.values)
accuracy = (result == y_test.values).mean()
precision = precision_score(y_test.values, result)
recall = recall_score(y_test.values, result)
f1 = f1_score(y_test.values, result)
```
#### Visualizing the first two PCA components
![2-component PCA visualization](./Визуализация_2_PCA_компоненты.png)
The loadings on each component are shown below:
| Feature | Loading on the first component | Loading on the second component |
| ---------------- | :---: | :---: |
| Heart rate | -0.101450 | 0.991611 |
| Temperature | 0.001178 | 0.013098 |
| Systolic BP | 0.994771 | 0.100169 |
| Respiratory rate | 0.011673 | 0.080573 |
| MEWS | -0.000660 | 0.003313 |
No patterns were found.
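For reference, loadings like those in the table come from the components_ attribute of a fitted scikit-learn PCA; a minimal sketch (X is assumed to be a DataFrame holding the five feature columns above):
```python
import pandas as pd
from sklearn.decomposition import PCA

pca = PCA(n_components=2)
coords = pca.fit_transform(X)  # 2D coordinates used for the visualization
loadings = pd.DataFrame(pca.components_.T, index=X.columns,
                        columns=['First component', 'Second component'])
print(loadings)
```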
#### A variational autoencoder was trained to build a separable 2D space
![Variational autoencoder latent space](./Вариационный_кодировщик.png)
We can see that the classes overlap and are not separable in this space.