| Field | Value |
|---|---|
| Name | dyntrace |
| Version | 0.0.2 |
| home_page | None |
| Summary | Dynamic Time-series Response Analysis and Computational Estimation |
| upload_time | 2024-10-23 00:27:58 |
| maintainer | None |
| docs_url | None |
| author | None |
| requires_python | None |
| license | None |
| keywords | seismic, earthquake |
| VCS | <http://github.com/issacpang/dyntrace> |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |

# `dyntrace`
**Dyn**amic **T**ime-series **R**esponse **A**nalysis and **C**omputational **E**stimation

This guidance note provides a brief summary of how to use the developed
LSTM code for testing. In general, the LSTM code consists of two main
modules: **Training** and **Utility Functions**. The **Training** module
contains all LSTM algorithms (e.g., the vanilla LSTM and the TCN). If you wish
to use/develop the LSTM algorithms to train a new model, please go to
the **Training** module. The **Utility Functions** module is mainly used
to generate the 3D CSMIP data for training and to evaluate the
performance of trained LSTM models.
<figure>
<embed src="images/LSTM_arhitecture2.pdf" id="fig:LSTM_arhitecture" style="width:60.0%" /><figcaption aria-hidden="true">LSTM Code Organization</figcaption>
</figure>
To use the code for testing or for other research purposes, please
download all files in the **Training** and **Utility Functions**
modules. Then, update the directories of `train_folder_path` (line 9)
and `modelPath` (line 39). The `train_folder_path` is used to store
all CSMIP training data, and the `modelPath` is used to save the
trained LSTM model. The number of time steps may also need to be
updated (line 10), depending on the CSMIP data.
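For orientation, the lines to be edited in the training script would look something like the sketch below (the paths and the time-step value are placeholders for illustration, not values shipped with the package):

```python
# Hypothetical values; point these at your own data and output folders.
train_folder_path = r'C:\CSMIP\train'  # line 9: folder holding the CSMIP training data
time_step = 7200                       # line 10: time steps per record; depends on the CSMIP data
# ...
modelPath = r'C:\CSMIP\models'         # line 39: folder where the trained LSTM model is saved
```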
After training the LSTM model, `Evaluation.py` can be used to evaluate
the accuracy of the prediction. Please update the directories of
`train_folder_path` (line 32), `test_folder_path` (line 33), and
`modelPath` (line 48). The `train_folder_path` and `test_folder_path`
are used to store all CSMIP training and testing data respectively, and
the `modelPath` is used to load the trained LSTM model. Please also
update the name of the trained LSTM model (e.g., `vanilla_30_addFC`;
there is no need to include `"_model.h5"`). The number of time steps may
also need to be updated (line 10), depending on the CSMIP data. Lines
60-92 plot the acceleration responses of the predicted results and the
target results for both the training and testing sets. The "3" in lines
62, 70, 79, and 87 is the index of the CSMIP record of interest within
**train_folder_path** and **test_folder_path**. If required, change this
number to plot a different CSMIP dataset. Lines 99-104 calculate the
correlation coefficient between the predicted and the target results.
Note that `np.corrcoef()` generates a $2 \times 2$ coefficient matrix:
the diagonal elements are 1.0, indicating that each dataset is perfectly
correlated with itself, while the off-diagonal elements are ≤ 1.0 and
indicate how the predicted and target results are correlated with each
other. The code (lines 99-104) extracts the off-diagonal element, giving
the correlation between the predicted and the target results. For
details of the calculation, see
<https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html>.
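As a minimal, self-contained sketch of that extraction (the arrays below are made up for illustration):

```python
import numpy as np

# Two short series standing in for the predicted and target responses.
predicted = np.array([0.1, 0.4, 0.9, 1.2])
target = np.array([0.2, 0.5, 1.0, 1.1])

corr = np.corrcoef(predicted, target)
print(corr)        # 2x2 matrix; both diagonal elements are 1.0
print(corr[0, 1])  # off-diagonal element: correlation between the two series
```

The evaluation script applies the same call to the flattened prediction and target arrays, as in the full listing below.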
```python
import pickle

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.models import load_model

import Read_CSMIP_data


def save_model_dict(dictionary, name, modelPath):
    # Save the trained model and its training history, and log the runtime.
    dictionary['model'].save(modelPath + r"model_response" + name + "_model.h5")
    f = open(modelPath + r"model_response" + name + "_history.pkl", "wb")
    pickle.dump(dictionary['history'].history, f)
    f.close()
    f = open("runtime.txt", "a")
    f.write(name + " runtime: ")
    f.write(str(dictionary['runtime'] / 60))
    f.write("\n")
    f.close()


def load_model_dict(name, modelPath):
    # Load a trained model (.hdf5 if present, otherwise _model.h5) and its history.
    dictionary = {}
    try:
        model = load_model(modelPath + r"model_response" + name + ".hdf5")
    except Exception:
        model = load_model(modelPath + r"model_response" + name + "_model.h5")
    dictionary['model'] = model
    f = open(modelPath + r"model_response" + name + "_history.pkl", "rb")
    history = pickle.load(f)
    f.close()
    dictionary['history'] = history
    return dictionary


if __name__ == "__main__":
    # Specify the train_folder_path and the test_folder_path.
    # Specify the time step (each seismic event has a different number of
    # time steps, so it needs to be defined manually).
    # Specify output_response (e.g. accel data in Channels 3, 5, 8 => then 3).
    # Specify the window size (stack size; please refer to Zhang2019 for details).
    train_folder_path = r'C:'
    test_folder_path = r'C:'

    time_step = 7200
    output_response = 3
    window_size = 2

    # CSMIP data is read into "train_file" and "test_file".
    train_file = Read_CSMIP_data.Read_CSMIP_data(train_folder_path, time_step,
                                                 output_response, window_size)
    test_file = Read_CSMIP_data.Read_CSMIP_data(test_folder_path, time_step,
                                                output_response, window_size)

    # The required 3d arrays are generated for training/testing.
    datax, datay = train_file.generate_3d_array()
    testx, testy = test_file.generate_3d_array()

    # Load the trained model.
    modelPath = r'C:'
    model_dict = load_model_dict("vanilla_30_addFC", modelPath)
    model = model_dict['model']

    # Perform prediction and load the results into datapredict and testpredict.
    datapredict = model.predict(datax)
    testpredict = model.predict(testx)
    print("train_loss:")
    print(model.evaluate(datax, datay, verbose=0))
    print("test_loss:")
    print(model.evaluate(testx, testy, verbose=0))

    # Sample plots of the training data.
    plt.figure()
    plt.plot(datapredict[3, :, 0], color='blue', lw=1.0)
    plt.plot(datay[3, :, 0], ':', color='red', alpha=0.8, lw=1.0)
    plt.title('Training Set: 3rd Floor Acceleration (x-direction)')
    plt.legend(["Predicted", "Real"])
    plt.xlabel("Time Step")
    plt.ylabel("Acceleration (cm/sec$^2$)")

    plt.figure()
    plt.plot(datapredict[3, :, 1], color='blue', lw=1.0)
    plt.plot(datay[3, :, 1], ':', color='red', alpha=0.8, lw=1.0)
    plt.title('Training Set: Roof Acceleration (x-direction)')
    plt.legend(["Predicted", "Real"])
    plt.xlabel("Time Step")
    plt.ylabel("Acceleration (cm/sec$^2$)")

    # Sample plots of the testing data.
    plt.figure()
    plt.plot(testpredict[3, :, 0], color='blue', lw=1.0)
    plt.plot(testy[3, :, 0], ':', color='red', alpha=0.8, lw=1.0)
    plt.title('Testing Set: 3rd Floor Acceleration (x-direction)')
    plt.legend(["Predicted", "Real"])
    plt.xlabel("Time Step")
    plt.ylabel("Acceleration (cm/sec$^2$)")

    plt.figure()
    plt.plot(testpredict[3, :, 1], color='blue', lw=1.0)
    plt.plot(testy[3, :, 1], ':', color='red', alpha=0.8, lw=1.0)
    plt.title('Testing Set: Roof Acceleration (x-direction)')
    plt.legend(["Predicted", "Real"])
    plt.xlabel("Time Step")
    plt.ylabel("Acceleration (cm/sec$^2$)")

    # Correlation coefficient.
    # Note: np.corrcoef returns a 2x2 matrix with 1.0 in both diagonal
    # elements (each array is perfectly correlated with itself) and <= 1.0
    # in the off-diagonal elements (how the two arrays are correlated with
    # each other); the [0, 1] element is the one of interest.
    print("training corr")
    train_corr = np.corrcoef(datapredict.flatten(), datay.flatten())[0, 1]
    print(train_corr)
    print("testing corr")
    test_corr = np.corrcoef(testpredict.flatten(), testy.flatten())[0, 1]
    print(test_corr)

    # Error: evaluate the error between the predicted and the real results.
    errors = np.array([])

    x = (datapredict[:, :, 0] - datay[:, :, 0]) / np.max(np.abs(datay[:, :, 0]), axis=1).reshape((-1, 1))
    hist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0]
    errors = np.append(errors, hist)
    x = (datapredict[:, :, 1] - datay[:, :, 1]) / np.max(np.abs(datay[:, :, 1]), axis=1).reshape((-1, 1))
    hist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0]
    errors = np.append(errors, hist)
    x = (testpredict[:, :, 0] - testy[:, :, 0]) / np.max(np.abs(testy[:, :, 0]), axis=1).reshape((-1, 1))
    hist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0]
    errors = np.append(errors, hist)
    x = (testpredict[:, :, 1] - testy[:, :, 1]) / np.max(np.abs(testy[:, :, 1]), axis=1).reshape((-1, 1))
    hist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0]
    errors = np.append(errors, hist)

    errors = errors.reshape((-1, 4, 400))
    np.save("errors_new.npy", errors)
    error = np.load("errors_new.npy")
    print(error.shape)

    # Plot the error curves; a better model yields a curve concentrated around 0.
    plt.figure()
    plt.plot(np.arange(-20, 20, 0.1), error[0][0] / (np.sum(error[0][0]) * 0.001))
    plt.plot(np.arange(-20, 20, 0.1), error[0][1] / (np.sum(error[0][1]) * 0.001))
    plt.legend(["Third floor", "Roof"])
    plt.xlim(-20, 20)
    plt.xlabel("Normalized Error (%)")  # axis in percent; bins of width 0.001 span +/-0.2
    plt.ylabel("PDF")
    plt.title('Training Set')

    plt.figure()
    plt.plot(np.arange(-20, 20, 0.1), error[0][2] / (np.sum(error[0][2]) * 0.001))
    plt.plot(np.arange(-20, 20, 0.1), error[0][3] / (np.sum(error[0][3]) * 0.001))
    plt.legend(["Third floor", "Roof"])
    plt.xlim(-20, 20)
    plt.xlabel("Normalized Error (%)")
    plt.ylabel("PDF")
    plt.title('Testing Set')
```
Lines 130-146 generate the normalized error curves for the different
floors. The errors over all response cases and time steps are collected
so that their distribution can be plotted; the more tightly the
distribution is centered around 0.0, the better the model.
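In symbols (the notation here is introduced for exposition; it does not appear in the original code): for record $i$ and time step $j$, the plotted quantity is the per-record normalized error

$$e_{ij} = \frac{\hat{a}_{ij} - a_{ij}}{\max_{j} \lvert a_{ij} \rvert},$$

where $\hat{a}$ is the predicted acceleration and $a$ the recorded target. The histogram of $e$ uses bins of width 0.001 spanning $\pm 0.2$ (hence the $\pm 20$ axis limits once expressed in percent), and each curve is divided by its total count times the bin width so that it integrates to 1, giving an empirical PDF.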
## Raw data

```json
{
    "_id": null,
    "home_page": null,
    "name": "dyntrace",
    "maintainer": null,
    "docs_url": null,
    "requires_python": null,
    "maintainer_email": null,
    "keywords": "seismic, earthquake",
    "author": null,
    "author_email": "Issac Pang <issac.pang@berkeley.edu>",
    "download_url": "https://files.pythonhosted.org/packages/40/ab/b2ac9471f0cf6af11822ded8e305a466f2ff140e7e77affa1a634300e39f/dyntrace-0.0.2.tar.gz",
    "platform": null,
"description": "# `dyntrace`\r\n\r\n**Dyn**amic **T**ime-series **R**esponse **A**nalysis and **C**omputational **E**stimation\r\n\r\nThis guidance note provides a brief summary of how to use the developed\r\nLSTM code for testing. In general, the LSTM code consists of two main\r\nmodules: **Training** and **Utility Functions**. The **Training** module\r\ncontains all LSTM algorithms (e.g., LSTM(Vanilla) and TCN). If you wish\r\nto use/develop the LSTM algorithms to train a new model, please go to\r\nthe **Training** module. The **Utility Functions** module is mainly used\r\nto generate the 3D CSMIP data for training and to evaluate the\r\nperformance of trained LSTM models.\r\n\r\n<figure>\r\n<embed src=\"images/LSTM_arhitecture2.pdf\" id=\"fig:LSTM_arhitecture\" style=\"width:60.0%\" /><figcaption aria-hidden=\"true\">LSTM Code Organization</figcaption>\r\n</figure>\r\n\r\nTo use the code for testing or for other research purposes, please\r\ndownload all files in the **Training** and **Utility Functions**\r\nmodules. Then, update the directory of `train_folder_path` (line 9)\r\nand `modelPath` (line 39). The `train_folder_path` is used to store\r\nall CSMIP training data, and the `modelPath` is used to save the\r\ntrained LSTM model. Number of time steps may also need to be updated\r\n(line 10), depending on the CSMIP data.\r\n\r\nAfter training the LSTM model, `Evaluation.py` can be used to evaluate\r\nthe accuracy of the prediction. Please update the directory of\r\n`train_folder_path` (line 32), `test_folder_path` (line 33) and\r\n`modelPath` (line 48). The `train_folder_path` and\r\n`test_folder_path` are used to store all CSMIP training and testing\r\ndata respectively, and the `modelPath` is used to load the trained\r\nLSTM model. Please also update the name of the trained LSTM model\r\n(e.g., `vanilla_30_addFC`, no need to include `\"_model.h5\"`). The number\r\nof time steps may also need to be updated (line 10), depending on the\r\nCSMIP data. Lines 60-92 are used to plot the acceleration responses of\r\nthe predicted results and the target results for both training and\r\ntesting sets. The \"3\" in lines 62, 70, 79 and 87 indicates that the\r\nindex of the interested CSMIP data (it should be the third item in the\r\n**train_folder_path** and **test_folder_path**). If required, please\r\nchange the number to plot the interested CSMIP dataset. Lines 99-104 are\r\nused to calculate the correlation coefficient of the predicted and the\r\ntarget results. It should be noted that\r\n`np.corrcoef()` generates a $2\u2005\\times\u20052$ coefficient matrix,\r\nwhich shows 1.0 in both diagonal elements, indicating that each dataset\r\nis perfectly correlated with itself, and \u2004\u2264\u20041.0 in the off-diagonal\r\nelements, indicating that how the predicted and target results are\r\ncorrelated with each other. The code (lines 99-104) automatically\r\nextracts the off-diagonal element, showing the correlation between the\r\npredicted and the target results. 
For the details of calculation, please\r\ngo to:\r\n<https://numpy.org/doc/stable/reference/generated/numpy.corrcoef.html>.\r\n\r\n```python\r\ndef save_model_dict(dictionary, name, modelPath):\r\n dictionary['model'].save(modelPath+r\"model_response\"+name+\"_model.h5\") \r\n f = open(modelPath+r\"model_response\"+name+\"\\_history.pkl\",\"wb\")\r\npickle.dump(dictionary['history'].history, f) f.close() f =\r\nopen(\"runtime.txt\", \"a\") f.write(name+\" runtime: \")\r\nf.write(str(dictionary['runtime']/60)) f.write(\"\") f.close()\r\n\r\ndef load_model_dict(name, modelPath): dictionary = try: model =\r\nload_model(modelPath+r\"model_response \r\n\"+name+\".hdf5\") except: model = load_model(modelPath+r\"model_response \r\n\"+name+\"\\_model.h5\") dictionary['model'] = model f =\r\nopen(modelPath+r\"model_response \r\n\"+name+\"\\_history.pkl\", 'rb') history = pickle.load(f) f.close()\r\ndictionary['history'] = history return dictionary\r\n\r\nif \\_\\_name\\_\\_ == \"\\_\\_main\\_\\_\":\r\n\r\n#specify the train_folder_path and the test_folder_path #specify time\r\nstep (each seismic event has different timesteps, so need to define\r\nmanually) #specify output_response (e.g. Accel data in Channel 3, 5, 8\r\n=> then 3) #specify window size (stack size, please refer to Zhang2019\r\nfor details) train_folder_path = r'C:' test_folder_path = r'C:'\r\n\r\ntime_step = 7200 output_response = 3 window_size = 2\r\n\r\n#CSMIP data is read into \"train_file\" and \"test_file\". train_file =\r\nRead_CSMIP_data.Read_CSMIP_data(train_folder_path, time_step,\r\noutput_response, window_size) test_file =\r\nRead_CSMIP_data.Read_CSMIP_data(test_folder_path, time_step,\r\noutput_response, window_size)\r\n\r\n#The required 3d array will be generated for training/testing datax,\r\ndatay = train_file.generate_3d_array() testx, testy =\r\ntest_file.generate_3d_array()\r\n\r\n#load the trained model modelPath = r'C: \r\n' model_dict = load_model_dict(\"vanilla_30_addFC\", modelPath) model =\r\nmodel_dict['model']\r\n\r\n#perform prediction and load the results in datapredict and testpredict\r\ndatapredict = model.predict(datax) testpredict = model.predict(testx)\r\nprint(\"train_loss:\") print(model.evaluate(datax, datay, verbose=0))\r\nprint(\"test_loss:\") print(model.evaluate(testx, testy, verbose=0))\r\n\r\n#Sample Plot of training data plt.figure()\r\nplt.plot(model.predict(datax)[3,:,0], color='blue', lw=1.0)\r\nplt.plot(datay[3,:,0],':', color='red', alpha=0.8, lw=1.0)\r\nplt.title('Training Set: 3rd Floor Acceleration (x-direction)')\r\nplt.legend([\"Predicted\", \"Real\"]) plt.xlabel(\"Time Step\")\r\nplt.ylabel(\"Acceleration (cm/sec<sup>2</sup>)\")\r\n\r\nplt.figure() plt.plot(model.predict(datax)[3,:,1],\r\ncolor='blue',lw=1.0) plt.plot(datay[3,:,1],':', color='red',\r\nalpha=0.8, lw=1.0) plt.title('Training Set: Roof Acceleration\r\n(x-direction)') plt.legend([\"Predicted\", \"Real\"]) plt.xlabel(\"Time\r\nStep\") plt.ylabel(\"Acceleration (cm/sec<sup>2</sup>)\")\r\n\r\n#Sample Plot of testing data plt.figure()\r\nplt.plot(model.predict(testx)[3,:,0], color='blue', lw=1.0)\r\nplt.plot(testy[3,:,1],':', color='red', alpha=0.8, lw=1.0)\r\nplt.title('Testing Set: 3rd Floor Acceleration (x-direction)')\r\nplt.legend([\"Predicted\", \"Real\"]) plt.xlabel(\"Time Step\")\r\nplt.ylabel(\"Acceleration (cm/sec<sup>2</sup>)\")\r\n\r\nplt.figure() plt.plot(model.predict(testx)[3,:,1],\r\ncolor='blue',lw=1.0) plt.plot(testy[3,:,0],':', color='red',\r\nalpha=0.8, lw=1.0) plt.title('Testing Set: Roof 
Acceleration\r\n(x-direction)') plt.legend([\"Predicted\", \"Real\"]) plt.xlabel(\"Time\r\nStep\") plt.ylabel(\"Acceleration (cm/sec<sup>2</sup>)\")\r\n\r\n# Correlation Coefficient # Note: The resulting matrix from np.corrcoef\r\nshows this by having 1.0 # in both diagonal elements, indicating that\r\neach array is perfectly correlated # with itself, and \\< 1.0 in the\r\noff-diagonal elements, indicating that how the two arrays # are\r\ncorrelated with each other. print(\"training corr\") train_corr =\r\nnp.corrcoef(datapredict.flatten(), datay.flatten())[0,1]\r\nprint(train_corr) print(\"testing corr\") test_corr =\r\nnp.corrcoef(testpredict.flatten(), testy.flatten())[0,1]\r\nprint(test_corr)\r\n\r\n# Error - evaluate the error between the predicted result and the real\r\nresult errors = np.array([])\r\n\r\nx = (datapredict[:,:,0] - datay[:,:,0]) / np.max(np.abs(datay[:,:,0]), axis=1).reshape((-1,1)) \r\nhist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0] \r\nerrors = np.append(errors, hist) \r\nx = (datapredict[:,:,1] - datay[:,:,1]) / np.max(np.abs(datay[:,:,1]), axis=1).reshape((-1,1)) \r\nhist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0] \r\nerrors = np.append(errors, hist)\r\nx = (testpredict[:,:,0] - testy[:,:,0]) / np.max(np.abs(testy[:,:,0]), axis=1).reshape((-1,1)) \r\nhist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0] \r\nerrors = np.append(errors, hist)\r\nx = (testpredict[:,:,1] - testy[:,:,1]) / np.max(np.abs(testy[:,:,1]), axis=1).reshape((-1,1)) \r\nhist = np.histogram(x.flatten(), np.arange(-0.2, 0.201, 0.001))[0] \r\nerrors = np.append(errors, hist)\r\n\r\nerrors = errors.reshape((-1, 4, 400)) np.save(\"errors_new.npy\", errors)\r\nerror = np.load(\"errors_new.npy\")\r\n\r\nprint(error.shape)\r\n\r\n# Print the error graph, a better result will lead to an error curve centralized to 0. \r\nplt.figure() \r\nplt.plot(np.arange(-20, 20, 0.1), error[0][0] / (np.sum(error[0][0]) \\* 0.001))\r\nplt.plot(np.arange(-20, 20, 0.1), error[0][1] / (np.sum(error[0][1]) \\* 0.001)) \r\nplt.legend([\"Third floor\", \"Roof\"]) \r\nplt.xlim(-20,20) \r\nplt.xlabel(\"Normalized Error (\r\nplt.ylabel(\"PDF\") plt.title('Training Set')\r\n\r\nplt.figure() \r\nplt.plot(np.arange(-20, 20, 0.1), error[0][2] / (np.sum(error[0][2]) \\* 0.001)) \r\nplt.plot(np.arange(-20, 20, 0.1), error[0][3] / (np.sum(error[0][3]) \\* 0.001))\r\nplt.legend([\"Third floor\", \"Roof\"]) plt.xlim(-20,20)\r\nplt.xlabel(\"Normalized Error ( \r\nplt.ylabel(\"PDF\") \r\nplt.title('Testing Set')\r\n```\r\n\r\nLines 130-146 are used to generate the normalized error curves of\r\ndifferent floors. The errors of all the response cases and time steps\r\nare collected and the distribution can be plotted. The more the\r\ndistribution is centered around 0.0, the better the model.\r\n",
"bugtrack_url": null,
"license": null,
"summary": "Dynamic Time-series Response Analysis and Computational Estimation",
"version": "0.0.2",
"project_urls": {
"Repository": "http://github.com/issacpang/dyntrace"
},
"split_keywords": [
"seismic",
" earthquake"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "8526148c84a7fc3a598fa8cc8c2e9550f399806b001410b7eb59fe0fe7cb4960",
"md5": "24812dd7ba5a81985e452aa9ef302a17",
"sha256": "33a77ec494fc57430141ecfe5d51b422cd4fc4ca5dd8c2c2a8f60a8def7e7f68"
},
"downloads": -1,
"filename": "dyntrace-0.0.2-py3-none-any.whl",
"has_sig": false,
"md5_digest": "24812dd7ba5a81985e452aa9ef302a17",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": null,
"size": 15031,
"upload_time": "2024-10-23T00:27:57",
"upload_time_iso_8601": "2024-10-23T00:27:57.838518Z",
"url": "https://files.pythonhosted.org/packages/85/26/148c84a7fc3a598fa8cc8c2e9550f399806b001410b7eb59fe0fe7cb4960/dyntrace-0.0.2-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "40abb2ac9471f0cf6af11822ded8e305a466f2ff140e7e77affa1a634300e39f",
"md5": "b5b9714a773d08483fdbd006810b5cc5",
"sha256": "a012c3db6377977021aadbeb24004f7f46fae3d63783b91404fececc6c298ad3"
},
"downloads": -1,
"filename": "dyntrace-0.0.2.tar.gz",
"has_sig": false,
"md5_digest": "b5b9714a773d08483fdbd006810b5cc5",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 11948,
"upload_time": "2024-10-23T00:27:58",
"upload_time_iso_8601": "2024-10-23T00:27:58.912840Z",
"url": "https://files.pythonhosted.org/packages/40/ab/b2ac9471f0cf6af11822ded8e305a466f2ff140e7e77affa1a634300e39f/dyntrace-0.0.2.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-10-23 00:27:58",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "issacpang",
"github_project": "dyntrace",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"lcname": "dyntrace"
}