# isol-sys-database

A database of isolated steel frames and their performance under earthquakes.

## Description

This repository generates isolated steel frames and analyzes their performance.
Most of the repository is dedicated to automating the design of steel moment and braced frames isolated with friction or lead rubber bearings.
The designs are then automatically constructed in OpenSeesPy and subjected to full nonlinear time history analyses.
The database revolves around generating series of structures spanning the range of a few random design variables that dictate over/under design of strength and displacement capacity, along with a few isolator design parameters.

Decision variable prediction via the SimCenter toolbox is also available (work in progress).

An analysis folder is provided with scripts for data visualization and machine learning predictions.
The database is used to generate inverse designs targeting specific structural performance.

## Dependencies

A comprehensive `.yaml` file listing the dependencies below is available for virtual environment setup in Conda. However, it is derived directly from my working environment and includes some personal software, such as the Spyder IDE; remove these as necessary. A quick import check is sketched after the list.

* Structural software:
	* OpenSeesPy 3.4.0
	* Python 3.9

* Data structure management:
	* Pandas 2.2.0+
	* Numpy 1.22.4+
	* Scipy 1.12.0+

* Machine learning analyses (required for design of experiment, inverse design):
	* Scikit-learn

* Visualization:
	* Matplotlib
	* Seaborn

* Decision-variable prediction:
	* Pelicun 3.1+
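
As a sanity check after creating the environment, the key packages can be imported and their versions printed (a minimal sketch; `pelicun` is only needed for decision-variable prediction):

    import openseespy.opensees as ops  # OpenSeesPy
    import pandas as pd
    import numpy as np
    import scipy
    import sklearn
    import matplotlib
    import seaborn
    import pelicun  # only needed for the loss analyses
    print('pandas', pd.__version__)
    print('numpy', np.__version__)
    print('scipy', scipy.__version__)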

## Setup

Prepare a directory for each individual run's outputs under `src/outputs/`, as well as the data output directory under `data/`. This is not done automatically in this repository, since these directories should change if parallel runs are desired.
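
For example, a minimal sketch of that preparation (the `run_0` folder name is an assumption; create one folder per parallel run):

    import os
    # per-run output folder under src/outputs/ (one per parallel run)
    os.makedirs('src/outputs/run_0', exist_ok=True)
    # data output directory
    os.makedirs('data', exist_ok=True)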

Once that is completed, the repository relies on relative pathing and should run out of the box, with the exception of post-experiment analyses such as plotting and prediction models. For visualizations and analyses in the `src/analyses/` folder, ensure that all `sys.path.insert` calls reference the directory containing the files that generate the ML models (particularly `doe.py`). Additionally, ensure that all pickled and CSV objects referenced within point to a proper data file (these are not provided in this GitHub repository, but are available in the DesignSafe Data Depot upload).
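
For instance, at the top of an analysis script (the relative path below is an assumption that depends on where the script lives):

    import sys
    # point at the directory containing doe.py and the other ML-model files
    sys.path.insert(0, '../')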

## Usage
The database generation is handled through `main_*` scripts available in the `src` folder.
```src/analyses/``` contains scripts for data visualization and results processing.

Files under `src/` titled `gen_*` and `val_*` are written for HPC-based parallel computation and are not detailed here.

### Generating an initial database

An initial database of size `n_pts`, sampled uniformly via Latin hypercube sampling, can be generated with

    from db import Database
    db_obj = Database(n_pts, seed=123)
    
The database can be limited to certain systems using the `struct_sys_list` and `isol_sys_list` arguments; by default, both moment frames and concentrically braced frames are generated, with both friction and lead rubber bearings. This Database object holds all methods for further design and analyses and therefore must be generated first.
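
A hedged sketch of restricting the generated systems (the label strings below are assumptions about what the arguments accept):

    # assumed labels, e.g. moment frame ('MF') and triple friction pendulum ('TFP')
    db_obj = Database(n_pts, seed=123,
                      struct_sys_list=['MF'],
                      isol_sys_list=['TFP'])

Then, generate designs for the current database of dimensionless design parameters.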

    db_obj.design_bearings(filter_designs=True)
    db_obj.design_structure(filter_designs=True)
    
A full list of unfiltered designs is available in `db_obj.generated_designs`. After unreasonable designs are removed, there should be `n_pts` designs remaining, stored in `db_obj.retained_designs`. To prepare the ground motions for the analyses, perform

    db_obj.scale_gms()
    
which requires and augments the `retained_designs` attribute. Finally, perform the nonlinear dynamic analyses with

    db_obj.analyze_db('my_database.csv')
    
It is then recommended to also store the data in a pickle file, to preserve the data structures in the drift/velocity/acceleration outputs.

    import pickle
    with open('../data/my_run.pickle', 'wb') as f:
        pickle.dump(db_obj, f)
    
### Analyzing individual runs

Some troubleshooting tools are provided. From any row of the design DataFrames, a Building object can be created. A diagnostic run is shown below.

    from building import Building
    # 'run' is any row of the design DataFrame, e.g. the first retained design
    run = db_obj.retained_designs.iloc[0]
    bldg = Building(run)
    bldg.model_frame()
    bldg.apply_grav_load()
    T_1 = bldg.run_eigen()
    bldg.provide_damping(80, method='SP', zeta=[0.05], modes=[1])
    dt = 0.005
    ok = bldg.run_ground_motion(run.gm_selected,
                                run.scale_factor*1.0,
                                dt, T_end=60.0)

    from plot_structure import plot_dynamic
    plot_dynamic(run)
    
### Performing design-of-experiment

This task requires that a Database object exists and that analysis results already exist for it (stored in `db_obj.ops_analysis`). To load an existing pickled Database object,

    import pickle
    with open('my_run.pickle', 'rb') as f:
        db_obj = pickle.load(f)

First, calculate collapse probabilities (since the DoE targets collapse probability), then run the design of experiment.

    db_obj.calculate_collapse()
    db_obj.perform_doe(n_set=200, batch_size=5, max_iters=1500, strategy='balanced')
    
`n_set` determines how many points the ML object for DoE is built from. `batch_size` and `max_iters` control the size of each DoE batch and the maximum number of points added before exhaustion. `strategy` specifies the DoE strategy: `explore` to target model variance, `exploit` to target model bias, or `balanced` to weigh both equally.
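
Once the DoE is exhausted, the augmented results sit alongside the initial ones (the attribute name `doe_analysis` matches its usage later in this README; treating it as a DataFrame is an assumption):

    # DoE-augmented analysis results, analogous to db_obj.ops_analysis
    doe_df = db_obj.doe_analysis
    print(doe_df.shape)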


### Running loss analysis with Pelicun

Assuming that a database is available in the Database object's `ops_analysis` attribute (or `doe_analysis`), damage and loss can be calculated using 

    db_obj.run_pelicun(db_obj.ops_analysis, collect_IDA=False,
                       cmp_dir='../resource/loss/')

    import pickle
    loss_path = '../data/loss/'
    with open(loss_path+'my_damage_loss.pickle', 'wb') as f:
        pickle.dump(db_obj, f)
        
To calculate the theoretical maximum damage/loss of the building, run

    db_obj.calc_cmp_max(db_obj.ops_analysis,
                        cmp_dir='../resource/loss/')

These are stored in the Database object's `loss_data` and `max_loss` attributes, respectively. It may be prudent to calculate the theoretical maximum loss of the building first and feed it into the actual loss calculation, so that replacement consequences have a realistic estimate. Without this, manual estimation of the replacement calculation is required within `loss`.

    db_max = db_obj.max_loss
    db_obj.run_pelicun(db_obj.ops_analysis, collect_IDA=False,
                       cmp_dir='../resource/loss/', max_loss_df=db_max)

### Validating a design in incremental dynamic analysis

This assumes that a Database object exists. Specify the design to be validated using a dictionary.

    sample_dict = {
        'gap_ratio' : 0.6,
        'RI' : 2.25,
        'T_ratio': 2.16,
        'zeta_e': 0.25
    }
    
Then, prepare an IDA at three MCE levels (1.0x, 1.5x, and 2.0x by default), perform the IDA, and store the results.

    import pandas as pd
    design_df = pd.DataFrame(sample_dict, index=[0])
    db_obj.prepare_ida_legacy(design_df)
    db_obj.analyze_ida('ida_sample.csv')

    import pickle
    validation_path = '../data/validation/'  # assumed output location; adjust as needed
    with open(validation_path+'my_ida.pickle', 'wb') as f:
        pickle.dump(db_obj, f)

## Interpreting results

A list of the variables generated in the `ops_analysis` and `doe_analysis` objects is available in `resource/variable_list.xlsx`.
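
To browse it programmatically (a sketch; reading `.xlsx` files with pandas requires the `openpyxl` engine to be installed):

    import pandas as pd
    # load the variable dictionary shipped with the repository
    var_list = pd.read_excel('../resource/variable_list.xlsx')
    print(var_list.head())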

## TODO/Personal notes

A reminder that this database depends on an OpenSees build compatible with Python 3.9.
See `opensees_build/locations/` for the location of a working `OpenSees.pyd`.

## Research tools utilized

* [OpenSeesPy](https://github.com/zhuminjie/OpenSeesPy)
* [SimCenter Pelicun](https://github.com/NHERI-SimCenter/pelicun)
