geomagpy

:Name: geomagpy
:Version: 1.1.7
:Home page: http://pypi.python.org/pypi/geomagpy/
:Summary: Geomagnetic analysis tools.
:Upload time: 2023-12-01 09:27:33
:Author: R. Leonhardt, R. Bailey, M. Miklavec, J. Fee, H. Schovanec, S. Bracke
:License: LICENSE.txt
MagPy
=====

**MagPy (or GeomagPy) is a Python package for analysing and displaying
geomagnetic data.**

Version info: please note that this package is still in a development
state with frequent modifications. Please check the release notes.

MagPy provides tools for geomagnetic data analysis with special focus on
typical data processing routines in observatories. MagPy provides
methods for data format conversion, plotting and mathematical procedures
with specifically geomagnetic analysis routines such as basevalue and
baseline calculation and database handling. Among the supported data
formats are *ImagCDF, IAGA-02, WDC, IMF, IAF, BLV*, and many more. Full
installation also provides a graphical user interface, *xmagpy*. You
will find a complete manual for *xmagpy* in the docs.

Typical usage of the basic MagPy package for reading and visualising
data looks like this:

::

        #!/usr/bin/env python

        from magpy.stream import read
        import magpy.mpplot as mp
        stream = read('filename_or_url')
        mp.plot(stream)

Below you will find a quick guide to usage of the basic MagPy package.
For instructions on *xmagpy* please refer to the document "`An
introduction to
XMagPy <https://github.com/geomagpy/magpy/blob/master/magpy/doc/xmagpy-manual.pdf>`__"
in the docs. You can also subscribe to our information channel at
`Telegram <https://t.me/geomagpy>`__ for further information on updates
and current issues.

1. INSTALLATION
---------------

Please note that with the publication of MagPy 1.0 the recommended
Python environment is >= 3.6. The following installation instructions
assume such an environment. If you are using Python 2.7, please go to
the end of this section for help.

This section is currently being updated and will be finalized with the
publication of MagPy 1.0.

1.1 Linux installation (Ubuntu,Debian)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1.1.1 Complete Install
^^^^^^^^^^^^^^^^^^^^^^

Tested for Ubuntu 18.04 and Debian Stretch (full installation with all
optional packages). Please note that installation requires python 3.x.

::

        $ sudo pip3 install geomagpy    #Will install MagPy and all dependencies
        $ sudo pip3 install wxpython    #Will install WX graphics system

If wxpython installation via pip3 fails, you can try:

::

        $ sudo apt-get install python3-wxgtk4.0

You can now run XMagPy using the following command:

::

        $ xmagpy

1.1.2 Updates
^^^^^^^^^^^^^

To upgrade to the most recent version:

::

        $ sudo pip3 install -U geomagpy

1.1.3 Creating a desktop link
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to create a desktop link on Linux systems, please refer to the
instructions for your distribution. For Ubuntu and other Debian systems,
such links are created as follows:

Firstly create a file "xmagpy.desktop" which contains:

::

        [Desktop Entry]
        Type=Application
        Name=XMagPy
        GenericName=GeoMagPy User Interface
        Exec=xmagpy
        Icon=/usr/local/lib/python3.7/dist-packages/magpy/gui/magpy128.xpm
        Terminal=false
        Categories=Application;Development;

Then copy this file to the systems application folder:

::

        sudo cp xmagpy.desktop /usr/share/applications/

1.2 MacOs installation
~~~~~~~~~~~~~~~~~~~~~~

1.2.1 Install a python3 interpreter
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  we recommend
   `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`__ or
   `Anaconda <https://www.continuum.io/downloads>`__
-  see e.g. https://docs.continuum.io/anaconda/install for more details
-  before continuing, test whether Python is working: open a terminal
   and run python

1.2.2 Install MagPy
^^^^^^^^^^^^^^^^^^^

Open a terminal and use the following commands:

::

        $ pip install geomagpy    #Will install MagPy and all dependencies
        $ pip install wxpython    #Will install WX graphics system for XMagPy

You can now run XMagPy from the terminal using the following command:

::

        $ xmagpyw

1.2.3 Creating a desktop link
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Open Finder and search for xmagpyw. Copy it to the desktop. To change
the icon, click on the xmagpyw link, open its information panel and
replace the image in the upper left with e.g. magpy128.jpg (also to be
found using Finder).

1.3 Windows installation - WinPython Package
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1.3.1 Install MagPy for Windows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  get the `MagPy Windows
   installer <https://cobs.zamg.ac.at/data/index.php/en/downloads/category/1-magnetism>`__
   here (under Downloads): https://cobs.zamg.ac.at
-  download and execute magpy-x.x.x.exe
-  all required packages are included in the installer

1.3.2 Post-installation information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  MagPy will have a sub-folder in the Start menu. Here you will find
   three items:

   ::

       * command -> opens a DOS shell within the Python environment e.g. for updates 
       * python  -> opens a python shell ready for MagPy
       * xmagpy  -> opens the MagPy graphical user interface

1.3.3 Update an existing MagPy installation on Windows
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  right-click on subfolder "command" in the start menu
-  select "run as administrator"
-  issue the following command "pip install -U geomagpy" (you can also
   specify the version e.g. pip install geomagpy==0.x.x)

1.3.4 Installation with user privileges only
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

-  Download the most recent version of WinPython3.x
-  Unpack it in your home directory
-  Go to the WinPython folder and run the WinPython command prompt
-  issue the same commands as for the MacOS installation (see below)
-  to run XMagPy: use xmagpy from the WinPython command prompt.
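
For reference, these are the same pip commands as used in the MacOS
installation:

::

        $ pip install geomagpy    #Will install MagPy and all dependencies
        $ pip install wxpython    #Will install WX graphics system for XMagPy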

1.4 Installation instructions for Python 2.7
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The current version of MagPy still supports Python 2.7, although it is
highly recommended to switch to Python >= 3.6. Installation on Python
2.7 is more complex, as some packages for the graphical user interface
and CDF support are not as well supported. Please note: none of the
additional steps below is necessary for Python 3.x.

1.4.1 Pre-installation work
^^^^^^^^^^^^^^^^^^^^^^^^^^^

Get a recent version of NasaCDF for your platform; it enables CDF
support for formats like ImagCDF. Package details and files can be found
at http://cdf.gsfc.nasa.gov/

On Linux such installation will look like
(http://cdf.gsfc.nasa.gov/html/sw\_and\_docs.html)

::

        $ tar -zxvf cdf37_0-dist-all.tar.gz
        $ cd cdf37...
        $ make OS=linux ENV=gnu CURSES=yes FORTRAN=no UCOPTIONS=-O2 SHARED=yes all
        $ sudo make INSTALLDIR=/usr/local/cdf install

Install the following additional compilers before continuing (required
for spacepy): on Linux, install gcc; on MacOS, install gcc and gfortran.

Install coordinate system transformation support:

::

        $ sudo apt-get install libproj-dev proj-data proj-bin

1.4.2 Install MagPy and dependencies
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

On Linux this will look like:

::

        $ sudo apt-get install python-matplotlib python-scipy python-h5py cython python-pip  
        $ sudo apt-get install python-wxgtk3.0 # or python-wxgtk2.8 (Debian Stretch)  
        $ sudo apt-get install python-twisted  
        $ sudo pip install ffnet
        $ sudo pip install pyproj==1.9.5
        $ sudo pip install pyserial
        $ sudo pip install service_identity
        $ sudo pip install ownet
        $ sudo pip install spacepy
        $ sudo pip install geomagpy  

On Mac and Windows you need to download a python interpreter like
`Anaconda <https://www.continuum.io/downloads>`__ or [WinPython] and
then install similar packages, particularly the old wxpython 3.x.

1.5 Platform independent container - Docker
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

1.5.1 Install `Docker <https://www.docker.com/>`__ (toolbox) on your operating system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

     - https://docs.docker.com/engine/installation/

1.5.2 Get the MagPy Image
^^^^^^^^^^^^^^^^^^^^^^^^^

::

     - open a docker shell

            >>> docker pull geomagpy/magpy:latest
            >>> docker run -d --name magpy -p 8000:8000 geomagpy/magpy:latest

1.5.3 Open a browser
^^^^^^^^^^^^^^^^^^^^

::

     - open address http://localhost:8000 (or http://"IP of your VM":8000)
     - NEW: first time access might require a token or passwd

            >>> docker logs magpy

          will show the token 
     - run python shell (not conda) 
     - in python shell

            >>> %matplotlib inline
            >>> from magpy.stream import read
            >>> ...

1.6 Install from source
~~~~~~~~~~~~~~~~~~~~~~~

Requirements:

-  Python 2.7 or 3.x (recommended is >= 3.6)

Recommended Python packages:

-  wxpython (for Python 2.7 it needs to be 3.x or older)
-  NasaCDF (Python 2.7 only)
-  SpacePy (Python 2.7 only)

Other useful software:

-  pyproj (for geographic coordinate systems)
-  MySQL (database features)
-  Webserver (e.g. Apache2, PHP)

Then install from source:

::

        git clone git://github.com/GeomagPy/MagPy.git
        cd magpy*
        sudo python setup.py install

2. A quick guide to MagPy
-------------------------

written by R. Leonhardt, R. Bailey (April 2017)

MagPy's functionality can basically be accessed in three different
ways:

1) Directly import and use the magpy package within a Python environment
2) Run the graphical user interface xmagpy (xmagpyw for Mac)
3) Use predefined applications ("Scripts")

The following section will primarily deal with way 1. For way 2, xmagpy,
we refer to the video tutorials. Section 3 contains examples for
predefined applications/scripts.

2.1 Getting started with the python package
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start python. Import all stream methods and classes using:

::

    from magpy.stream import *

Please note that this import will shadow any already existing ``read``
method.
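
If you prefer to keep the namespace clean, you can instead import the
method explicitly, as in the introductory example:

::

        from magpy.stream import read
        data = read('filename_or_url')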

2.2 Reading and writing data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MagPy supports the following data formats and thus conversions between
them:

-  WDC: World Data Centre format
-  JSON: JavaScript Object Notation
-  IMF: Intermagnet Format
-  IAF: Intermagnet Archive Format
-  NEIC: WGET data from USGS - NEIC
-  IAGA: IAGA 2002 text format
-  IMAGCDF: Intermagnet CDF Format
-  GFZKP: GeoForschungsZentrum KP-Index format
-  GSM19/GSM90: Output formats from GSM magnetometers
-  POS1: POS-1 binary output
-  BLV: Baseline format Intermagnet
-  IYFV: Yearly mean format Intermagnet

... and many others. To get a full list, use:

::

        from magpy.stream import *
        print(PYMAG_SUPPORTED_FORMATS)

You will find several example files provided with MagPy. The ``cdf``
file is stored along with meta information in NASA's common data format
(cdf). Reading this file requires a working installation of Spacepy cdf.

If you do not have any geomagnetic data file you can access example data
by using the following command (after ``import *``):

::

        data = read(example1)
        

The data from ``example1`` has been read into a MagPy *DataStream* (or
*stream*) object. Most data processing routines in MagPy are applied to
data streams.
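
A quick check after reading: a minimal sketch, assuming the *DataStream*
method ``length()``, which returns the number of entries per column:

::

        print(data.length()[0])    # number of time steps in the stream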

Several example data sets are provided within the MagPy package:

-  ``example1``: `IAGA <http://www.iaga-aiga.org/>`__ ZIP (IAGA2002, zip
   compressed) file with 1 second HEZ data
-  ``example2``: `MagPy <#magpy>`__ Archive (CDF) file with 1 sec F data
-  ``example3``: `MagPy <#magpy>`__ Basevalue (TXT) ascii file with DI
   and baseline data
-  ``example4``: `INTERMAGNET <http://www.intermagnet.org>`__ ImagCDF
   (CDF) file with one week of 1 second data
-  ``example5``: `MagPy <#magpy>`__ Archive (CDF) raw data file with xyz
   and supporting data
-  ``example6a``: `MagPy <#magpy>`__ DI (txt) raw data file with DI
   measurement
-  ``example6b``: `MagPy <#magpy>`__ like 6a to be used with example4

-  ``flagging_example``: `MagPy <#magpy>`__ FlagDictionary (JSON)
   flagging info to be used with example1
-  ``recipe1_flags``: `MagPy <#magpy>`__ FlagDictionary (JSON) to be
   used with cookbook recipe 1

2.2.1 Reading
^^^^^^^^^^^^^

For a file in the same directory:

::

        data = read(r'myfile.min') 

... or for specific paths in Linux:

::

        data = read(r'/path/to/file/myfile.min') 

... or for specific paths in Windows:

::

        data = read(r'c:\path\to\file\myfile.min')

Pathnames depend on your operating system. In this guide we will assume
a Linux system. Files that are read are loaded into memory and each data
column (or piece of header information) is assigned to an internal
variable (key). To get a quick overview of the assigned keys in any
given stream (``data``) you can use the following method:

::

        print(data._get_key_headers())

2.2.2 Writing
^^^^^^^^^^^^^

After loading data from a file, we can save the data in the standard
IAGA02 and IMAGCDF formats with the following commands.

To create an IAGA-02 format file, use:

::

        data.write(r'/path/to/directory/',format_type='IAGA')

To create an `INTERMAGNET <http://www.intermagnet.org>`__ CDF (ImagCDF)
file:

::

        data.write(r'/path/to/directory/',format_type='IMAGCDF')

The filename will be created automatically according to the defined
format. By default, daily files are created and the date is added to the
filename in-between the optional parameters ``filenamebegins`` and
``filenameends``. If ``filenameends`` is missing, ``.txt`` is used as
default.
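
For example, to set the naming parts explicitly (``'wic_'`` and
``'.min'`` are just example values for the optional parameters described
above):

::

        data.write(r'/path/to/directory/', format_type='IAGA',
                   filenamebegins='wic_', filenameends='.min')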

2.2.3 Other possibilities for reading files
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To read all local files ending with .min within a directory (creates a
single stream of all data):

::

        data = read(r'/path/to/file/*.min')

Getting magnetic data directly from an online source such as the WDC:

::

        data = read(r'ftp://thewellknownaddress/single_year/2011/fur2011.wdc')

Getting *kp* data from the GFZ Potsdam:

::

        data = read(r'http://www-app3.gfz-potsdam.de/kp_index/qlyymm.tab')

(Please note: data access and usage is subject to the terms and
conditions of the individual data provider. Please make sure to read
them before accessing any of these products.)

No format specifications are required for reading. If MagPy can handle
the format, it will be automatically recognized.

Getting data for a specific time window for local files:

::

        data = read(r'/path/to/files/*.min',starttime="2014-01-01", endtime="2014-05-01")

... and remote files:

::

        data = read(r'ftp://address/fur2013.wdc',starttime="2013-01-01", endtime="2013-02-01")

Reading data from the INTERMAGNET Webservice (starting soon):

::

        data = read('http://www.intermagnet.org/test/ws/?id=WIC')

2.2.4 Selecting timerange
^^^^^^^^^^^^^^^^^^^^^^^^^

The stream can be trimmed to a specific time interval after reading by
applying the trim method, e.g. for a specific month:

::

        data = data.trim(starttime="2013-01-01", endtime="2013-02-01")

2.3 Getting help on options and usage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

2.3.1 Python's help function
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Information on individual methods and options can be obtained as
follows:

For basic functions:

::

        help(read)

For specific methods related to e.g. a stream object "data":

::

        help(data.fit)

Note that this requires the existence of a "data" object, which is
obtained e.g. by data = read(...). The help text can also be shown by
directly calling the *DataStream* object method using:

::

        help(DataStream.fit)

2.3.2 MagPy's logging system
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

MagPy automatically logs many function options and runtime information,
which can be useful for debugging purposes. This log is saved by default
in the temporary file directory of your operating system, e.g. for Linux
this would be ``/tmp/magpy.log``. The log is formatted as follows with
the date, module and function in use, and the message level
(INFO/WARNING/ERROR):

::

        2017-04-22 09:50:11,308 INFO - magpy.stream - Initiating MagPy...

Messages on the WARNING and ERROR level will automatically be printed to
shell. Messages for more detailed debugging are written at the DEBUG
level and will not be printed to the log unless an additional handler
for printing DEBUG is added.

Custom loggers can be defined by creating a logger object after
importing MagPy and adding handlers (with formatting):

::

        from magpy.stream import *
        import logging
        
        logger = logging.getLogger()
        hdlr = logging.FileHandler('testlog.log')
        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
        hdlr.setFormatter(formatter)
        logger.addHandler(hdlr)
        

The logger can also be configured to print to shell (stdout, without
formatting):

::

        import sys
        logger = logging.getLogger()
        stdoutlog = logging.StreamHandler(sys.stdout)
        logger.addHandler(stdoutlog)
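
To also capture DEBUG-level messages, lower the logger's threshold
(standard Python ``logging``, not MagPy-specific):

::

        logger.setLevel(logging.DEBUG)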

2.4 Plotting
~~~~~~~~~~~~

You will find some example plots at the `Conrad
Observatory <http://www.conrad-observatory.at>`__.

2.4.1 Quick (and not dirty)
^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

        import magpy.mpplot as mp
        mp.plot(data)

2.4.2 Some options
^^^^^^^^^^^^^^^^^^

Select specific keys to plot:

::

        mp.plot(data,variables=['x','y','z'])
        

Defining a plot title and specific colors (see ``help(mp.plot)`` for
list and all options):

::

        mp.plot(data,variables=['x','y'],plottitle="Test plot",
                colorlist=['g', 'c'])

2.4.3 Data from multiple streams
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Various datasets from multiple data streams will be plotted above one
another. Provide a list of streams and an array of keys:

::

        mp.plotStreams([data1,data2],[['x','y','z'],['f']])

2.5 Flagging data
~~~~~~~~~~~~~~~~~

The flagging procedure allows the observer to mark specific data points
or ranges. Flags are useful for labelling data spikes, storm onsets,
pulsations, disturbances, lightning strikes, etc. Each flag is
associated with a comment and a type number. The flagtype number ranges
between 0 and 4:

-  0: normal data with comment (e.g. "Hello World")
-  1: data marked by automated analysis (e.g. spike)
-  2: data marked by observer as valid geomagnetic signature (e.g. storm
   onset, pulsation). Such data cannot be marked invalid by automated
   procedures
-  3: data marked by observer as invalid (e.g. lightning, magnetic
   disturbance)
-  4: merged data (e.g. data inserted from another source/instrument as
   defined in the comment)

Flags can be stored along with the data set (requires CDF format output)
or separately in a binary archive. These flags can then be applied to
the raw data again, ensuring full reproducibility.

2.5.1 Mark data spikes
^^^^^^^^^^^^^^^^^^^^^^

Load a data record with data spikes:

::

        datawithspikes = read(example1)

Mark all spikes using the automated function ``flag_outlier`` with
default options:

::

        flaggeddata = datawithspikes.flag_outlier(timerange=timedelta(minutes=1),threshold=3)

Show flagged data in a plot:

::

        mp.plot(flaggeddata,['f'],annotate=True)

2.5.2 Flag time range
^^^^^^^^^^^^^^^^^^^^^

Flag a certain time range:

::

        flaglist = flaggeddata.flag_range(keys=['f'], starttime='2012-08-02T04:33:40', 
                                          endtime='2012-08-02T04:44:10', 
                                          flagnum=3, text="iron metal near sensor")

Apply these flags to the data:

::

        flaggeddata = flaggeddata.flag(flaglist)

Show flagged data in a plot:

::

        mp.plot(flaggeddata,['f'],annotate=True)

2.5.3 Save flagged data
^^^^^^^^^^^^^^^^^^^^^^^

To save the data together with the list of flags to a CDF file:

::

        flaggeddata.write('/tmp/',filenamebegins='MyFlaggedExample_', format_type='PYCDF')

To check for correct save procedure, read and plot the new file:

::

        newdata = read("/tmp/MyFlaggedExample_*")
        mp.plot(newdata,annotate=True, plottitle='Reloaded flagged CDF data')

2.5.4 Save flags separately
^^^^^^^^^^^^^^^^^^^^^^^^^^^

To save the list of flags separately from the data in a pickled binary
file:

::

        fullflaglist = flaggeddata.extractflags()
        saveflags(fullflaglist,"/tmp/MyFlagList.pkl")

These flags can be loaded in and then reapplied to the data set:

::

        data = read(example1)
        flaglist = loadflags("/tmp/MyFlagList.pkl")
        data = data.flag(flaglist)
        mp.plot(data,annotate=True, plottitle='Raw data with flags from file')

2.5.5 Drop flagged data
^^^^^^^^^^^^^^^^^^^^^^^

For some analyses it is necessary to use "clean" data, which can be
produced by dropping data flagged as invalid (e.g. spikes). By default,
the following method removes all data marked with flagtype numbers 1 and
3.

::

        cleandata = flaggeddata.remove_flagged()
        mp.plot(cleandata, ['f'], plottitle='Flagged data dropped')

2.6 Basic methods
~~~~~~~~~~~~~~~~~

2.6.1 Filtering
^^^^^^^^^^^^^^^

MagPy's ``filter`` uses the settings recommended by
`IAGA <http://www.iaga-aiga.org/>`__/`INTERMAGNET <http://www.intermagnet.org>`__.
Check ``help(data.filter)`` for further options and definitions of
filter types and pass bands.

First, get the sampling rate before filtering in seconds:

::

        print("Sampling rate before [sec]:", cleandata.samplingrate())

Filter the data set with default parameters (``filter`` automatically
chooses the correct settings depending on the provided sampling rate):

::

        filtereddata = cleandata.filter()

Get the sampling rate and the filtered data after filtering (please note
that all filter information is added to the data's meta-information
dictionary, ``data.header``):

::

        print("Sampling rate after [sec]:", filtereddata.samplingrate())
        print("Filter and pass band:", filtereddata.header.get('DataSamplingFilter',''))

2.6.2 Coordinate transformation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Assuming vector data in columns [x,y,z] you can freely convert between
xyz, hdz, and idf coordinates:

::

        cleandata = cleandata.xyz2hdz()
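
The inverse conversions follow the same naming scheme; a minimal sketch,
assuming the corresponding inverse method ``hdz2xyz`` is available on
the *DataStream* object:

::

        cleandata = cleandata.hdz2xyz()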

2.6.3 Calculate delta F
^^^^^^^^^^^^^^^^^^^^^^^

If the data file contains xyz (hdz, idf) data and an independently
measured f value, you can calculate delta F between the two instruments
using the following:

::

        cleandata = cleandata.delta_f()
        mp.plot(cleandata,plottitle='delta F')

2.6.4 Calculate Means
^^^^^^^^^^^^^^^^^^^^^

Mean values for certain data columns can be obtained using the ``mean``
method. The mean is only calculated if the percentage of valid data
points (in contrast to missing data) does not fall below the value given
by the percentage option (default 95). If too much data is missing, then
no mean is calculated and the function returns NaN.

::

        print(cleandata.mean('df', percentage=80))
        

The median can be calculated by defining the ``meanfunction`` option:

::

        print(cleandata.mean('df', meanfunction='median'))

2.6.5 Applying offsets
^^^^^^^^^^^^^^^^^^^^^^

Constant offsets can be added to individual columns using the ``offset``
method with a dictionary defining the MagPy stream column keys and the
offset to be applied (datetime.timedelta object for time column, float
for all others):

::

        offsetdata = cleandata.offset({'time':timedelta(seconds=0.19),'f':1.24})

2.6.6 Scaling data
^^^^^^^^^^^^^^^^^^

Individual columns can also be multiplied by values provided in a
dictionary:

::

        multdata = cleandata.multiply({'x':-1})

2.6.7 Fit functions
^^^^^^^^^^^^^^^^^^^

MagPy offers the possibility to fit functions to data using either
polynomial functions or cubic splines (default):

::

        func = cleandata.fit(keys=['x','y','z'],knotstep=0.1)
        mp.plot(cleandata,variables=['x','y','z'],function=func)

2.6.8 Derivatives
^^^^^^^^^^^^^^^^^

Time derivatives, which are useful to identify outliers and sharp
changes, are calculated as follows:

::

        diffdata = cleandata.differentiate(keys=['x','y','z'],put2keys = ['dx','dy','dz'])
        mp.plot(diffdata,variables=['dx','dy','dz'])

2.6.9 All methods at a glance
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For a summary of all supported methods, see the section **List of all
MagPy methods** below.

2.7 Geomagnetic analysis
~~~~~~~~~~~~~~~~~~~~~~~~

2.7.1 Determination of K indices
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

MagPy supports the FMI method for determination of K indices. Please
consult the MagPy publication for details on this method and
application.

A month of one minute data is provided in ``example2``, which
corresponds to an `INTERMAGNET <http://www.intermagnet.org>`__ IAF
archive file. Reading a file in this format will load one minute data by
default. Accessing hourly data and other information is described below.

::

        data2 = read(example2)
        kvals = data2.k_fmi()

The determination of K values will take some time as the filtering
window is dynamically adjusted. In order to plot the original data (H
component) and K values together, we now use the multiple stream
plotting method ``plotStreams``. Here you need to provide a list of
streams and an array containing variables for each stream. The
additional options determine the appearance of the plot (limits, bar
chart):

::

        mp.plotStreams([data2,kvals],[['x'],['var1']],
                       specialdict = [{},{'var1':[0,9]}],
                       symbollist=['-','z'],
                       bartrange=0.06)
        

``'z'`` in ``symbollist`` refers to the second subplot (K), which should
be plotted as bars rather than the standard line (``'-'``).

2.7.2 Automated geomagnetic storm detection
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Geomagnetic storm detection is supported by MagPy using two procedures
based on wavelets and the Akaike Information Criterion (AIC) as outlined
in detail in Bailey and Leonhardt (2016). A basic example of usage to
find an SSC using a Discrete Wavelet Transform (DWT) is shown below:

::

        from magpy.stream import read
        from magpy.opt.stormdet import seekStorm
        stormdata = read("LEMI025_2015-03-17.cdf")      # 1s variometer data
        stormdata = stormdata.xyz2hdz()
        stormdata = stormdata.smooth('x', window_len=25)
        detection, ssc_list = seekStorm(stormdata, method="MODWT")
        print("Possible SSCs detected:", ssc_list)
        

The method ``seekStorm`` will return two variables: ``detection`` is
True if any detection was made, while ``ssc_list`` is a list of
dictionaries containing data on each detection. Note that this method
alone can return a long list of possible SSCs (most incorrectly
detected), particularly during active storm times. It is most useful
when additional restrictions based on satellite solar wind data apply
(currently only optimised for ACE data, e.g. from the NOAA website):

::

        satdata_ace_1m = read('20150317_ace_swepam_1m.txt')
        satdata_ace_5m = read('20150317_ace_epam_5m.txt')
        detection, ssc_list, sat_cme_list = seekStorm(stormdata,
                    satdata_1m=satdata_ace_1m, satdata_5m=satdata_ace_5m,
                    method='MODWT', returnsat=True)
        print("Possible CMEs detected:", sat_cme_list)
        print("Possible SSCs detected:", ssc_list)

2.7.3 Sq analysis
^^^^^^^^^^^^^^^^^

Methods are currently in preparation.

2.7.4 Validity check of data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A common and important application used in the geomagnetism community is
a general validity check of geomagnetic data to be submitted to the
official data repositories `IAGA <http://www.iaga-aiga.org/>`__, WDC, or
`INTERMAGNET <http://www.intermagnet.org>`__. Please note: this is
currently under development and will be extended in the near future. A
'one-click' test method will be included in xmagpy, checking:

A) Validity of data formats, e.g.:

   ::

       data = read('myiaffile.bin', debug=True) 

B) Completeness of meta-information

C) Conformity of applied techniques to respective rules

D) Internal consistency of data

E) Optional: regional consistency

2.7.5 Spectral Analysis and Noise
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

For analysis of the spectral content of data, MagPy provides two basic
plotting methods. ``plotPS`` will calculate and display a power spectrum
of the selected component. ``plotSpectrogram`` will plot a spectrogram
of the time series. As usual, there are many options for plot window and
processing parameters that can be accessed using the help method.

::

        data = read(example1)
        mp.plotPS(data,key='f')
        mp.plotSpectrogram(data,['f'])

2.8 Handling multiple streams
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

2.8.1 Merging streams
^^^^^^^^^^^^^^^^^^^^^

Merging data comprises combining two streams into one new stream. This
includes adding a new column from another stream, filling gaps with data
from another stream or replacing data from one column with data from
another stream. The following example sketches the typical usage:

::

        print("Data columns in data2:", data2._get_key_headers())
        newstream = mergeStreams(data2,kvals,keys=['var1'])
        print("Data columns after merging:", data2._get_key_headers())
        mp.plot(newstream, ['x','y','z','var1'],symbollist=['-','-','-','z'])

If column ``var1`` does not exist in data2 (as above), then this
column is added. If column ``var1`` already existed, then missing
data would be inserted from stream ``kvals``. In order to replace any
existing data, use option ``mode='replace'``.
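
For example:

::

        newstream = mergeStreams(data2,kvals,keys=['var1'],mode='replace')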

2.8.2 Differences between streams
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Sometimes it is necessary to examine the differences between two data
streams e.g. differences between the F values of two instruments running
in parallel at an observatory. The method ``subtractStreams`` is
provided for this analysis:

::

        diff = subtractStreams(data1,data2,keys=['f'])

2.9 The art of meta-information
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each data set is accompanied by a dictionary containing meta-information
for this data. This dictionary is completely dynamic and can be filled
freely, but there are a number of predefined fields that help the user
provide essential meta-information as requested by
`IAGA <http://www.iaga-aiga.org/>`__,
`INTERMAGNET <http://www.intermagnet.org>`__ and other data providers.
All meta information is saved only to MagPy-specific archive formats
PYCDF and PYSTR. All other export formats save only the specific
information required by the respective format.

The current content of this dictionary can be accessed by:

::

        data = read(example1)
        print(data.header)

Information is added/changed by using:

::

        data.header['SensorName'] = 'FGE'

Individual information is obtained from the dictionary using standard
key input:

::

        print(data.header.get('SensorName'))

If you want to have a more readable list of the header information, do:

::

        for key in data.header:
            print ("Key: {} \t Content: {}".format(key,data.header.get(key)))

2.9.1 Conversion to ImagCDF - Adding meta-information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To convert data from `IAGA <http://www.iaga-aiga.org/>`__ or IAF formats
to the new `INTERMAGNET <http://www.intermagnet.org>`__ CDF format, you
will usually need to add additional meta-information required for the
new format. MagPy can assist you here, firstly by extracting and
correctly adding already existing meta-information into newly defined
fields, and secondly by informing you of which information needs to be
added for producing the correct output format.

Example of IAGA02 to ImagCDF:

::

        mydata = read('IAGA02-file.min')
        mydata.write('/tmp',format_type='IMAGCDF')

The console output of the write command (see below) will tell you which
information needs to be added (and how) in order to obtain correct
ImagCDF files. Please note, MagPy will store the data in any case and
will be able to read it again even if information is missing. Before
submitting to a GIN, you need to make sure that the appropriate
information is contained. Attributes that relate to publication of the
data will not be checked at this point, and might be included later.

::

        >>>Writing IMAGCDF Format /tmp/wic_20150828_0000_PT1M_4.cdf
        >>>writeIMAGCDF: StandardLevel not defined - please specify by yourdata.header['DataStandardLevel'] = ['None','Partial','Full']
        >>>writeIMAGCDF: Found F column
        >>>writeIMAGCDF: given components are XYZF. Checking F column...
        >>>writeIMAGCDF: analyzed F column - values are apparently independend from vector components - using column name 'S'

Now add the missing information. Selecting 'Partial' will require
additional information. You will get a 'reminder' if you forget this.
Please check IMAGCDF instructions on specific codes:

::

        mydata.header['DataStandardLevel'] = 'Partial'
        mydata.header['DataPartialStandDesc'] = 'IMOS-01,IMOS-02,IMOS-03,IMOS-04,IMOS-05,IMOS-06,IMOS-11,IMOS-12,IMOS-13,IMOS-14,IMOS-15,IMOS-21,IMOS-22,IMOS-31,IMOS-41'

Similar reminders to fill out complete header information will be shown
for other conversions like:

::

        mydata.write('/tmp',format_type='IAGA')
        mydata.write('/tmp',format_type='IMF')
        mydata.write('/tmp',format_type='IAF',coverage='month')
        mydata.write('/tmp',format_type='WDC')

2.9.2 Providing location data
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Providing location data usually requires information on the reference
system (ellipsoid,...). By default MagPy assumes that these values are
given in the WGS84/WGS84 reference system. To facilitate easy
referencing and conversion, MagPy supports
`EPSG <https://www.epsg-registry.org/>`__ codes for coordinates. If you
provide the geodetic references as follows, and provided that the
`proj4 <https://github.com/OSGeo/proj.4>`__ Python package is available,
MagPy will automatically convert location data to the requested output
format (currently WGS84).

::

        mydata.header['DataAcquisitionLongitude'] = -34949.9
        mydata.header['DataAcquisitionLatitude'] = 310087.0
        mydata.header['DataLocationReference'] = 'GK M34, EPSG: 31253'

        >>>...
        >>>writeIMAGCDF: converting coordinates to epsg 4326
        >>>...

2.9.3 Special meta-information fields
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The meta-information fields can hold much more information than required
by most output formats. This includes basevalue and baseline parameters,
flagging details, detailed sensor information, serial numbers and much
more. MagPy makes use of these possibilities. In order to save this
meta-information along with your data set you can use MagPy internal
archiving format, ``PYCDF``, which can later be converted to any of the
aforementioned output formats. You can even reconstruct a full
database. Any upcoming meta-information or output request can be easily
added/modified without disrupting already existing data sets and the
ability to read and analyse old data. This data format is also based on
Nasa CDF. ASCII outputs are also supported by MagPy, of which the
``PYSTR`` format also contains all meta information and ``PYASCII`` is
the most compact. Please consider that ASCII formats require a lot of
memory, especially for one second and higher resolution data.

::

        mydata.write('/tmp',format_type='PYCDF',coverage='year')

2.10 Data transfer
~~~~~~~~~~~~~~~~~~

MagPy contains a number of methods to simplify data transfer for
observatory applications. Methods within the basic Python functionality
can also be very useful. Using the implemented methods requires:

::

        from magpy import transfer as mt

2.10.1 Downloads
^^^^^^^^^^^^^^^^

Use the ``read`` method as outlined above. No additional imports are
required.

2.10.2 FTP upload
^^^^^^^^^^^^^^^^^

Files can also be uploaded to an FTP server:

::

        mt.ftpdatatransfer(localfile='/path/to/data.cdf',ftppath='/remote/directory/',myproxy='ftpaddress or address of proxy',port=21,login='user',passwd='passwd',logfile='/path/mylog.log')
        

The upload methods using FTP, SCP and GIN support logging. If the data
file failed to upload correctly, the path is added to a log file and,
when called again, upload of the file is retried. This option is useful
for remote locations with unstable network connections.

2.10.3 Secure communication protocol (SCP)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To transfer via SCP:

::

        mt.scptransfer('user@address:/remote/directory/','/path/to/data.cdf',passwd,timeout=60)

2.10.4 Upload data to GIN
^^^^^^^^^^^^^^^^^^^^^^^^^

Use the following command:

::

        mt.ginupload('/path/to/data.cdf', ginuser, ginpasswd, ginaddress, faillog=True, stdout=True)

2.10.5 Avoiding real-text passwords in scripts
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In order to avoid using real-text passwords in scripts, MagPy comes
with a simple encryption routine.

::

        from magpy.opt import cred as mpcred

Credentials will be saved to a hidden file with encrypted passwords. To
add information for data transfer to a machine called 'MyRemoteFTP' with
an IP of 192.168.0.99:

::

        mpcred.cc('transfer', 'MyRemoteFTP', user='user', passwd='secure', address='192.168.0.99', port=21)

Extracting passwd information within your data transfer scripts:

::

        user = mpcred.lc('MyRemoteFTP', 'user')
        password = mpcred.lc('MyRemoteFTP','passwd')
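
These values can then be passed directly to the transfer methods from
section 2.10, e.g. (with ``mt`` imported as shown above; address and
port are those stored with ``mpcred.cc``):

::

        mt.ftpdatatransfer(localfile='/path/to/data.cdf',
                           ftppath='/remote/directory/',
                           myproxy='192.168.0.99', port=21,
                           login=user, passwd=password,
                           logfile='/path/mylog.log')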

2.11 DI measurements, basevalues and baselines
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These procedures require an additional import:

::

        from magpy import absolutes as di

2.11.1 Data structure of DI measurements
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Please check ``example3``, which is an example DI file. You can create
these DI files by using the input sheet from xmagpy or the online input
sheet provided by the Conrad Observatory. If you want to use this
service, please contact the Observatory staff. Also supported are
DI-files from the AUTODIF.

2.11.2 Reading DI data
^^^^^^^^^^^^^^^^^^^^^^

Reading and analyzing DI data requires valid DI file(s). For correct
analysis, variometer data and scalar field information needs to be
provided as well. Check ``help(di.absoluteAnalysis)`` for all
options. The analytical procedures are outlined in detail in the MagPy
article (citation). A typical analysis looks like:

::

        diresult = di.absoluteAnalysis('/path/to/DI/','path/to/vario/','path/to/scalar/')

Path to DI can either point to a single file, a directory or even use
wildcards to select data from a specific observatory/pillar. Using the
examples provided along with MagPy, the analysis line looks like:

::

        diresult = di.absoluteAnalysis(example3,example2,example2)

Calling this method will provide terminal output as follows and a stream
object ``diresult`` which can be used for further analyses.

::

        >>>...
        >>>Analyzing manual measurement from 2015-03-25
        >>>Vector at: 2015-03-25 08:18:00+00:00
        >>>Declination: 3:53:46, Inclination: 64:17:17, H: 21027.2, Z: 43667.9, F: 48466.7
        >>>Collimation and Offset:
        >>>Declination:    S0: -3.081, delta H: -6.492, epsilon Z: -61.730
        >>>Inclination:    S0: -1.531, epsilon Z: -60.307
        >>>Scalevalue: 1.009 deg/unit
        >>>Fext with delta F of 0.0 nT
        >>>Delta D: 0.0, delta I: 0.0

Fext indicates that F values have been used from a separate file and not
provided along with DI data. Delta values for F, D, and I have not been
provided either. ``diresult`` is a stream object containing average D, I
and F values, the collimation angles, scale factors and the base values
for the selected variometer, besides some additional meta information
provided in the data input form.

2.11.3 Reading BLV files
^^^^^^^^^^^^^^^^^^^^^^^^

Basevalues:

::

        blvdata = read('/path/myfile.blv')
        mp.plot(blvdata, symbollist=['o','o','o'])

Adopted baseline:

::

        bldata = read('/path/myfile.blv',mode='adopted')
        mp.plot(bldata)

2.11.4 Basevalues and baselines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Basevalues as obtained in (2.11.2) or (2.11.3) are stored in a normal
data stream object, therefore all analysis methods outlined above can be
applied to this data. The ``diresult`` object contains D, I, and F
values for each measurement in columns x,y,z. Basevalues for H, D and Z
related to the selected variometer are stored in columns dx,dy,dz. In
``example4``, you will find some more DI analysis results. To plot these
basevalues we can use the following plot command, where we specify the
columns, filled circles as plotsymbols and also define a minimum spread
of each y-axis of +/- 5 nT for H and Z, +/- 0.05 deg for D.

::

        basevalues = read(example4)
        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5])

Fitting a baseline can be easily accomplished with the ``fit`` method.
First we test a linear fit to the data by fitting a polynomial function
with degree 1.

::

        func = basevalues.fit(['dx','dy','dz'],fitfunc='poly', fitdegree=1)
        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5], function=func)

We then fit a spline function using 3 knotsteps over the timerange (the
knotstep option is always related to the given timerange).

::

        func = basevalues.fit(['dx','dy','dz'],fitfunc='spline', knotstep=0.33)
        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5], function=func)

Hint: a good estimate on the necessary fit complexity can be obtained by
looking at delta F values. If delta F is mostly constant, then the
baseline should also not be very complex.

2.11.5 Applying baselines
^^^^^^^^^^^^^^^^^^^^^^^^^

The baseline method provides a number of options to assist the observer
in determining baseline corrections and related issues. The basic
building block of the baseline method is the fit function as discussed
above. Let's first load raw vectorial geomagnetic data, the basevalues
of which are contained in the above example:

::

        rawdata = read(example5)

Now we can apply the basevalue information and the spline function as
tested above:

::

        func = rawdata.baseline(basevalues, extradays=0, fitfunc='spline',
                                knotstep=0.33,startabs='2015-09-01',endabs='2016-01-22')

The ``baseline`` method will determine and return a fit function between
the two given timeranges based on the provided basevalue data
(here ``basevalues``). The option ``extradays`` allows for adding days
before and after start/endtime for which the baseline function will be
extrapolated. This option is useful for providing quasi-definitive data.
When applying this method, a number of new meta-information attributes
will be added, containing basevalues and all functional parameters to
describe the baseline. Thus, the stream object still contains
uncorrected raw data, but all baseline correction information is now
contained within its meta data. To apply baseline correction you can use
the ``bc`` method:

::

        corrdata = rawdata.bc()

If baseline jumps/breaks are necessary due to missing data, you can call
the baseline function for each independent segment and combine the
resulting baseline functions into a list:

::

        stream = read(mydata,starttime='2016-01-01',endtime='2016-03-01')
        basevalues = read(mybasevalues)
        adoptedbasefunc = []
        adoptedbasefunc.append(stream.baseline(basevalues, extradays=0, fitfunc='poly', fitdegree=1,startabs='2016-01-01',endabs='2016-02-01'))
        adoptedbasefunc.append(stream.baseline(basevalues, extradays=0, fitfunc='spline', knotstep=0.33,startabs='2016-01-02',endabs='2016-01-03'))

        corr = stream.bc()

The combined baseline can be plotted accordingly. Extend the function
parameters with each additional segment.

::

        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5], function=adoptedbasefunc)

Adding a baseline for scalar data, which is determined from the delta F
values provided within the basevalue data stream:

::

        scalarbasefunc = []
        scalarbasefunc.append(basevalues.baseline(basevalues, keys=['df'], extradays=0, fitfunc='poly', fitdegree=1,startabs='2016-01-01',endabs='2016-03-01'))
        plotfunc = adoptedbasefunc
        plotfunc.extend(scalarbasefunc)
        mp.plot(basevalues, variables=['dx','dy','dz','df'], symbollist=['o','o','o','o'], padding=[5,0.05,5,5], function=plotfunc)

Getting daily means and correcting for the scalar baseline can be
accomplished by:

::

        meanstream = stream.dailymeans()
        meanstream = meanstream.func2stream(scalarbasefunc,mode='sub',keys=['f'],fkeys=['df'])
        meanstream = meanstream.delta_f()

Please note that here the function originally determined from the deltaF
(df) values of the basevalue data needs to be applied to the F column
(f) from the data stream. Before saving we will also extract the
baseline parameters from the meta information, which is automatically
generated by the ``baseline`` method.

::

        absinfo = stream.header.get('DataAbsInfo','')
        fabsinfo = basevalues.header.get('DataAbsInfo','')

2.11.6 Saving basevalue and baseline information
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The following will create a BLV file:

::

        basevalues.write('/my/path', coverage='all', format_type='BLV', diff=meanstream, year='2016', absinfo=absinfo, deltaF=fabsinfo)

Information on the adopted baselines will be extracted from option
``absinfo``. If several functions are provided, baseline jumps will be
automatically inserted into the BLV data file. The output of adopted
scalar baselines is configured by option ``deltaF``. If a number is
provided, this value is assumed to represent the adopted scalar
baseline. If either 'mean' or 'median' are given (e.g.
``deltaF='mean'``), then the mean/median value of all delta F values in
the ``basevalues`` stream is used, requiring that such data is
contained. Providing functional parameters as stored in a
``DataAbsInfo`` meta information field, as shown above, will calculate
and use the scalar baseline function. The ``meanstream`` stream contains
daily averages of delta F values between variometer and F measurements
and the baseline adoption data in the meta-information. You can,
however, provide all this information manually as well. The typical way
to obtain such a ``meanstream`` is sketched above.

2.12 Database support
~~~~~~~~~~~~~~~~~~~~~

MagPy supports database access and many methods for optimizing data
treatment in connection with databases. Among many other benefits, using
a database simplifies many typical procedures related to
meta-information. Currently, MagPy supports
`MySQL <https://www.mysql.com/>`__ databases. To use these features, you
need to have MySQL installed on your system. In the following we provide
a brief outline of how to set up and use this optional addition. Please
note that a proper usage of the database requires sensor-specific
information. In geomagnetism, it is common to combine data from
different sensors into one file structure. In this case, such data needs
to remain separate for database usage and is only combined when
producing
`IAGA <http://www.iaga-aiga.org/>`__/`INTERMAGNET <http://www.intermagnet.org>`__
definitive data. Furthermore, unique sensor information such as type and
serial number is required.

::

        from magpy import database as mdb

2.12.1 Setting up a MagPy database (using MySQL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Open mysql (e.g. Linux: ``mysql -u root -p mysql``) and create a new
database. Replace ``#DB-NAME`` with your database name (e.g. ``MyDB``).
After creation, you will need to grant privileges for this database to a
user of your choice. Please refer to the official MySQL documentation
for details and further commands.

::

         mysql> CREATE DATABASE #DB-NAME; 
         mysql> GRANT ALL PRIVILEGES ON #DB-NAME.* TO '#USERNAME'@'%' IDENTIFIED BY '#PASSWORD';

2.12.2 Initializing a MagPy database
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Connecting to a database using MagPy is done with the following command:

::

        db = mdb.mysql.connect(host="localhost",user="#USERNAME",passwd="#PASSWORD",db="#DB-NAME")
        mdb.dbinit(db)

2.12.3 Adding data to the database
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Examples of useful meta-information:

::

        iagacode = 'WIC'
        data = read(example1)
        gsm = data.selectkeys(['f'])
        fge = data.selectkeys(['x','y','z'])
        gsm.header['SensorID'] = 'GSM90_12345_0002'
        gsm.header['StationID'] = iagacode
        fge.header['SensorID'] = 'FGE_22222_0001'
        fge.header['StationID'] = iagacode
        mdb.writeDB(db,gsm)
        mdb.writeDB(db,fge)

All available meta-information will be added automatically to the
relevant database tables. The SensorID scheme consists of three parts:
instrument (GSM90), serial number (12345), and a revision number (0002)
which might change depending on maintenance, calibration, etc. As
you can see in the example above, we separate data from different
instruments, which we recommend particularly for high resolution data,
as frequency and noise characteristics of sensor types will differ.

2.12.4 Reading data
^^^^^^^^^^^^^^^^^^^

To read data from an established database:

::

        data = mdb.readDB(db,'GSM90_12345_0002') 

Options such as ``starttime`` and ``endtime`` are similar to those of
the normal ``read`` method.
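
A sketch, assuming ``readDB`` accepts the same time-window options as
``read``:

::

        data = mdb.readDB(db,'GSM90_12345_0002',
                          starttime='2012-08-01',endtime='2012-08-02')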

2.12.5 Meta data
^^^^^^^^^^^^^^^^

An often used application of database connectivity with MagPy is to
apply meta-information stored in the database to data files before
submission. The following command demonstrates how to extract all
missing meta-information from the database for the selected sensor and
add it to the header dictionary of the data object.

::

        rawdata = read('/path/to/rawdata.bin')
        rawdata.header = mdb.dbfields2dict(db,'FGE_22222_0001')
        rawdata.write(..., format_type='IMAGCDF')

2.13 Monitoring scheduled scripts
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Automated analysis can be easily accomplished by adding a series of
MagPy commands into a script. A typical script could be:

::

        # read some data and get means
        data = read(example1)
        mean_f = data.mean('f')

        # import monitor method
        from magpy.opt import Analysismonitor
        analysisdict = Analysismonitor(logfile='/var/log/anamon.log')
        analysisdict = analysisdict.load()
        # check some arbitrary threshold
        analysisdict.check({'data_threshold_f_GSM90': [mean_f,'>',20000]})

If the provided criteria are not met, then the logfile is changed
accordingly. This method can assist you particularly in checking data
actuality, data contents, data validity, upload success, etc. In
combination with an independent monitoring tool like
`Nagios <https://www.nagios.org/>`__, you can easily create mail/SMS
notifications of such changes, in addition to monitoring processes, live
times, disks etc. `MARCOS <https://github.com/geomagpy/MARCOS>`__ comes
along with some instructions on how to use Nagios/MagPy for data
acquisition monitoring.

2.14 Data acquisition support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MagPy contains a couple of packages which can be used for data
acquisition, collection and organization. These methods are primarily
contained in two applications:
`MARTAS <https://github.com/geomagpy/MARTAS>`__ and
`MARCOS <https://github.com/geomagpy/MARCOS>`__. MARTAS (Magpy Automated
Realtime Acquisition System) supports communication with many common
instruments (e.g. GSM, LEMI, POS1, FGE, and many non-magnetic
instruments) and transfers serial port signals to
`WAMP <http://wamp-proto.org/>`__ (Web Application Messaging Protocol),
which allows for real-time data access using e.g. WebSocket
communication through the internet. MARCOS (Magpy's Automated Realtime
Collection and Organisation System) can access such real-time streams
and also data from many other sources and supports the observer by
storing, analyzing, archiving data, as well as monitoring all processes.
Details on these two applications can be found elsewhere.

2.15 Graphical user interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Many of the above mentioned methods are also available within the
graphical user interface of MagPy. To use it, check the installation
instructions for your operating system. You will find video tutorials
online (to be added) describing its usage for specific analyses.

2.16 Current developments
~~~~~~~~~~~~~~~~~~~~~~~~~

2.16.1 Exchange data objects with `ObsPy <https://github.com/obspy/obspy>`__
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

MagPy supports the exchange of data with ObsPy, the seismological
toolbox. Data objects of both python packages are very similar. Note:
ObsPy assumes regularly spaced time intervals. Please be careful if this
is not the case with your data. The example below shows a simple import
routine for reading a seed file and plotting a spectrogram (which you
can identically obtain from ObsPy as well). Conversions to MagPy allow
for vectorial analyses and geomagnetic applications. Conversions to
ObsPy are useful for effective high frequency analysis, requiring evenly
spaced time intervals, and for exporting to seismological data formats.

::

        from obspy import read as obsread
        seeddata = obsread('/path/to/seedfile')
        magpydata = obspy2magpy(seeddata,keydict={'ObsPyColName': 'x'})
        mp.plotSpectrogram(magpydata,['x'])

2.16.2 Flagging in ImagCDF
^^^^^^^^^^^^^^^^^^^^^^^^^^

::

        datawithspikes = read(example1)
        flaggeddata = datawithspikes.flag_outlier(keys=['f'],timerange=timedelta(minutes=1),threshold=3)
        mp.plot(flaggeddata,['f'],annotate=True)
        flaggeddata.write(tmpdir,format_type='IMAGCDF',addflags=True)

The ``addflags`` option denotes that flagging information will be added
to the ImagCDF format. Please note that this is still under development
and thus content and format specifications may change. So please use it
only for test purposes and not for archiving. To read and view flagged
ImagCDF data, just use the normal read command, and activate annotation
for plotting.

::

        new = read('/tmp/cnb_20120802_000000_PT1S_1.cdf')
        mp.plot(new,['f'],annotate=True)

3. Predefined scripts
---------------------

MagPy comes with a steadily increasing number of applications for
various purposes. These applications can be run from a command prompt
and allow you to simplify/automate some commonly used MagPy procedures.
All applications share the same syntax, consisting of the name of the
application and options. The option -h is available for all applications
and provides an overview of the purpose and options of the application:

::

        $> application -h

3.1 Running applications in Linux/MacOs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On Linux systems all applications are added to the bin directory and
can be run directly from any command interface/terminal after
installation of MagPy:

::

        $> application -h

3.2 Running applications in Windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After installing MagPy/GeomagPy on Windows, three executables are found
in the MagPy program folder. For running applications you have to start
the MagPy "command prompt". In this terminal you will have to go to the
Scripts directory:

::

        .../> cd Scripts

Here you can now run the application of your choice using the python
environment:

::

        .../Scripts>python application -h

3.3 Applications
~~~~~~~~~~~~~~~~

The available applications are briefly introduced in the following.
Please refer to "application -h" for all available options for each
application.

3.3.1 mpconvert
^^^^^^^^^^^^^^^

mpconvert converts between data formats based on MagPy. Typical
applications are the conversion of binary data formats to readable ASCII
data sets, or conversions between observatory exchange formats.

Typical applications include

a) Convert IAGA seconds to IMAGCDF and include obligatory meta
   information:

   ::

       mpconvert -r "/iagaseconds/wic201701*" -f IMAGCDF -c month -w "/tmp"
                    -m "DataStandardLevel:Full,IAGACode:WIC,DataReferences:myref"

b) Convert IMAGCDF seconds to IAF minute (using IAGA/IM filtering
   procedures):

   ::

       mpconvert -r "/imagcdf/wic_201701_000000_PT1S_4.cdf" -f IAF -i -w "/tmp"

For example, with a full path:

::

        mpconvert -r "/srv/products/data/magnetism/definitive/wic2017/ImagCDF/wic_201708_000000_PT1S_4.cdf"
                  -f IAF -i -w "/tmp"
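
The same conversion can also be scripted directly with the Python
package, using the ``read`` and ``write`` methods introduced in section
2.2. A minimal sketch of case (a); note that the header keys
``StationID`` and ``DataReferences`` are assumptions for illustration
and may differ from the keys mpconvert sets internally:

::

        from magpy.stream import read

        # read one month of IAGA-02 one-second files (wildcard as in case a)
        data = read('/iagaseconds/wic201701*')
        # add the obligatory meta information
        data.header['DataStandardLevel'] = 'Full'
        data.header['StationID'] = 'WIC'          # assumed key for the IAGA code
        data.header['DataReferences'] = 'myref'   # assumed key
        # write monthly ImagCDF files to /tmp
        data.write('/tmp', format_type='IMAGCDF', coverage='month')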

3.3.2 addcred
^^^^^^^^^^^^^

addcred stores encrypted credential information for automatic data
transfer, so that sensitive information does not have to be written in
plain text in scripts or cron jobs.

a) Add information for ftp data transfer. This information is encrypted
   and can be accessed by referring to the shortcut "zamg".

   ::

       addcred -t transfer -c zamg -u max -p geheim 
                 -a "ftp://ftp.remote.ac.at" -l 21


            

Raw data

            {
    "_id": null,
    "home_page": "http://pypi.python.org/pypi/geomagpy/",
    "name": "geomagpy",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "",
    "author": "R. Leonhardt, R. Bailey, M. Miklavec, J. Fee, H. Schovanec, S. Bracke",
    "author_email": "roman.leonhardt@zamg.ac.at",
    "download_url": "https://files.pythonhosted.org/packages/f2/73/a5af2ae5778d754f45be2405b571737550d45189f4cd700d28a664b49cb0/geomagpy-1.1.7.tar.gz",
    "platform": null,
    "description": "MagPy\n=====\n\n**MagPy (or GeomagPy) is a Python package for analysing and displaying\ngeomagnetic data.**\n\nVersion Info: (please note: this package is still in a development state\nwith frequent modifcations) please check the release notes.\n\nMagPy provides tools for geomagnetic data analysis with special focus on\ntypical data processing routines in observatories. MagPy provides\nmethods for data format conversion, plotting and mathematical procedures\nwith specifically geomagnetic analysis routines such as basevalue and\nbaseline calculation and database handling. Among the supported data\nformats are *ImagCDF, IAGA-02, WDC, IMF, IAF, BLV*, and many more. Full\ninstallation also provides a graphical user interface, *xmagpy*. You\nwill find a complete manual for *xmagpy* in the docs.\n\nTypical usage of the basic MagPy package for reading and visualising\ndata looks like this:\n\n::\n\n        #!/usr/bin/env python\n\n        from magpy.stream import read\n        import magpy.mpplot as mp\n        stream = read('filename_or_url')\n        mp.plot(stream)\n\nBelow you will find a quick guide to usage of the basic MagPy package.\nFor instructions on *xmagpy* please refer to the document \"`An\nintroduction to\nXMagPy <https://github.com/geomagpy/magpy/blob/master/magpy/doc/xmagpy-manual.pdf>`__\"\nin the docs. You can also subscribe to our information channel at\n`Telegram <https://t.me/geomagpy>`__ for further information on updates\nand current issues.\n\n1. INSTALLATION\n---------------\n\nPleas note that with the publication of MagPy 1.0 the recommended python\nenironment is >= 3.6. The following installation instructions will\nassume such an environment. Particularly if you are using Python2.7\nplease go to the end of this sections for help.\n\nThis section is currently updated and will be ready with the publication\nof MagPy 1.0.\n\n1.1 Linux installation (Ubuntu,Debian)\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n1.1.1 Complete Install\n^^^^^^^^^^^^^^^^^^^^^^\n\nTested for Ubuntu 18.04 and Debian Stretch (full installation with all\noptional packages). Please note that installation requires python 3.x.\n\n::\n\n        $ sudo pip3 install geomagpy    #Will install MagPy and all dependencies\n        $ sudo pip3 install wxpython    #Will install WX graphics system\n\nIf wxpython installation via pip3 fails you can try\n\n        $ sudo apt-get install python3-wxgtk4.0\n\nYou can now run XMagPy by using the following command\n\n::\n\n        $ xmagpy\n\n1.1.2 Updates\n^^^^^^^^^^^^^\n\nTo upgrade to the most recent version:\n\n::\n\n        $ sudo pip3 install -U geomagpy\n\n1.1.3 Creating a desktop link\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIn order to create a desktop link on linux systems please refer to\ninstruction too be found your distribution. 
For Ubunutu and other Debian\nsystems such links are created as follows:\n\nFirstly create a file \"xmagpy.desktop\" which contains:\n\n::\n\n        [Desktop Entry]\n        Type=Application\n        Name=XMagPy\n        GenericName=GeoMagPy User Interface\n        Exec=xmagpy\n        Icon=/usr/local/lib/python3.7/dist-packages/magpy/gui/magpy128.xpm\n        Terminal=false\n        Categories=Application;Development;\n\nThen copy this file to the systems application folder:\n\n::\n\n        sudo cp xmagpy.desktop /usr/share/applications/\n\n1.2 MacOs installation\n~~~~~~~~~~~~~~~~~~~~~~\n\n1.2.1 Install a python3 interpreter\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n-  we recommend\n   `Miniconda <https://docs.conda.io/en/latest/miniconda.html>`__ or\n   `Anaconda <https://www.continuum.io/downloads>`__\n-  see e.g. https://docs.continuum.io/anaconda/install for more details\n-  before continuiung, test whether python is working. Open a terminal\n   and run python\n\n1.2.2 Install MagPy\n^^^^^^^^^^^^^^^^^^^\n\nOpen a terminal and use the following commands:\n\n::\n\n        $ pip install geomagpy    #Will install MagPy and all dependencies\n        $ pip install wxpython    #Will install WX graphics system for XMagPy\n\nYou can now run XMagPy from the terminal by using the following command\n\n::\n\n        $ xmagpyw\n\n1.2.3 Creating a desktop link\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nOpen Finder and search for xmagpyw. Copy it to the desktop. To change\nthe icon, click on the xmagpyw link, open information and replace the\nimage on the upper left with e.g. magpy128.jpg (also to be found using\nfinder).\n\n1.3 Windows installation - WinPython Package\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n1.3.1 Install MagPy for Windows\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n-  get the `MagPy Windows\n   installer <https://cobs.zamg.ac.at/data/index.php/en/downloads/category/1-magnetism>`__\n   here (under Downloads): https://cobs.zamg.ac.at\n-  download and execute magpy-x.x.x.exe\n-  all required packages are included in the installer\n\n1.3.2 Post-installation information\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n-  MagPy will have a sub-folder in the Start menu. Here you will find\n   three items:\n\n   ::\n\n       * command -> opens a DOS shell within the Python environment e.g. for updates \n       * python  -> opens a python shell ready for MagPy\n       * xmagpy  -> opens the MagPy graphical user interface\n\n1.3.3 Update an existing MagPy installation on Windows\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n-  right-click on subfolder \"command\" in the start menu\n-  select \"run as administrator\"\n-  issue the following command \"pip install -U geomagpy\" (you can also\n   specify the version e.g. pip install geomagpy==0.x.x)\n\n1.3.4 Installation with user priviledges only\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n-  Download a most recent version of WinPython3.x\n-  Unpack in your home directory\n-  Go to the WinPython Folder and run WinPython command prompt\n-  issue the same commands as for MacOS installation\n-  to run XMagPy: use xmagpy from the WinPython command promt.\n\n1.4 Installation instructions for Python 2.7\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThe current version of magpy is still supporting python 2.7, although it\nis highly recommended to switch to python >= 3.6. Installation on python\n2.7 is more complex, as some packages for graphical user interface and\nCDF support not as well supported. 
Please note: None of the addtional\nsteps is necessary for python 3.x.\n\n1.4.1 Pre-installation work\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nGet a recent version of NasaCDF for your platform, enables CDF support\nfor formats like ImagCDF. Package details and files can be found at\nhttp://cdf.gsfc.nasa.gov/\n\nOn Linux such installation will look like\n(http://cdf.gsfc.nasa.gov/html/sw\\_and\\_docs.html)\n\n::\n\n        $ tar -zxvf cdf37_0-dist-all.tar.gz\n        $ cd cdf37...\n        $ make OS=linux ENV=gnu CURSES=yes FORTRAN=no UCOPTIONS=-O2 SHARED=yes all\n        $ sudo make INSTALLDIR=/usr/local/cdf install\n\nInstall the following additional compilers before continuing (required\nfor spacepy): Linux: install gcc MacOs: install gcc and gfortran\n\nInstall coordinate system transformation support:\n\n::\n\n        $ sudo apt-get install libproj-dev proj-data proj-bin\n\n1.4.2 Install MagPy and dependencies\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nOn Linux this will look like:\n\n::\n\n        $ sudo apt-get install python-matplotlib python-scipy python-h5py cython python-pip  \n        $ sudo apt-get install python-wxgtk3.0 # or python-wxgtk2.8 (Debian Stretch)  \n        $ sudo apt-get install python-twisted  \n        $ sudo pip install ffnet\n        $ sudo pip install pyproj==1.9.5\n        $ sudo pip install pyserial\n        $ sudo pip install service_identity\n        $ sudo pip install ownet\n        $ sudo pip install spacepy\n        $ sudo pip install geomagpy  \n\nOn Mac and Windows you need to download a python interpreter like\n`Anaconda <https://www.continuum.io/downloads>`__ or [WinPython] and\nthen install similar packages, particluarly the old wxpython 3.x.\n\n1.5 Platform independent container - Docker\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n1.5.1 Install `Docker <https://www.docker.com/>`__ (toolbox) on your operating system\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n     - https://docs.docker.com/engine/installation/\n\n1.5.2 Get the MagPy Image\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n     - open a docker shell\n\n            >>> docker pull geomagpy/magpy:latest\n            >>> docker run -d --name magpy -p 8000:8000 geomagpy/magpy:latest\n\n1.5.3 Open a browser\n^^^^^^^^^^^^^^^^^^^^\n\n::\n\n     - open address http://localhost:8000 (or http://\"IP of your VM\":8000)\n     - NEW: first time access might require a token or passwd\n\n            >>> docker logs magpy\n\n          will show the token \n     - run python shell (not conda) \n     - in python shell\n\n            >>> %matplotlib inline\n            >>> from magpy.stream import read\n            >>> ...\n\n1.6 Install from source\n~~~~~~~~~~~~~~~~~~~~~~~\n\nRequirements: - Python 2.7, 3.x (recommended is >=3.6)\n\nRecommended: - Python packages: \\* wxpython (for python2.7 it needs to\nbe 3.x or older) \\* NasaCDF (python 2.7 only) \\* SpacePy (python 2.7\nonly)\n\n-  Other useful Software:\n\n   -  pyproj (for geographic coordinate systems)\n   -  MySQL (database features)\n   -  Webserver (e.g. Apache2, PHP)\n\n      git clone git://github.com/GeomagPy/MagPy.git cd magpy\\* sudo\n      python setup.py install\n\n2. A quick guide to MagPy\n-------------------------\n\nwritten by R. Leonhardt, R. 
Bailey (April 2017)\n\nMagPy's functionality can be accessed basically in three different ways:\n1) Directly import and use the magpy package into a python environment\n2) Run the graphical user interface xmagpy (xmagpyw for Mac) 3) Use\npredefined applications \"Scripts\"\n\nThe following section will primarily deal with way 1. For 2 - xmagpy -\nwe refer to the video tutorials whcih can be found here: Section 3\ncontains examples for predefined applications/scripts\n\n2.1 Getting started with the python package\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nStart python. Import all stream methods and classes using:\n\n::\n\n    from magpy.stream import *\n\nPlease note that this import will shadow any already existing ``read``\nmethod.\n\n2.2 Reading and writing data\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nMagPy supports the following data formats and thus conversions between\nthem: - WDC: World Data Centre format - JSON: JavaScript Object Notation\n- IMF: Intermagnet Format - IAF: Intermagnet Archive Format - NEIC: WGET\ndata from USGS - NEIC - IAGA: IAGA 2002 text format - IMAGCDF:\nIntermagnet CDF Format - GFZKP: GeoForschungsZentrum KP-Index format -\nGSM19/GSM90: Output formats from GSM magnetometers - POS1: POS-1 binary\noutput - BLV: Baseline format Intermagnet - IYFV: Yearly mean format\nIntermagnet\n\n... and many others. To get a full list, use:\n\n::\n\n        from magpy.stream import *\n        print(PYMAG_SUPPORTED_FORMATS)\n\nYou will find several example files provided with MagPy. The ``cdf``\nfile is stored along with meta information in NASA's common data format\n(cdf). Reading this file requires a working installation of Spacepy cdf.\n\nIf you do not have any geomagnetic data file you can access example data\nby using the following command (after ``import *``):\n\n::\n\n        data = read(example1)\n        \n\nThe data from ``example1`` has been read into a MagPy *DataStream* (or\n*stream*) object. Most data processing routines in MagPy are applied to\ndata streams.\n\nSeveral example data sets are provided within the MagPy package:\n\n-  ``example1``: `IAGA <http://www.iaga-aiga.org/>`__ ZIP (IAGA2002, zip\n   compressed) file with 1 second HEZ data\n-  ``example2``: `MagPy <#magpy>`__ Archive (CDF) file with 1 sec F data\n-  ``example3``: `MagPy <#magpy>`__ Basevalue (TXT) ascii file with DI\n   and baseline data\n-  ``example4``: `INTERMAGNET <http://www.intermagnet.org>`__ ImagCDF\n   (CDF) file with one week of 1 second data\n-  ``example5``: `MagPy <#magpy>`__ Archive (CDF) raw data file with xyz\n   and supporting data\n-  ``example6a``: `MagPy <#magpy>`__ DI (txt) raw data file with DI\n   measurement\n-  ``example6b``: `MagPy <#magpy>`__ like 6a to be used with example4\n\n-  ``flagging_example``: `MagPy <#magpy>`__ FlagDictionary (JSON)\n   flagging info to be used with example1\n-  ``recipe1_flags``: `MagPy <#magpy>`__ FlagDictionary (JSON) to be\n   used with cookbook recipe 1\n\n2.2.1 Reading\n^^^^^^^^^^^^^\n\nFor a file in the same directory:\n\n::\n\n        data = read(r'myfile.min') \n\n... or for specific paths in Linux:\n\n::\n\n        data = read(r'/path/to/file/myfile.min') \n\n... or for specific paths in Windows:\n\n::\n\n        data = read(r'c:\\path\\to\\file\\myfile.min')\n\nPathnames are related to your operating system. In this guide we will\nassume a Linux system. Files that are read in are uploaded to the memory\nand each data column (or piece of header information) is assigned to an\ninternal variable (key). 
To get a quick overview of the assigned keys in\nany given stream (``data``) you can use the following method:\n\n::\n\n        print(data._get_key_headers() )\n\n2.2.2 Writing\n^^^^^^^^^^^^^\n\nAfter loading data from a file, we can save the data in the standard\nIAGA02 and IMAGCDF formats with the following commands.\n\nTo create an IAGA-02 format file, use:\n\n::\n\n        data.write(r'/path/to/diretory/',format_type='IAGA')\n\nTo create an `INTERMAGNET <http://www.intermagnet.org>`__ CDF (ImagCDF)\nfile:\n\n::\n\n        data.write(r'/path/to/diretory/',format_type='IMAGCDF')\n\nThe filename will be created automatically according to the defined\nformat. By default, daily files are created and the date is added to the\nfilename in-between the optional parameters ``filenamebegins`` and\n``filenameends``. If ``filenameends`` is missing, ``.txt`` is used as\ndefault.\n\n2.2.3 Other possibilities for reading files\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nTo read all local files ending with .min within a directory (creates a\nsingle stream of all data):\n\n::\n\n        data = read(r'/path/to/file/*.min')\n\nGetting magnetic data directly from an online source such as the WDC:\n\n::\n\n        data = read(r'ftp://thewellknownaddress/single_year/2011/fur2011.wdc')\n\nGetting *kp* data from the GFZ Potsdam:\n\n::\n\n        data = read(r'http://www-app3.gfz-potsdam.de/kp_index/qlyymm.tab')\n\n(Please note: data access and usage is subjected to the terms and\nconditions of the individual data provider. Please make sure to read\nthem before accessing any of these products.)\n\nNo format specifications are required for reading. If MagPy can handle\nthe format, it will be automatically recognized.\n\nGetting data for a specific time window for local files:\n\n::\n\n        data = read(r'/path/to/files/*.min',starttime=\"2014-01-01\", endtime=\"2014-05-01\")\n\n... and remote files:\n\n::\n\n        data = read(r'ftp://address/fur2013.wdc',starttime=\"2013-01-01\", endtime=\"2013-02-01\")\n\nReading data from the INTERMAGNET Webservice (starting soon):\n\n::\n\n        data = read('http://www.intermagnet.org/test/ws/?id=WIC')\n\n2.2.4 Selecting timerange\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe stream can be trimmed to a specific time interval after reading by\napplying the trim method, e.g. for a specific month:\n\n::\n\n        data = data.trim(starttime=\"2013-01-01\", endtime=\"2013-02-01\")\n\n2.3 Getting help on options and usage\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n2.3.1 Python's help function\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nInformation on individual methods and options can be obtained as\nfollows:\n\nFor basic functions:\n\n::\n\n        help(read)\n\nFor specific methods related to e.g. a stream object \"data\":\n\n::\n\n        help(data.fit)\n\nNote that this requires the existence of a \"data\" object, which is\nobtained e.g. by data = read(...). The help text can also be shown by\ndirectly calling the *DataStream* object method using:\n\n::\n\n        help(DataStream.fit)\n\n2.3.2 MagPy's logging system\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nMagPy automatically logs many function options and runtime information,\nwhich can be useful for debugging purposes. This log is saved by default\nin the temporary file directory of your operating system, e.g. for Linux\nthis would be ``/tmp/magpy.log``. 
The log is formatted as follows with\nthe date, module and function in use and the message leve\n(INFO/WARNING/ERROR):\n\n::\n\n        2017-04-22 09:50:11,308 INFO - magpy.stream - Initiating MagPy...\n\nMessages on the WARNING and ERROR level will automatically be printed to\nshell. Messages for more detailed debugging are written at the DEBUG\nlevel and will not be printed to the log unless an additional handler\nfor printing DEBUG is added.\n\nCustom loggers can be defined by creating a logger object after\nimporting MagPy and adding handlers (with formatting):\n\n::\n\n        from magpy.stream import *\n        import logging\n        \n        logger = logging.getLogger()\n        hdlr = logging.FileHandler('testlog.log')\n        formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')\n        hdlr.setFormatter(formatter)\n        logger.addHandler(hdlr)\n        \n\nThe logger can also be configured to print to shell (stdout, without\nformatting):\n\n::\n\n        import sys\n        logger = logging.getLogger()\n        stdoutlog = logging.StreamHandler(sys.stdout)\n        logger.addHandler(stdoutlog)\n\n2.4 Plotting\n~~~~~~~~~~~~\n\nYou will find some example plots at the `Conrad\nObservatory <http://www.conrad-observatory.at>`__.\n\n2.4.1 Quick (and not dirty)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n        import magpy.mpplot as mp\n        mp.plot(data)\n\n2.4.2 Some options\n^^^^^^^^^^^^^^^^^^\n\nSelect specific keys to plot:\n\n::\n\n        mp.plot(data,variables=['x','y','z'])\n        \n\nDefining a plot title and specific colors (see ``help(mp.plot)`` for\nlist and all options):\n\n::\n\n        mp.plot(data,variables=['x','y'],plottitle=\"Test plot\",\n                colorlist=['g', 'c'])\n\n2.4.3 Data from multiple streams\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nVarious datasets from multiple data streams will be plotted above one\nanother. Provide a list of streams and an array of keys:\n\n::\n\n        mp.plotStreams([data1,data2],[['x','y','z'],['f']])\n\n2.5 Flagging data\n~~~~~~~~~~~~~~~~~\n\nThe flagging procedure allows the observer to mark specific data points\nor ranges. Falgs are useful for labelling data spikes, storm onsets,\npulsations, disturbances, lightning strikes, etc. Each flag is asociated\nwith a comment and a type number. The flagtype number ranges between 0\nand 4:\n\n-  0: normal data with comment (e.g. \"Hello World\")\n-  1: data marked by automated analysis (e.g. spike)\n-  2: data marked by observer as valid geomagnetic signature (e.g. storm\n   onset, pulsation). Such data cannot be marked invalid by automated\n   procedures\n-  3: data marked by observer as invalid (e.g. lightning, magnetic\n   disturbance)\n-  4: merged data (e.g. data inserted from another source/instrument as\n   defined in the comment)\n\nFlags can be stored along with the data set (requires CDF format output)\nor separately in a binary archive. 
These flags can then be applied to\nthe raw data again, ascertaining perfect reproducibility.\n\n2.5.1 Mark data spikes\n^^^^^^^^^^^^^^^^^^^^^^\n\nLoad a data record with data spikes:\n\n::\n\n        datawithspikes = read(example1)\n\nMark all spikes using the automated function ``flag_outlier`` with\ndefault options:\n\n::\n\n        flaggeddata = datawithspikes.flag_outlier(timerange=timedelta(minutes=1),threshold=3)\n\nShow flagged data in a plot:\n\n::\n\n        mp.plot(flaggeddata,['f'],annotate=True)\n\n2.5.2 Flag time range\n^^^^^^^^^^^^^^^^^^^^^\n\nFlag a certain time range:\n\n::\n\n        flaglist = flaggeddata.flag_range(keys=['f'], starttime='2012-08-02T04:33:40', \n                                          endtime='2012-08-02T04:44:10', \n                                          flagnum=3, text=\"iron metal near sensor\")\n\nApply these flags to the data:\n\n::\n\n        flaggeddata = flaggeddata.flag(flaglist)\n\nShow flagged data in a plot:\n\n::\n\n        mp.plot(flaggeddata,['f'],annotate=True)\n\n2.5.3 Save flagged data\n^^^^^^^^^^^^^^^^^^^^^^^\n\nTo save the data together with the list of flags to a CDF file:\n\n::\n\n        flaggeddata.write('/tmp/',filenamebegins='MyFlaggedExample_', format_type='PYCDF')\n\nTo check for correct save procedure, read and plot the new file:\n\n::\n\n        newdata = read(\"/tmp/MyFlaggedExample_*\")\n        mp.plot(newdata,annotate=True, plottitle='Reloaded flagged CDF data')\n\n2.5.4 Save flags separately\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nTo save the list of flags seperately from the data in a pickled binary\nfile:\n\n::\n\n        fullflaglist = flaggeddata.extractflags()\n        saveflags(fullflaglist,\"/tmp/MyFlagList.pkl\"))\n\nThese flags can be loaded in and then reapplied to the data set:\n\n::\n\n        data = read(example1)\n        flaglist = loadflags(\"/tmp/MyFlagList.pkl\")\n        data = data.flag(flaglist)\n        mp.plot(data,annotate=True, plottitle='Raw data with flags from file')\n\n2.5.5 Drop flagged data\n^^^^^^^^^^^^^^^^^^^^^^^\n\nFor some analyses it is necessary to use \"clean\" data, which can be\nproduced by dropping data flagged as invalid (e.g. spikes). 
By default,\nthe following method removes all data marked with flagtype numbers 1 and\n3.\n\n::\n\n        cleandata = flaggeddata.remove_flagged()\n        mp.plot(cleandata, ['f'], plottitle='Flagged data dropped')\n\n2.6 Basic methods\n~~~~~~~~~~~~~~~~~\n\n2.6.1 Filtering\n^^^^^^^^^^^^^^^\n\nMagPy's ``filter`` uses the settings recommended by\n`IAGA <http://www.iaga-aiga.org/>`__/`INTERMAGNET <http://www.intermagnet.org>`__.\nCkeck ``help(data.filter)`` for further options and definitions of\nfilter types and pass bands.\n\nFirst, get the sampling rate before filtering in seconds:\n\n::\n\n        print(\"Sampling rate before [sec]:\", cleandata.samplingrate())\n\nFilter the data set with default parameters (``filter`` automatically\nchooses the correct settings depending on the provided sanmpling rate):\n\n::\n\n        filtereddata = cleandata.filter()\n\nGet sampling rate and filtered data after filtering (please note that\nall filter information is added to the data's meta information\ndictionary (data.header):\n\n::\n\n        print(\"Sampling rate after [sec]:\", filtereddata.samplingrate())\n        print(\"Filter and pass band:\", filtereddata.header.get('DataSamplingFilter',''))\n\n2.6.2 Coordinate transformation\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nAssuming vector data in columns [x,y,z] you can freely convert between\nxyz, hdz, and idf coordinates:\n\n::\n\n        cleandata = cleandata.xyz2hdz()\n\n2.6.3 Calculate delta F\n^^^^^^^^^^^^^^^^^^^^^^^\n\nIf the data file contains xyz (hdz, idf) data and an independently\nmeasured f value, you can calculate delta F between the two instruments\nusing the following:\n\n::\n\n        cleandata = cleandata.delta_f()\n        mp.plot(cleandata,plottitle='delta F')\n\n2.6.4 Calculate Means\n^^^^^^^^^^^^^^^^^^^^^\n\nMean values for certain data columns can be obtained using the ``mean``\nmethod. The mean will only be calculated for data with the percentage of\nvalid data (in contrast to missing data) points not falling below the\nvalue given by the percentage option (default 95). 
If too much data is\nmissing, then no mean is calulated and the function returns NaN.\n\n::\n\n        print(cleandata.mean('df', percentage=80))\n        \n\nThe median can be calculated by defining the ``meanfunction`` option:\n\n::\n\n        print(cleandata.mean('df', meanfunction='median'))\n\n2.6.5 Applying offsets\n^^^^^^^^^^^^^^^^^^^^^^\n\nConstant offsets can be added to individual columns using the ``offset``\nmethod with a dictionary defining the MagPy stream column keys and the\noffset to be applied (datetime.timedelta object for time column, float\nfor all others):\n\n::\n\n        offsetdata = cleandata.offset({'time':timedelta(seconds=0.19),'f':1.24})\n\n2.6.6 Scaling data\n^^^^^^^^^^^^^^^^^^\n\nIndividual columns can also be multiplied by values provided in a\ndictionary:\n\n::\n\n        multdata = cleandata.multiply({'x':-1})\n\n2.6.7 Fit functions\n^^^^^^^^^^^^^^^^^^^\n\nMagPy offers the possibility to fit functions to data using either\npolynomial functions or cubic splines (default):\n\n::\n\n        func = cleandata.fit(keys=['x','y','z'],knotstep=0.1)\n        mp.plot(cleandata,variables=['x','y','z'],function=func)\n\n2.6.8 Derivatives\n^^^^^^^^^^^^^^^^^\n\nTime derivatives, which are useful to identify outliers and sharp\nchanges, are calculated as follows:\n\n::\n\n        diffdata = cleandata.differentiate(keys=['x','y','z'],put2keys = ['dx','dy','dz'])\n        mp.plot(diffdata,variables=['dx','dy','dz'])\n\n2.6.9 All methods at a glance\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nFor a summary of all supported methods, see the section **List of all\nMagPy methods** below.\n\n2.7 Geomagnetic analysis\n~~~~~~~~~~~~~~~~~~~~~~~~\n\n2.7.1 Determination of K indices\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nMagPy supports the FMI method for determination of K indices. Please\nconsult the MagPy publication for details on this method and\napplication.\n\nA month of one minute data is provided in ``example2``, which\ncorresponds to an `INTERMAGNET <http://www.intermagnet.org>`__ IAF\narchive file. Reading a file in this format will load one minute data by\ndefault. Accessing hourly data and other information is described below.\n\n::\n\n        data2 = read(example2)\n        kvals = data2.k_fmi()\n\nThe determination of K values will take some time as the filtering\nwindow is dynamically adjusted. In order to plot the original data (H\ncomponent) and K values together, we now use the multiple stream\nplotting method ``plotStreams``. Here you need to provide a list of\nstreams and an array containing variables for each stream. The\nadditional options determine the appearance of the plot (limits, bar\nchart):\n\n::\n\n        mp.plotStreams([data2,kvals],[['x'],['var1']],\n                       specialdict = [{},{'var1':[0,9]}],\n                       symbollist=['-','z'],\n                       bartrange=0.06)\n        \n\n``'z'`` in ``symbollist`` refers to the second subplot (K), which should\nbe plotted as bars rather than the standard line (``'-'``).\n\n2.7.2 Automated geomagnetic storm detection\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nGeomagnetic storm detection is supported by MagPy using two procedures\nbased on wavelets and the Akaike Information Criterion (AIC) as outlined\nin detail in Bailey and Leonhardt (2016). 
A basic example of usage to\nfind an SSC using a Discrete Wavelet Transform (DWT) is shown below:\n\n::\n\n        from magpy.stream import read\n        from magpy.opt.stormdet import seekStorm\n        stormdata = read(\"LEMI025_2015-03-17.cdf\")      # 1s variometer data\n        stormdata = stormdata.xyz2hdz()\n        stormdata = stormdata.smooth('x', window_len=25)\n        detection, ssc_list = seekStorm(stormdata, method=\"MODWT\")\n        print(\"Possible SSCs detected:\", ssc_list)\n        \n\nThe method ``seekStorm`` will return two variables: ``detection`` is\nTrue if any detection was made, while ``ssc_list`` is a list of\ndictionaries containing data on each detection. Note that this method\nalone can return a long list of possible SSCs (most incorrectly\ndetected), particularly during active storm times. It is most useful\nwhen additional restrictions based on satellite solar wind data apply\n(currently only optimised for ACE data, e.g. from the NOAA website):\n\n::\n\n        satdata_ace_1m = read('20150317_ace_swepam_1m.txt')\n        satdata_ace_5m = read('20150317_ace_epam_5m.txt')\n        detection, ssc_list, sat_cme_list = seekStorm(stormdata,\n                    satdata_1m=satdata_ace_1m, satdata_5m=satdata_ace_5m,\n                    method='MODWT', returnsat=True)\n        print(\"Possible CMEs detected:\", sat_cme_list)\n        print(\"Possible SSCs detected:\", ssc_list)\n\n2.7.3 Sq analysis\n^^^^^^^^^^^^^^^^^\n\nMethods are currently in preparation.\n\n2.7.4 Validity check of data\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nA common and important application used in the geomagnetism community is\na general validity check of geomagnetic data to be submitted to the\nofficial data repositories `IAGA <http://www.iaga-aiga.org/>`__, WDC, or\n`INTERMAGNET <http://www.intermagnet.org>`__. Please note: this is\ncurrently under development and will be extended in the near future. A\n'one-click' test method will be included in xmagpy in the future,\nchecking:\n\nA) Validity of data formats, e.g.:\n\n   ::\n\n       data = read('myiaffile.bin', debug=True) \n\nB) Completeness of meta-information\n\nC) Conformity of applied techniques to respective rules\n\nD) Internal consistency of data\n\nE) Optional: regional consistency\n\n2.7.5 Spectral Analysis and Noise\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nFor analysis of the spectral content of data, MagPy provides two basic\nplotting methods. ``plotPS`` will calculate and display a power spectrum\nof the selected component. ``plotSpectrogram`` will plot a spectrogram\nof the time series. As usual, there are many options for plot window and\nprocessing parameters that can be accessed using the help method.\n\n::\n\n        data = read(example1)\n        mp.plotPS(data,key='f')\n        mp.plotSpectrogram(data,['f'])\n\n2.8 Handling multiple streams\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n2.8.1 Merging streams\n^^^^^^^^^^^^^^^^^^^^^\n\nMerging data comprises combining two streams into one new stream. This\nincludes adding a new column from another stream, filling gaps with data\nfrom another stream or replacing data from one column with data from\nanother stream. 
The following example sketches the typical usage:\n\n::\n\n        print(\"Data columns in data2:\", data2._get_key_headers())\n        newstream = mergeStreams(data2,kvals,keys=['var1'])\n        print(\"Data columns after merging:\", data2._get_key_headers())\n        mp.plot(newstream, ['x','y','z','var1'],symbollist=['-','-','-','z'])\n\nIf column ``var1`` does not existing in data2 (as above), then this\ncolumn is added. If column ``var1`` had already existed, then missing\ndata would be inserted from stream ``kvals``. In order to replace any\nexisting data, use option ``mode='replace'``.\n\n2.8.2 Differences between streams\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nSometimes it is necessary to examine the differences between two data\nstreams e.g. differences between the F values of two instruments running\nin parallel at an observatory. The method ``subtractStreams`` is\nprovided for this analysis:\n\n::\n\n        diff = subtractStreams(data1,data2,keys=['f'])\n\n2.9 The art of meta-information\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nEach data set is accompanied by a dictionary containing meta-information\nfor this data. This dictionary is completely dynamic and can be filled\nfreely, but there are a number of predefined fields that help the user\nprovide essential meta-information as requested by\n`IAGA <http://www.iaga-aiga.org/>`__,\n`INTERMAGNET <http://www.intermagnet.org>`__ and other data providers.\nAll meta information is saved only to MagPy-specific archive formats\nPYCDF and PYSTR. All other export formats save only specific information\nas required by the projected format.\n\nThe current content of this dictionary can be accessed by:\n\n::\n\n        data = read(example1)\n        print(data.header)\n\nInformation is added/changed by using:\n\n::\n\n        data.header['SensorName'] = 'FGE'\n\nIndividual information is obtained from the dictionary using standard\nkey input:\n\n::\n\n        print(data.header.get('SensorName'))\n\nIf you want to have a more readable list of the header information, do:\n\n::\n\n        for key in data.header:\n            print (\"Key: {} \\t Content: {}\".format(key,data.header.get(key)))\n\n2.9.1 Conversion to ImagCDF - Adding meta-information\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nTo convert data from `IAGA <http://www.iaga-aiga.org/>`__ or IAF formats\nto the new `INTERMAGNET <http://www.intermagnet.org>`__ CDF format, you\nwill usually need to add additional meta-information required for the\nnew format. MagPy can assist you here, firstly by extracting and\ncorrectly adding already existing meta-information into newly defined\nfields, and secondly by informing you of which information needs to be\nadded for producing the correct output format.\n\nExample of IAGA02 to ImagCDF:\n\n::\n\n        mydata = read('IAGA02-file.min')\n        mydata.write('/tmp',format_type='IMAGCDF')\n\nThe console output of the write command (see below) will tell you which\ninformation needs to be added (and how) in order to obtain correct\nImagCDF files. Please note, MagPy will store the data in any case and\nwill be able to read it again even if information is missing. Before\nsubmitting to a GIN, you need to make sure that the appropriate\ninformation is contained. 
Attributes that relate to publication of the\ndata will not be checked at this point, and might be included later.\n\n::\n\n        >>>Writing IMAGCDF Format /tmp/wic_20150828_0000_PT1M_4.cdf\n        >>>writeIMAGCDF: StandardLevel not defined - please specify by yourdata.header['DataStandardLevel'] = ['None','Partial','Full']\n        >>>writeIMAGCDF: Found F column\n        >>>writeIMAGCDF: given components are XYZF. Checking F column...\n        >>>writeIMAGCDF: analyzed F column - values are apparently independend from vector components - using column name 'S'\n\nNow add the missing information. Selecting 'Partial' will require\nadditional information. You will get a 'reminder' if you forget this.\nPlease check IMAGCDF instructions on specific codes:\n\n::\n\n        mydata.header['DataStandardLevel'] = 'Partial'\n        mydata.header['DataPartialStandDesc'] = 'IMOS-01,IMOS-02,IMOS-03,IMOS-04,IMOS-05,IMOS-06,IMOS-11,IMOS-12,IMOS-13,IMOS-14,IMOS-15,IMOS-21,IMOS-22,IMOS-31,IMOS-41'\n\nSimilar reminders to fill out complete header information will be shown\nfor other conversions like:\n\n::\n\n        mydata.write('/tmp',format_type='IAGA')\n        mydata.write('/tmp',format_type='IMF')\n        mydata.write('/tmp',format_type='IAF',coverage='month')\n        mydata.write('/tmp',format_type='WDC')\n\n2.9.2 Providing location data\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nProviding location data usually requires information on the reference\nsystem (ellipsoid,...). By default MagPy assumes that these values are\nprovided in WGS84/WGS84 reference system. In order to facilitate most\neasy referencing and conversions, MagPy supports\n`EPSG <https://www.epsg-registry.org/>`__ codes for coordinates. If you\nprovide the geodetic references as follows, and provided that the\n`proj4 <https://github.com/OSGeo/proj.4>`__ Python package is available,\nMagPy will automatically convert location data to the requested output\nformat (currently WGS84).\n\n::\n\n        mydata.header['DataAcquisitionLongitude'] = -34949.9\n        mydata.header['DataAcquisitionLatitude'] = 310087.0\n        mydata.header['DataLocationReference'] = 'GK M34, EPSG: 31253'\n\n        >>>...\n        >>>writeIMAGCDF: converting coordinates to epsg 4326\n        >>>...\n\n2.9.3 Special meta-information fields\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe meta-information fields can hold much more information than required\nby most output formats. This includes basevalue and baseline parameters,\nflagging details, detailed sensor information, serial numbers and much\nmore. MagPy makes use of these possibilities. In order to save this\nmeta-information along with your data set you can use MagPy internal\narchiving format, ``PYCDF``, which can later be converted to any of the\naforementioned output formats. You can even reconstruct a full data\nbase. Any upcoming meta-information or output request can be easily\nadded/modified without disrupting already existing data sets and the\nability to read and analyse old data. This data format is also based on\nNasa CDF. ASCII outputs are also supported by MagPy, of which the\n``PYSTR`` format also contains all meta information and ``PYASCII`` is\nthe most compact. Please consider that ASCII formats require a lot of\nmemory, especially for one second and higher resolution data.\n\n::\n\n        mydata.write('/tmp',format_type='PYCDF',coverage='year')\n\n2.10 Data transfer\n~~~~~~~~~~~~~~~~~~\n\nMagPy contains a number of methods to simplify data transfer for\nobservatory applications. 
Methods within the basic Python functionality\ncan also be very useful. Using the implemented methods requires:\n\n::\n\n        from magpy import transfer as mt\n\n2.10.1 Downloads\n^^^^^^^^^^^^^^^^\n\nUse the ``read`` method as outlined above. No additional imports are\nrequired.\n\n2.10.2 FTP upload\n^^^^^^^^^^^^^^^^^\n\nFiles can also be uploaded to an FTP server:\n\n::\n\n        mt.ftpdatatransfer(localfile='/path/to/data.cdf',ftppath='/remote/directory/',myproxy='ftpaddress or address of proxy',port=21,login='user',passwd='passwd',logfile='/path/mylog.log')\n        \n\nThe upload methods using FTP, SCP and GIN support logging. If the data\nfile failed to upload correctly, the path is added to a log file and,\nwhen called again, upload of the file is retried. This option is useful\nfor remote locations with unstable network connections.\n\n2.10.3 Secure communication protocol (SCP)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nTo transfer via SCP:\n\n::\n\n        mt.scptransfer('user@address:/remote/directory/','/path/to/data.cdf',passwd,timeout=60)\n\n2.10.4 Upload data to GIN\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nUse the following command:\n\n::\n\n        mt.ginupload('/path/to/data.cdf', ginuser, ginpasswd, ginaddress, faillog=True, stdout=True)\n\n2.10.5 Avoiding real-text passwords in scripts\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nIn order to avoid using real-text password in scripts, MagPy comes along\nwith a simple encryption routine.\n\n::\n\n        from magpy.opt import cred as mpcred\n\nCredentials will be saved to a hidden file with encrypted passwords. To\nadd information for data transfer to a machine called 'MyRemoteFTP' with\nan IP of 192.168.0.99:\n\n::\n\n        mpcred.cc('transfer', 'MyRemoteFTP', user='user', passwd='secure', address='192.168.0.99', port=21)\n\nExtracting passwd information within your data transfer scripts:\n\n::\n\n        user = mpcred.lc('MyRemoteFTP', 'user')\n        password = mpcred.lc('MyRemoteFTP','passwd')\n\n2.11 DI measurements, basevalues and baselines\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nThese procedures require an additional import:\n\n::\n\n        from magpy import absolutes as di\n\n2.11.1 Data structure of DI measurements\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nPlease check ``example3``, which is an example DI file. You can create\nthese DI files by using the input sheet from xmagpy or the online input\nsheet provided by the Conrad Observatory. If you want to use this\nservice, please contact the Observatory staff. Also supported are\nDI-files from the AUTODIF.\n\n2.11.2 Reading DI data\n^^^^^^^^^^^^^^^^^^^^^^\n\nReading and analyzing DI data requires valid DI file(s). For correct\nanalysis, variometer data and scalar field information needs to be\nprovided as well. Checkout ``help(di.absoluteAnalysis)`` for all\noptions. The analytical procedures are outlined in detail in the MagPy\narticle (citation). A typical analysis looks like:\n\n::\n\n        diresult = di.absoluteAnalysis('/path/to/DI/','path/to/vario/','path/to/scalar/')\n\nPath to DI can either point to a single file, a directory or even use\nwildcards to select data from a specific observatory/pillar. 
Using the\nexamples provided along with MagPy, the analysis line looks like\n\n::\n\n        diresult = di.absoluteAnalysis(example3,example2,example2)\n\nCalling this method will provide terminal output as follows and a stream\nobject ``diresult`` which can be used for further analyses.\n\n::\n\n        >>>...\n        >>>Analyzing manual measurement from 2015-03-25\n        >>>Vector at: 2015-03-25 08:18:00+00:00\n        >>>Declination: 3:53:46, Inclination: 64:17:17, H: 21027.2, Z: 43667.9, F: 48466.7\n        >>>Collimation and Offset:\n        >>>Declination:    S0: -3.081, delta H: -6.492, epsilon Z: -61.730\n        >>>Inclination:    S0: -1.531, epsilon Z: -60.307\n        >>>Scalevalue: 1.009 deg/unit\n        >>>Fext with delta F of 0.0 nT\n        >>>Delta D: 0.0, delta I: 0.0\n\nFext indicates that F values have been used from a separate file and not\nprovided along with DI data. Delta values for F, D, and I have not been\nprovided either. ``diresult`` is a stream object containing average D, I\nand F values, the collimation angles, scale factors and the base values\nfor the selected variometer, beside some additional meta information\nprovided in the data input form.\n\n2.11.3 Reading BLV files\n^^^^^^^^^^^^^^^^^^^^^^^^\n\nBasevalues:\n\n::\n\n        blvdata = read('/path/myfile.blv')\n        mp.plot(blvdata, symbollist=['o','o','o'])\n\nAdopted baseline:\n\n::\n\n        bldata = read('/path/myfile.blv',mode='adopted')\n        mp.plot(bldata)\n\n2.11.4 Basevalues and baselines\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nBasevalues as obtained in (2.11.2) or (2.11.3) are stored in a normal\ndata stream object, therefore all analysis methods outlined above can be\napplied to this data. The ``diresult`` object contains D, I, and F\nvalues for each measurement in columns x,y,z. Basevalues for H, D and Z\nrelated to the selected variometer are stored in columns dx,dy,dz. In\n``example4``, you will find some more DI analysis results. To plot these\nbasevalues we can use the following plot command, where we specify the\ncolumns, filled circles as plotsymbols and also define a minimum spread\nof each y-axis of +/- 5 nT for H and Z, +/- 0.05 deg for D.\n\n::\n\n        basevalues = read(example4)\n        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5])\n\nFitting a baseline can be easily accomplished with the ``fit`` method.\nFirst we test a linear fit to the data by fitting a polynomial function\nwith degree 1.\n\n::\n\n        func = basevalues.fit(['dx','dy','dz'],fitfunc='poly', fitdegree=1)\n        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5], function=func)\n\nWe then fit a spline function using 3 knotsteps over the timerange (the\nknotstep option is always related to the given timerange).\n\n::\n\n        func = basevalues.fit(['dx','dy','dz'],fitfunc='spline', knotstep=0.33)\n        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5], function=func)\n\nHint: a good estimate on the necessary fit complexity can be obtained by\nlooking at delta F values. If delta F is mostly constant, then the\nbaseline should also not be very complex.\n\n2.11.5 Applying baselines\n^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe baseline method provides a number of options to assist the observer\nin determining baseline corrections and realted issues. The basic\nbuilding block of the baseline method is the fit function as discussed\nabove. 
Lets first load raw vectorial geomagnetic data, the absevalues of\nwhich are contained in above example:\n\n::\n\n        rawdata = read(example5)\n\nNow we can apply the basevalue information and the spline function as\ntested above:\n\n::\n\n        func = rawdata.baseline(basevalues, extradays=0, fitfunc='spline',\n                                knotstep=0.33,startabs='2015-09-01',endabs='2016-01-22')\n\nThe ``baseline`` method will determine and return a fit function between\nthe two given timeranges based on the provided basevalue data\n``blvdata``. The option ``extradays`` allows for adding days before and\nafter start/endtime for which the baseline function will be\nextrapolated. This option is useful for providing quasi-definitive data.\nWhen applying this method, a number of new meta-information attributes\nwill be added, containing basevalues and all functional parameters to\ndescribe the baseline. Thus, the stream object still contains\nuncorrected raw data, but all baseline correction information is now\ncontained within its meta data. To apply baseline correction you can use\nthe ``bc`` method:\n\n::\n\n        corrdata = rawdata.bc()\n\nIf baseline jumps/breaks are necessary due to missing data, you can call\nthe baseline function for each independent segment and combine the\nresulting baseline functions to a list:\n\n::\n\n        stream = read(mydata,starttime='2016-01-01',endtime='2016-03-01')\n        basevalues = read(mybasevalues)\n        adoptedbasefunc = []\n        adoptedbasefunc.append(stream.baseline(basevalues, extradays=0, fitfunc='poly', fitdegree=1,startabs='2016-01-01',endabs='2016-02-01')\n        adoptedbasefunc.append(stream.baseline(basevalues, extradays=0, fitfunc='spline', knotstep=0.33,startabs='2016-01-02',endabs='2016-01-03')\n\n        corr = stream.bc()\n\nThe combined baseline can be plotted accordingly. Extend the function\nparameters with each additional segment.\n\n::\n\n        mp.plot(basevalues, variables=['dx','dy','dz'], symbollist=['o','o','o'], padding=[5,0.05,5], function=adoptedbasefunc)\n\nAdding a baseline for scalar data, which is determined from the delta F\nvalues provided within the basevalue data stream:\n\n::\n\n        scalarbasefunc = []\n        scalarbasefunc.append(basevalues.baseline(basevalues, keys=['df'], extradays=0, fitfunc='poly', fitdegree=1,startabs='2016-01-01',endabs='2016-03-01'))\n        plotfunc = adoptedbasefunc\n        plotfunc.extend(scalarbasefunc)\n        mp.plot(basevalues, variables=['dx','dy','dz','df'], symbollist=['o','o','o','o'], padding=[5,0.05,5,5], function=plotfunc)\n\nGetting dailymeans and correction for scalar baseline can be acomplished\nby:\n\n::\n\n        meanstream = stream.dailymeans()\n        meanstream = meanstream.func2stream(scalarbasefunc,mode='sub',keys=['f'],fkeys=['df'])\n        meanstream = meanstream.delta_f()\n\nPlease note that here the function originally determined from the deltaF\n(df) values of the basevalue data needs to be applied to the F column\n(f) from the data stream. 
Before saving we will also extract the\nbaseline parameters from the meta information, which is automatically\ngenerated by the ``baseline`` method.\n\n::\n\n        absinfo = stream.header.get('DataAbsInfo','')\n        fabsinfo = basevalues.header.get('DataAbsInfo','')\n\n2.11.6 Saving basevalue and baseline information\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nThe following will create a BLV file:\n\n::\n\n        basevalues.write('/my/path', coverage='all', format_type='BLV', diff=meanstream, year='2016', absinfo=absinfo, deltaF=fabsinfo)\n\nInformation on the adopted baselines will be extracted from option\n``absinfo``. If several functions are provided, baseline jumps will be\nautomatically inserted into the BLV data file. The output of adopted\nscalar baselines is configured by option ``deltaF``. If a number is\nprovided, this value is assumed to represent the adopted scalar\nbaseline. If either 'mean' or 'median' are given (e.g.\n``deltaF='mean'``), then the mean/median value of all delta F values in\nthe ``basevalues`` stream is used, requiring that such data is\ncontained. Providing functional parameters as stored in a\n``DataAbsInfo`` meta information field, as shown above, will calculate\nand use the scalar baseline function. The ``meanstream`` stream contains\ndaily averages of delta F values between variometer and F measurements\nand the baseline adoption data in the meta-information. You can,\nhowever, provide all this information manually as well. The typical way\nto obtain such a ``meanstream`` is sketched above.\n\n2.12 Database support\n~~~~~~~~~~~~~~~~~~~~~\n\nMagPy supports database access and many methods for optimizing data\ntreatment in connection with databases. Among many other benefits, using\na database simplifies many typical procedures related to\nmeta-information. Currently, MagPy supports\n`MySQL <https://www.mysql.com/>`__ databases. To use these features, you\nneed to have MySQL installed on your system. In the following we provide\na brief outline of how to set up and use this optional addition. Please\nnote that a proper usage of the database requires sensor-specific\ninformation. In geomagnetism, it is common to combine data from\ndifferent sensors into one file structure. In this case, such data needs\nto remain separate for database usage and is only combined when\nproducing\n`IAGA <http://www.iaga-aiga.org/>`__/`INTERMAGNET <http://www.intermagnet.org>`__\ndefinitive data. Furthermore, unique sensor information such as type and\nserial number is required.\n\n::\n\n        import magpy import database as mdb\n\n2.12.1 Setting up a MagPy database (using MySQL)\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nOpen mysql (e.g. Linux: ``mysql -u root -p mysql``) and create a new\ndatabase. Replace ``#DB-NAME`` with your database name (e.g. ``MyDB``).\nAfter creation, you will need to grant priviledges to this database to a\nuser of your choice. 
Please refer to official MySQL documentations for\ndetails and further commands.\n\n::\n\n         mysql> CREATE DATABASE #DB-NAME; \n         mysql> GRANT ALL PRIVILEGES ON #DB-NAME.* TO '#USERNAME'@'%' IDENTIFIED BY '#PASSWORD';\n\n2.12.2 Initializing a MagPy database\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nConnecting to a database using MagPy is done using following command:\n\n::\n\n        db = mdb.mysql.connect(host=\"localhost\",user=\"#USERNAME\",passwd=\"#PASSWORD\",db=\"#DB-NAME\")\n        mdb.dbinit(db)\n\n2.12.3 Adding data to the database\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nExamples of useful meta-information:\n\n::\n\n        iagacode = 'WIC'\n        data = read(example1)\n        gsm = data.selectkeys(['f'])\n        fge = data.selectkeys(['x','y','z'])\n        gsm.header['SensorID'] = 'GSM90_12345_0002'\n        gsm.header['StationID'] = iagacode\n        fge.header['SensorID'] = 'FGE_22222_0001'\n        fge.header['StationID'] = iagacode\n        mdb.writeDB(db,gsm)\n        mdb.writeDB(db,fge)\n\nAll available meta-information will be added automatically to the\nrelevant database tables. The SensorID scheme consists of three parts:\ninstrument (GSM90), serial number (12345), and a revision number (0002)\nwhich might change in dependency of maintenance, calibration, etc. As\nyou can see in the example above, we separate data from different\ninstruments, which we recommend particularly for high resolution data,\nas frequency and noise characteristics of sensor types will differ.\n\n2.12.4 Reading data\n^^^^^^^^^^^^^^^^^^^\n\nTo read data from an established database:\n\n::\n\n        data = mdb.readDB(db,'GSM90_12345_0002') \n\nOptions e.g. starttime='' and endtime='' are similar as for normal\n``read``.\n\n2.12.5 Meta data\n^^^^^^^^^^^^^^^^\n\nAn often used application of database connectivity with MagPy will be to\napply meta-information stored in the database to data files before\nsubmission. The following command demostrates how to extract all missing\nmeta-information from the database for the selected sensor and add it to\nthe header dictionary of the data object.\n\n::\n\n        rawdata = read('/path/to/rawdata.bin')\n        rawdata.header = mdb.dbfields2dict(db,'FGE_22222_0001')\n        rawdata.write(..., format_type='IMAGCDF')\n\n2.13 Monitoring scheduled scripts\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nAutomated analysis can e easily accomplished by adding a series of MagPy\ncommands into a script. A typical script could be:\n\n::\n\n        # read some data and get means\n        data = read(example1)\n        mean_f = data.mean('f')\n\n        # import monitor method\n        from magpy.opt import Analysismonitor\n        analysisdict = Analysismonitor(logfile='/var/log/anamon.log')\n        analysisdict = analysisdict.load()\n        # check some arbitray threshold\n        analysisdict.check({'data_threshold_f_GSM90': [mean_f,'>',20000]})\n\nIf provided criteria are invalid, then the logfile is changed\naccordingly. This method can assist you particularly in checking data\nactuality, data contents, data validity, upload success, etc. In\ncombination with an independent monitoring tool like\n`Nagios <https://www.nagios.org/>`__, you can easily create mail/SMS\nnotfications of such changes, in addition to monitoring processes, live\ntimes, disks etc. 
`MARCOS <https://github.com/geomagpy/MARCOS>`__ comes\nalong with some instructions on how to use Nagios/MagPy for data\nacquisition monitoring.\n\n2.14 Data acquisition support\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nMagPy contains a couple of packages which can be used for data\nacquisition, collection and organization. These methods are primarily\ncontained in two applications:\n`MARTAS <https://github.com/geomagpy/MARTAS>`__ and\n`MARCOS <https://github.com/geomagpy/MARCOS>`__. MARTAS (Magpy Automated\nRealtime Acquisition System) supports communication with many common\ninstruments (e.g. GSM, LEMI, POS1, FGE, and many non-magnetic\ninstruments) and transfers serial port signals to\n`WAMP <http://wamp-proto.org/>`__ (Web Application Messaging Protocol),\nwhich allows for real-time data access using e.g. WebSocket\ncommunication through the internet. MARCOS (Magpy's Automated Realtime\nCollection and Organistaion System) can access such real-time streams\nand also data from many other sources and supports the observer by\nstoring, analyzing, archiving data, as well as monitoring all processes.\nDetails on these two applications can be found elsewhere.\n\n2.15 Graphical user interface\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\nMany of the above mentioned methods are also available within the\ngraphical user interface of MagPy. To use this check the installation\ninstructions for your operating system. You will find Video Tutorials\nonline (to be added) describing its usage for specific analyses.\n\n2.16 Current developments\n~~~~~~~~~~~~~~~~~~~~~~~~~\n\n2.16.1 Exchange data objects with `ObsPy <https://github.com/obspy/obspy>`__\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nMagPy supports the exchange of data with ObsPy, the seismological\ntoolbox. Data objects of both python packages are very similar. Note:\nObsPy assumes regular spaced time intervals. Please be careful if this\nis not the case with your data. The example below shows a simple import\nroutine, on how to read a seed file and plot a spectrogram (which you\ncan identically obtain from ObsPy as well). Conversions to MagPy allow\nfor vectorial analyses, and geomagnetic applications. Conversions to\nObsPy are useful for effective high frequency analysis, requiring evenly\nspaced time intervals, and for exporting to seismological data formats.\n\n::\n\n        from obspy import read as obsread\n        seeddata = obsread('/path/to/seedfile')\n        magpydata = obspy2magpy(seeddata,keydict={'ObsPyColName': 'x'})\n        mp.plotSpectrogram(magpydata,['x'])\n\n2.16.2 Flagging in ImagCDF\n^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n::\n\n        datawithspikes = read(example1)\n        flaggeddata = datawithspikes.flag_outlier(keys=['f'],timerange=timedelta(minutes=1),threshold=3)\n        mp.plot(flaggeddata,['f'],annotate=True)\n        flaggeddata.write(tmpdir,format_type='IMAGCDF',addflags=True)\n\nThe ``addflags`` option denotes that flagging information will be added\nto the ImagCDF format. Please note that this is still under development\nand thus content and format specifications may change. So please use it\nonly for test purposes and not for archiving. To read and view flagged\nImagCDF data, just use the normal read command, and activate annotation\nfor plotting.\n\n::\n\n        new = read('/tmp/cnb_20120802_000000_PT1S_1.cdf')\n        mp.plot(new,['f'],annotate=True)\n\n3. Predefined scripts\n---------------------\n\nMagPy comes with a steadily increasing number of applications for\nvarious purposes. 

2.14 Data acquisition support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

MagPy contains a couple of packages which can be used for data
acquisition, collection and organization. These methods are primarily
contained in two applications:
`MARTAS <https://github.com/geomagpy/MARTAS>`__ and
`MARCOS <https://github.com/geomagpy/MARCOS>`__. MARTAS (MagPy
Automated Realtime Acquisition System) supports communication with many
common instruments (e.g. GSM, LEMI, POS1, FGE, and many non-magnetic
instruments) and transfers serial port signals to
`WAMP <http://wamp-proto.org/>`__ (Web Application Messaging Protocol),
which allows for real-time data access using e.g. WebSocket
communication through the internet. MARCOS (MagPy's Automated Realtime
Collection and Organisation System) can access such real-time streams
and also data from many other sources, and supports the observer by
storing, analysing and archiving data, as well as monitoring all
processes. Details on these two applications can be found elsewhere.

2.15 Graphical user interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Many of the above-mentioned methods are also available within the
graphical user interface of MagPy. To use it, check the installation
instructions for your operating system. You will find video tutorials
online (to be added) describing its usage for specific analyses.

2.16 Current developments
~~~~~~~~~~~~~~~~~~~~~~~~~

2.16.1 Exchange data objects with `ObsPy <https://github.com/obspy/obspy>`__
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

MagPy supports the exchange of data with ObsPy, the seismological
toolbox. Data objects of both python packages are very similar. Note:
ObsPy assumes regularly spaced time intervals. Please be careful if
this is not the case with your data. The example below shows a simple
import routine for reading a seed file and plotting a spectrogram
(which you can identically obtain from ObsPy as well). Conversions to
MagPy allow for vectorial analyses and geomagnetic applications.
Conversions to ObsPy are useful for efficient high-frequency analysis,
which requires evenly spaced time intervals, and for exporting to
seismological data formats.

::

        from obspy import read as obsread
        # obspy2magpy ships with MagPy (the module path may vary between versions)
        from magpy.stream import obspy2magpy
        seeddata = obsread('/path/to/seedfile')
        magpydata = obspy2magpy(seeddata,keydict={'ObsPyColName': 'x'})
        mp.plotSpectrogram(magpydata,['x'])

2.16.2 Flagging in ImagCDF
^^^^^^^^^^^^^^^^^^^^^^^^^^

::

        from datetime import timedelta
        datawithspikes = read(example1)
        flaggeddata = datawithspikes.flag_outlier(keys=['f'],timerange=timedelta(minutes=1),threshold=3)
        mp.plot(flaggeddata,['f'],annotate=True)
        tmpdir = "/tmp"   # target directory for the flagged file
        flaggeddata.write(tmpdir,format_type='IMAGCDF',addflags=True)

The ``addflags`` option denotes that flagging information will be added
to the ImagCDF format. Please note that this feature is still under
development, and thus content and format specifications may change, so
please use it only for test purposes and not for archiving. To read and
view flagged ImagCDF data, just use the normal read command and
activate annotation for plotting:

::

        new = read('/tmp/cnb_20120802_000000_PT1S_1.cdf')
        mp.plot(new,['f'],annotate=True)

3. Predefined scripts
---------------------

MagPy comes with a steadily increasing number of applications for
various purposes. These applications can be run from a command prompt
and allow you to simplify/automate some commonly used MagPy routines.
All applications have the same syntax, consisting of the name of the
application and options. The option -h is available for all
applications and provides an overview of the purpose and options of the
application:

::

        $> application -h

3.1 Running applications in Linux/macOS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

On Linux systems all applications are added to the bin directory and
can be run directly from any command interface/terminal after
installation of MagPy:

::

        $> application -h

3.2 Running applications in Windows
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

After installing MagPy/GeomagPy on Windows, three executables are found
in the MagPy program folder. For running applications you have to start
the MagPy "command prompt". In this terminal you will have to go to the
Scripts directory:

::

        .../> cd Scripts

Here you can now run the application of your choice using the python
environment:

::

        .../Scripts>python application -h

3.3 Applications
~~~~~~~~~~~~~~~~

The available applications are briefly introduced in the following.
Please refer to "application -h" for all available options for each
application.

3.3.1 mpconvert
^^^^^^^^^^^^^^^

mpconvert converts between data formats supported by MagPy, e.g. from
binary data formats to readable ASCII data sets.

Typical applications include:

a) Convert IAGA seconds to IMAGCDF and include obligatory meta
   information:

   ::

       mpconvert -r "/iagaseconds/wic201701*" -f IMAGCDF -c month -w "/tmp"
                 -m "DataStandardLevel:Full,IAGACode:WIC,DataReferences:myref"

b) Convert IMAGCDF seconds to IAF minute (using IAGA/IM filtering
   procedures):

   ::

       mpconvert -r "/imagcdf/wic_201701_000000_PT1S_4.cdf" -f IAF -i -w "/tmp"

   A real-world call with full paths looks like this:

   ::

       mpconvert -r "/srv/products/data/magnetism/definitive/wic2017/ImagCDF/wic_201708_000000_PT1S_4.cdf" -f IAF -i -w "/tmp"

3.3.2 addcred
^^^^^^^^^^^^^

addcred stores encrypted credential information for automatic data
transfer, so that sensitive information does not have to be written in
plain text within scripts or cron jobs.

a) Add information for ftp data transfer. This information is encrypted
   and can be accessed by referring to the shortcut "zamg":

   ::

       addcred -t transfer -c zamg -u max -p geheim
                 -a "ftp://ftp.remote.ac.at" -l 21
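
Within your own Python scripts the stored credentials can then be
looked up instead of being hard-coded. A minimal sketch, assuming the
credential helper module ``magpy.opt.cred`` and the shortcut "zamg"
created above (please verify the exact module path and function names
for your MagPy version):

::

        from magpy.opt import cred as mpcred

        # retrieve the pieces stored under the shortcut 'zamg'
        user = mpcred.lc('zamg', 'user')
        passwd = mpcred.lc('zamg', 'passwd')
        address = mpcred.lc('zamg', 'address')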
    "bugtrack_url": null,
    "license": "LICENSE.txt",
    "summary": "Geomagnetic analysis tools.",
    "version": "1.1.7",
    "project_urls": {
        "Homepage": "http://pypi.python.org/pypi/geomagpy/"
    },
    "split_keywords": [],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "f273a5af2ae5778d754f45be2405b571737550d45189f4cd700d28a664b49cb0",
                "md5": "f2260600aaaf06b59b3e1e9faa011b21",
                "sha256": "1d436dec013f579855cad91ae90bbffa5825cc7511f34d13285a876d22474e6a"
            },
            "downloads": -1,
            "filename": "geomagpy-1.1.7.tar.gz",
            "has_sig": false,
            "md5_digest": "f2260600aaaf06b59b3e1e9faa011b21",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 25470910,
            "upload_time": "2023-12-01T09:27:33",
            "upload_time_iso_8601": "2023-12-01T09:27:33.680218Z",
            "url": "https://files.pythonhosted.org/packages/f2/73/a5af2ae5778d754f45be2405b571737550d45189f4cd700d28a664b49cb0/geomagpy-1.1.7.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-12-01 09:27:33",
    "github": false,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "lcname": "geomagpy"
}
        
Elapsed time: 0.16848s