:Name: bigmler
:Version: 5.9.1
:Summary: A command-line tool for BigML.io, the public BigML API
:Home page: https://bigml.com/developers
:Author: The BigML Team
:License: http://www.apache.org/licenses/LICENSE-2.0
:Upload time: 2024-06-13 21:46:54
BigMLer - A command-line tool for BigML's API
=============================================

BigMLer makes `BigML <https://bigml.com>`_ even easier.

BigMLer wraps `BigML's API Python bindings <http://bigml.readthedocs.org>`_  to
offer a high-level command-line script to easily create and publish datasets
and models, create ensembles,
make local predictions from multiple models, and simplify many other machine
learning tasks. For additional information, see
the
`full documentation for BigMLer on Read the Docs <http://bigmler.readthedocs.org>`_.

BigMLer is open sourced under the `Apache License, Version
2.0 <http://www.apache.org/licenses/LICENSE-2.0.html>`_.

Requirements
============

BigMLer requires Python 3.8 or higher to work.
Compatibility with Python 2.X was discontinued in version 3.27.2.

BigMLer requires `bigml 9.7.1 <https://github.com/bigmlcom/python>`_ or
higher. These bindings provide support for creating, updating, retrieving
and deleting resources on the ``BigML`` platform, as well as for producing
local predictions using the models created in ``BigML``. Most features are
available with the basic installation, but some additional dependencies are
needed to use local ``Topic Models`` to produce ``Topic Distributions``.
These can be installed using:

.. code-block:: bash

    pip install bigmler[topics]

The bindings also support local predictions for models generated from images.
To use these models, an additional set of libraries needs to be installed
using:

.. code-block:: bash

    pip install bigmler[images]

The external libraries used in this case exist for the majority of recent
Operating System versions. Still, some of them might need specific
compiler versions or DLLs, so their installation may require additional
setup effort and is not supported by default.

The full set of libraries can be installed using

.. code-block:: bash

    pip install bigmler[full]


BigMLer Installation
====================

To install the latest stable release with
`pip <http://www.pip-installer.org/>`_

.. code-block:: bash

    $ pip install bigmler

You can also install the development version of bigmler directly
from the Git repository

.. code-block:: bash

    $ pip install -e git+https://github.com/bigmlcom/bigmler.git#egg=bigmler

For a detailed description of install instructions on Windows see the
`BigMLer on Windows <#bigmler-on-windows>`_ section.

Support for local Topic Distributions (Topic Models' predictions)
and local predictions for datasets that include Images will only be
available as extras, because the libraries used for that are not
usually available in all Operating Systems. If you need to support those,
please check the `Installation Extras <#installation-extras>`_ section.

Installation Extras
===================

Local Topic Distributions support can be installed using:

.. code-block:: bash

    pip install bigmler[topics]

Images local predictions support can be installed using:

.. code-block:: bash

    pip install bigmler[images]

The full set of features can be installed using:

.. code-block:: bash

    pip install bigmler[full]


WARNING: Mind that installing these extras can require some extra work, as
explained in the `Requirements <#requirements>`_ section.

BigML Authentication
====================

All the requests to BigML.io must be authenticated using your username
and `API key <https://bigml.com/account/apikey>`_ and are always
transmitted over HTTPS.

The BigML module will look for your username and API key in the environment
variables ``BIGML_USERNAME`` and ``BIGML_API_KEY`` respectively. You can
add the following lines to your ``.bashrc`` or ``.bash_profile`` to set
those variables automatically when you log in

.. code-block:: bash

    export BIGML_USERNAME=myusername
    export BIGML_API_KEY=ae579e7e53fb9abd646a6ff8aa99d4afe83ac291

Otherwise, you can pass them directly when running the BigMLer
script as follows

.. code-block:: bash

    bigmler --train data/iris.csv --username myusername \
            --api-key ae579e7e53fb9abd646a6ff8aa99d4afe83ac291
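
For illustration, the precedence just shown (explicit options win over
environment variables) can be sketched in a few lines of Python. The helper
name is hypothetical, not part of BigMLer:

.. code-block:: python

    # Sketch (not BigMLer's actual code): resolve credentials with the
    # precedence described above -- explicit options first, then environment.
    import os

    def resolve_credentials(username=None, api_key=None, environ=None):
        """Return (username, api_key), preferring explicit arguments."""
        environ = os.environ if environ is None else environ
        return (username or environ.get("BIGML_USERNAME"),
                api_key or environ.get("BIGML_API_KEY"))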

For a detailed description of authentication instructions on Windows see the
`BigMLer on Windows <#bigmler-on-windows>`_ section.


BigMLer on Windows
==================

To install BigMLer on Windows environments, you'll need Python installed.
The code has been tested with Python 3.10, and you can create a *conda*
environment with that Python version or download it from `Python for Windows
<http://www.python.org/download/>`_ and install it. In the latter case, you'll
also need to install the ``pip`` tool to install BigMLer.

To install ``pip``, first you need to open your command terminal window
(write ``cmd`` in
the input field that appears when you click on ``Start`` and hit ``enter``).
Then you can follow the steps described, for example, in this `guide
<https://monovm.com/blog/how-to-install-pip-on-windows-linux/#How-to-install-PIP-on-Windows?-[A-Step-by-Step-Guide]>`_
to install its latest version.

Finally, to install BigMLer with its basic capabilities, just type

.. code-block:: bash

    python -m pip install bigmler

and BigMLer should be installed on your computer or conda environment. Then
issuing

.. code-block:: bash

    bigmler --version

should show BigMLer version information.

Extensions of BigMLer that use images are usually not available on Windows,
as the libraries needed for those models are generally not available for that
operating system. If your Machine Learning project involves images, we
recommend choosing a Linux-based operating system.

Finally, to start using BigMLer to handle your BigML resources, you need to
set your credentials in BigML for authentication. If you want them to be
permanently stored in your system, use

.. code-block:: bash

    setx BIGML_USERNAME myusername
    setx BIGML_API_KEY ae579e7e53fb9abd646a6ff8aa99d4afe83ac291

Note that ``setx`` will not change the environment variables of your actual
console, so you will need to open a new one to start using them.


BigML Development Mode
======================

Also, you can instruct BigMLer to work in BigML's Sandbox
environment by using the ``--dev`` parameter

.. code-block:: bash

    bigmler --train data/iris.csv --dev

Using the development flag you can run tasks under 1 MB without spending any of
your BigML credits.

Using BigMLer
=============

To run BigMLer you can use the console script directly. The ``--help`` option
will describe all the available options

.. code-block:: bash

    bigmler --help

Alternatively you can just call bigmler as follows

.. code-block:: bash

    python bigmler.py --help

This will display the full list of optional arguments. You can read a brief
explanation for each option below.

Quick Start
===========

Let's see some basic usage examples. Check the installation and
authentication sections in `BigMLer on Read the Docs
<http://bigmler.readthedocs.org>`_ if you are not familiar with BigML.

Basics
------

You can create a new model just with

.. code-block:: bash

    bigmler --train data/iris.csv

If you check your `dashboard at BigML <https://bigml.com/dashboard>`_, you will
see a new source, dataset, and model. Isn't it magic?

You can generate predictions for a test set using

.. code-block:: bash

    bigmler --train data/iris.csv --test data/test_iris.csv

You can also specify a file name to save the newly created predictions

.. code-block:: bash

    bigmler --train data/iris.csv --test data/test_iris.csv --output predictions

If you do not specify the path to an output file, BigMLer will auto-generate
one for you under a new directory named after the current date and time
(e.g., ``MonNov1212_174715/predictions.csv``).
With the ``--prediction-info``
flag set to ``brief``, only the prediction result will be stored (the default
is ``normal``, which includes confidence information).
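
As a rough illustration, the auto-generated directory name can be reproduced
with ``strftime``. The format string below is inferred from the example name
above; it is not taken from BigMLer's source:

.. code-block:: python

    # Sketch: build a BigMLer-style timestamped predictions path.
    # Assumption: the "%a%b%d%y_%H%M%S" format is inferred from the
    # example directory name MonNov1212_174715.
    import datetime
    import os

    def default_output_path(now=None, filename="predictions.csv"):
        """Return e.g. MonNov1212_174715/predictions.csv."""
        now = now or datetime.datetime.now()
        return os.path.join(now.strftime("%a%b%d%y_%H%M%S"), filename)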

A different ``objective field`` (the field that you want to predict) can
be selected using

.. code-block:: bash

    bigmler --train data/iris.csv  \
            --test data/test_iris.csv \
            --objective 'sepal length'

If you do not explicitly specify an objective field, BigML will
default to the last
column in your dataset.
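
That default amounts to picking the last column when no objective is given; a
minimal sketch (the function name is illustrative, not BigMLer's code):

.. code-block:: python

    # Sketch: when no objective field is specified, the last column of the
    # dataset is used as the field to predict.
    def pick_objective(columns, objective=None):
        return objective if objective is not None else columns[-1]

    print(pick_objective(["sepal length", "sepal width", "species"]))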

Also, if your test file uses a particular field separator for its data,
you can tell BigMLer using ``--test-separator``.
For example, if your test file uses the tab character as field separator,
the call should look like

.. code-block:: bash

    bigmler --train data/iris.csv --test data/test_iris.tsv \
            --test-separator '\t'
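
Parsing a tab-separated file is just a matter of changing the delimiter, as
this stdlib sketch shows. It only mimics what ``--test-separator`` configures
for the uploaded test data; it is not BigMLer code:

.. code-block:: python

    # Sketch: reading rows with a custom field separator, as
    # --test-separator does for the test data.
    import csv
    import io

    def read_rows(text, separator=","):
        return list(csv.reader(io.StringIO(text), delimiter=separator))

    rows = read_rows("sepal length\tsepal width\n5.1\t3.5\n", separator="\t")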

If you don't provide a file name for your training source, BigMLer will try to
read it from the standard input

.. code-block:: bash

    cat data/iris.csv | bigmler --train

BigMLer will try to use the locale of the model both to create a new source
(if the ``--train`` flag is used) and to interpret test data. If it fails,
it will try ``en_US.UTF-8`` or ``English_United States.1252``, and a warning
message will be printed.
If you want to change this behaviour, you can specify your preferred locale

.. code-block:: bash

    bigmler --train data/iris.csv --test data/test_iris.csv \
    --locale "English_United States.1252"
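
The fallback chain described above can be sketched as a first-match lookup.
The function and the set of available locales are illustrative, not BigMLer
internals:

.. code-block:: python

    # Sketch: try the model's locale first, then the documented fallbacks.
    def pick_locale(preferred, available):
        for name in (preferred, "en_US.UTF-8", "English_United States.1252"):
            if name in available:
                return name
        return None  # the caller would print a warning here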

If you check your working directory you will see that BigMLer creates a file
with the model ids that have been generated
(e.g., ``FriNov0912_223645/models``).
This file is handy if you later want to use those model ids to generate local
predictions. BigMLer also creates a file with the dataset id that has been
generated (e.g., ``TueNov1312_003451/dataset``) and another one summarizing
the steps taken in the session progress: ``bigmler_sessions``. You can also
store a copy of every created or retrieved resource in your output directory
(e.g., ``TueNov1312_003451/model_50c23e5e035d07305a00004f``) by setting the
``--store`` flag.

Prior Versions Compatibility Issues
-----------------------------------

BigMLer will accept flags written with underscore as word separator, like
``--clear_logs``, for compatibility with prior versions. Also ``--field-names``
is accepted, although the more complete ``--field-attributes`` flag is
preferred. ``--stat_pruning`` and ``--no_stat_pruning`` are discontinued,
and their effects can be achieved by setting the ``--pruning`` flag
to ``statistical`` or ``no-pruning`` respectively.
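
The underscore compatibility rule amounts to a simple rewrite of the flag
name; a hedged sketch (not the actual option parser):

.. code-block:: python

    # Sketch: map underscore-separated flags to their hyphenated form,
    # as the compatibility behaviour above describes.
    def normalize_flag(flag):
        if flag.startswith("--"):
            return "--" + flag[2:].replace("_", "-")
        return flag

    print(normalize_flag("--clear_logs"))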

Running the Tests
-----------------

The tests are run using `pytest <https://docs.pytest.org/en/7.2.x/>`_.
You'll need to set up your authentication via environment variables, as
explained in the authentication section. Some tests also need other
environment variables, like ``BIGML_ORGANIZATION`` to test calls made by
Organization members, and ``BIGML_EXTERNAL_CONN_HOST``,
``BIGML_EXTERNAL_CONN_PORT``, ``BIGML_EXTERNAL_CONN_DB``,
``BIGML_EXTERNAL_CONN_USER``, ``BIGML_EXTERNAL_CONN_PWD`` and
``BIGML_EXTERNAL_CONN_SOURCE`` to test external data connectors.
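
Before launching the suite, it can help to check that the required variables
are actually exported. A small stdlib sketch (the variable names come from the
paragraph above; the helper itself is not part of BigMLer):

.. code-block:: python

    # Sketch: list the environment variables the test suite needs but that
    # are not set, so pytest is only launched with full credentials.
    import os

    REQUIRED = ["BIGML_USERNAME", "BIGML_API_KEY"]

    def missing_variables(environ=None):
        environ = os.environ if environ is None else environ
        return [name for name in REQUIRED if not environ.get(name)]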

With that in place, you can run the test suite simply by issuing

.. code-block:: bash

    $ pytest

Additional Information
----------------------

For additional information, see
the `full documentation for BigMLer on Read the Docs <http://bigmler.readthedocs.org>`_.


Support
=======

Please report problems and bugs to our `BigML.io issue
tracker <https://github.com/bigmlcom/io/issues>`_.

Discussions about the different bindings take place in the general
`BigML mailing list <http://groups.google.com/group/bigml>`_.


.. :changelog:

History
-------

5.9.1 (2024-06-13)
~~~~~~~~~~~~~~~~~~

- Fixing issues with Readthedocs templates.

5.9.0 (2024-06-12)
~~~~~~~~~~~~~~~~~~

- Adding MS COCO to BigML-COCO translator.

5.8.1 (2024-05-31)
~~~~~~~~~~~~~~~~~~

- Fixing Pascal VOC to BigML-COCO translator.

5.8.0 (2024-04-05)
~~~~~~~~~~~~~~~~~~

- Adding option to bigmler whizzml subcommand that stores scripts and
  libraries as packages in the local file system.

5.7.0 (2023-12-14)
~~~~~~~~~~~~~~~~~~

- Upgrading underlying bindings version and updating API query string
  separator.

5.6.1 (2023-11-14)
~~~~~~~~~~~~~~~~~~

- Changing readthedocs configuration file.

5.6.0 (2023-11-10)
~~~~~~~~~~~~~~~~~~

- Adding `exclude-types` flag to bigmler delete subcommand to provide types
  of resources to be excluded from deletion using `--resource-types`.
- Improving deletion filter order to ensure that composed objects, like
  ensembles or clusters, are deleted first.

5.5.1 (2023-10-27)
~~~~~~~~~~~~~~~~~~

- Fixing delete query strings for sources.

5.5.0 (2023-04-12)
~~~~~~~~~~~~~~~~~~

- Improving delete capabilities and updating the underlying bindings.

5.4.0 (2023-01-10)
~~~~~~~~~~~~~~~~~~

- Changing test tool to pytest.
- Updating underlying bindings version.
- Changing header for centroid local predictions to `cluster` to match
  remote header.

5.3.0 (2022-10-21)
~~~~~~~~~~~~~~~~~~

- Updating underlying bindings version.
- Fixing bigmler delete query for synchronous resources.

5.2.0 (2022-09-30)
~~~~~~~~~~~~~~~~~~

- Adding Image Feature Extraction options.
- Adding local prediction for Images classification, regression and object
  detection.
- Improving source naming.

5.1.0 (2022-09-21)
~~~~~~~~~~~~~~~~~~

- Upgrading dependencies.
- Improving bigmler delete subcommand.
- Moving the default output directory to `.bigmler_outputs` in the current
  working directory.

5.0.0 (2022-03-26)
~~~~~~~~~~~~~~~~~~

- Adding support for composites of images.
- Adding --id-fields to bigmler anomaly.
- Adding OptiML to the bigmler delete list of resources.
- Cloning first if trying to update a closed source.

4.1.1 (2021-05-04)
~~~~~~~~~~~~~~~~~~

- Changing the ascii used in output to console to avoid problems with
  Windows encodings.
- Updating copyright

4.1.0 (2020-09-11)
~~~~~~~~~~~~~~~~~~

- Updating the underlying Python bindings version to use minimized models in
  local predictions.

4.0.0 (2020-08-01)
~~~~~~~~~~~~~~~~~~

- Python 3 only version. Deprecating Python 2.X compatibility.

3.27.2 (2020-07-15)
~~~~~~~~~~~~~~~~~~~

- Fixing bulk delete pagination.

3.27.1 (2020-05-26)
~~~~~~~~~~~~~~~~~~~

- Refactoring resources management code.

3.27.0 (2020-05-26)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler connector subcommand.
- Fixing issues with --resources-log and --clear-logs in Python 3.X.
- Fixing encoding issues when using non-ascii characters in paths.

3.26.3 (2020-04-17)
~~~~~~~~~~~~~~~~~~~

- Fixing local deepnet predictions headers for regressions.
- Fixing output for bigml reify into Python when using multidatasets.

3.26.2 (2020-04-15)
~~~~~~~~~~~~~~~~~~~

- Fixing bigmler whizzml subcommand. Categories were not retrieved from
  metadata file.

3.26.1 (2020-03-16)
~~~~~~~~~~~~~~~~~~~

- Fixing bug in the --output option when no directory is used in the file path.

3.26.0 (2020-01-26)
~~~~~~~~~~~~~~~~~~~

- Fixing problem in bigmler fusion predictions.
- Fixing problems for windows users with particular stdout encodings.
- Dependency version bump.

3.25.0 (2019-11-27)
~~~~~~~~~~~~~~~~~~~

- Dependency version bump.

3.24.0 (2019-09-20)
~~~~~~~~~~~~~~~~~~~

- Adding --split-field and --focus-field options to bigmler.

3.23.1 (2019-08-08)
~~~~~~~~~~~~~~~~~~~

- Changing the underlying Python bindings version to fix local linear
  regression predictions when `scipy` is not installed.

3.23.0 (2019-06-26)
~~~~~~~~~~~~~~~~~~~

- Changing the underlying Python bindings version.

3.22.0 (2019-05-14)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler fusion subcommand.

3.21.1 (2019-05-06)
~~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler whizzml when reading some options.

3.21.0 (2019-04-16)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler pca subcommand.

3.20.0 (2019-04-06)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler linear subcommand.

3.19.0 (2019-04-05)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler dataset subcommand to allow generating datasets using
  transformations.

3.18.8 (2019-04-04)
~~~~~~~~~~~~~~~~~~~

- Fixing arguments that were not propagated in complex subcommands.
- Fixing bigmler delete subcommand. Ensuring IDs uniqueness in the delete
  list.

3.18.7 (2019-01-09)
~~~~~~~~~~~~~~~~~~~

- Fixing bigmler retrain for models based on transformed datasets.

3.18.6 (2019-01-07)
~~~~~~~~~~~~~~~~~~~

- Fixing bigmler deepnet and logistic-regression that lacked the
  --batch-prediction-attributes option.
- Adding the --minimum-name-terms option to bigmler topic-model.

3.18.5 (2018-12-12)
~~~~~~~~~~~~~~~~~~~

- Fixing bigmler retrain and execute-related subcommands for organizations.
- Fixing bigmler retrain when using a model-tag.

3.18.4 (2018-11-07)
~~~~~~~~~~~~~~~~~~~

- Adding jupyter notebook output format to bigmler reify.

3.18.3 (2018-10-16)
~~~~~~~~~~~~~~~~~~~

- Fixing bigmler reify subcommand for multidatasets.

3.18.2 (2018-10-12)
~~~~~~~~~~~~~~~~~~~

- Fixing bigmler deepnet subcommand predictions.
- Adding operating point options to bigmler deepnet.
- Adding model types to bigmler retrain.

3.18.1 (2018-09-19)
~~~~~~~~~~~~~~~~~~~

- Updating underlying bindings version.
- Updating reify library.
- Improving bigmler retrain to allow remote sources

3.18.0 (2018-05-23)
~~~~~~~~~~~~~~~~~~~

- Updating underlying bindings version.
- Adapting to new evaluation metrics.

3.17.0 (2018-01-30)
~~~~~~~~~~~~~~~~~~~

- Adding support for organizations.

3.16.0 (2018-01-23)
~~~~~~~~~~~~~~~~~~~

- Removing --dev flag: development mode has been deprecated.

3.15.2 (2018-01-10)
~~~~~~~~~~~~~~~~~~~

- Fixing bug in remote predictions with models and ensembles when --no-batch
  was used.

3.15.1 (2017-12-26)
~~~~~~~~~~~~~~~~~~~

- Fixing bug caused by pystemmer not being installed as a bindings dependency.

3.15.0 (2017-12-21)
~~~~~~~~~~~~~~~~~~~

- Adding the bigmler retrain command to retrain modeling resources with
  incremental data.

3.14.1 (2017-11-28)
~~~~~~~~~~~~~~~~~~~

- Adding the --upgrade flag to the bigmler execute and package subcommands to
  check whether a script is already loaded and its version.

3.14.0 (2017-11-22)
~~~~~~~~~~~~~~~~~~~

- Adding the --operating-point option for models, ensembles and logistic
  regressions.

3.13.2 (2017-11-10)
~~~~~~~~~~~~~~~~~~~

- Extending bigmler export to generate the code for the models in boosted
  ensembles.

3.13.1 (2017-11-05)
~~~~~~~~~~~~~~~~~~~

- Extending bigmler export to generate the code for the models in an ensemble.
- Fixing code generation in bigmler export for models with missings.

3.13.0 (2017-07-22)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler deepnet command to create deepnet models and predictions.

3.12.0 (2017-07-22)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler timeseries subcommand to create time series models and
  forecasts.
- Solving issues in cross-validation due to new evaluation formats.

3.11.2 (2017-06-16)
~~~~~~~~~~~~~~~~~~~

- Improving boosted ensembles local predictions by using new bindings version.

3.11.1 (2017-05-25)
~~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler export when non-ascii characters are used in a model.

3.11.0 (2017-05-16)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler export subcommand to generate the prediction function from
  a decision tree in several languages.

3.10.3 (2017-04-21)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: Adapting to changes in the structure of evaluations that caused
  cross-validation failure.

3.10.2 (2017-04-13)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: the --package-dir option in bigmler whizzml did not expand
  the ~ character to its associated user path.
- Fixing bug: Multi-label predictions failed because of changes in the
  bindings internal coding for combiners.

3.10.1 (2017-03-25)
~~~~~~~~~~~~~~~~~~~

- Adding --embed-libs and --embedded-libraries to bigmler whizzml and
  bigmler execute subcommands to embed the libraries'
  code in the scripts.

3.10.0 (2017-03-21)
~~~~~~~~~~~~~~~~~~~

- Adding support for boosted ensembles' new options.

3.9.3 (2017-03-08)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler whizzml when using --username and --api-key.

3.9.2 (2017-02-16)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler subcommands when publishing datasets.

3.9.1 (2017-01-04)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler: --evaluation-attributes were not used.
- Fixing bug in bigmler: --threshold and --class were not used.
- Fixing bug in bigmler topic-model: adding --topic-model-attributes.

3.9.0 (2016-12-03)
~~~~~~~~~~~~~~~~~~

- Adding new bigmler topic-model subcommand.

3.8.7 (2016-11-04)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler commands when using samples to create different model
  types.

3.8.6 (2016-10-25)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler commands when using local files storing the model
  info as input for local predictions.

3.8.5 (2016-10-20)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler commands when using local predictions from development
  mode resources.

3.8.4 (2016-10-13)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler package. Libraries were created more than once.
- Fixing bug in bigmler analyze --features when adding batch prediction.
- Improving bigmler delete when deleting projects and executions. Deleting in
  two steps: first the projects and executions and then the remaining
  resources.

3.8.3 (2016-09-30)
~~~~~~~~~~~~~~~~~~

- Fixing bug in logistic regression evaluation.
- Adding --balance-fields flag to bigmler logistic-regression.
- Refactoring and style changes.
- Adding the logistic regression options to documentation.

3.8.2 (2016-09-23)
~~~~~~~~~~~~~~~~~~

- Changing the bias for Logistic Regressions to a boolean.
- Adding the new attributes to control ensemble's sampling.

3.8.1 (2016-07-06)
~~~~~~~~~~~~~~~~~~

- Adding types of deletable resources to bigmler delete. Adding option
  --execution-only to avoid deleting the output resources of an
  execution when the execution is deleted.
- Fixing bug: directory structure in bigmler whizzml was wrong when components
  were found in metadata.
- Upgrading the underlying Python bindings version.

3.8.0 (2016-07-04)
~~~~~~~~~~~~~~~~~~

- Adding new bigmler whizzml subcommand to create scripts and libraries
  from packages with metadata info.

3.7.1 (2016-06-27)
~~~~~~~~~~~~~~~~~~

- Adding new --field-codings option to bigmler logistic-regression
  subcommand.
- Changing underlying bindings version

3.7.0 (2016-06-03)
~~~~~~~~~~~~~~~~~~

- Adding the new bigmler execute subcommand, which can create scripts,
  executions and libraries.

3.6.4 (2016-04-08)
~~~~~~~~~~~~~~~~~~

- Fixing bug: the --predictions-csv flag in the bigmler analyze command did
  not work with ensembles (--number-of-models > 1)

3.6.3 (2016-04-04)
~~~~~~~~~~~~~~~~~~

- Adding the --predictions-csv flag to bigmler analyze --features. It
  creates a file which contains all the data tagged with the corresponding
  k-fold and the prediction and confidence values for the best
  score cross-validation.

3.6.2 (2016-04-01)
~~~~~~~~~~~~~~~~~~

- Improving bigmler analyze --features CSV output to reflect the best fields
  set found at each step.

3.6.1 (2016-03-14)
~~~~~~~~~~~~~~~~~~

- Adding the --export-fields and --import-fields to manage field summaries
  and attribute changes in sources and datasets.

3.6.0 (2016-03-08)
~~~~~~~~~~~~~~~~~~

- Adding subcommand bigmler logistic-regression.
- Changing tests to adapt to backend random numbers changes.

3.5.4 (2016-02-09)
~~~~~~~~~~~~~~~~~~

- Fixing bug: wrong types had been added to default options in bigmler.ini
- Updating copyright --version notice.

3.5.3 (2016-02-07)
~~~~~~~~~~~~~~~~~~

- Adding links to docs and changing tests to adapt bigmler reify
  to new automatically generated names for resources.

3.5.2 (2016-01-01)
~~~~~~~~~~~~~~~~~~

- Fixing bug in bigmler reify subcommand for datasets generated from other
  datasets coming from batch resources.

3.5.1 (2015-12-26)
~~~~~~~~~~~~~~~~~~

- Adding docs for association discovery.

3.5.0 (2015-12-24)
~~~~~~~~~~~~~~~~~~

- Adding bigmler association subcommand to manage associations.

3.4.0 (2015-12-21)
~~~~~~~~~~~~~~~~~~

- Adding bigmler project subcommand for project creation and update.

3.3.9 (2015-12-19)
~~~~~~~~~~~~~~~~~~

- Fixing bug: wrong reify output for datasets created from another dataset.
- Improving bigmler reify code style and making file executable.

3.3.8 (2015-11-24)
~~~~~~~~~~~~~~~~~~

- Fixing bug: simplifying bigmler reify output for datasets created from
  batch resources.
- Allowing column numbers as keys for fields structures in
  --source-attributes, --dataset-attributes, etc

3.3.7 (2015-11-18)
~~~~~~~~~~~~~~~~~~

- Adding --datasets as option for bigmler analyze.
- Adding --summary-fields as option for bigmler analyze.

3.3.6 (2015-11-16)
~~~~~~~~~~~~~~~~~~

- Fixing bug: Report title for feature analysis was not shown.

3.3.5 (2015-11-15)
~~~~~~~~~~~~~~~~~~

- Upgrading the underlying bindings version.

3.3.4 (2015-11-10)
~~~~~~~~~~~~~~~~~~

- Fixing bug: bigmler cluster did not use the --prediction-fields option.

3.3.3 (2015-11-04)
~~~~~~~~~~~~~~~~~~

- Adding --status option to bigmler delete. Selects the resources to delete
  according to their status (finished if not set). You can check the available
  status in the
  `developers documentation
  <https://bigml.com/developers/status_codes#sc_resource_status_code_summary>`_.

3.3.2 (2015-10-31)
~~~~~~~~~~~~~~~~~~

- Fixing bug: bigmler reify failed for dataset generated from batch
  predictions, batch centroids or batch anomaly scores.

3.3.1 (2015-10-15)
~~~~~~~~~~~~~~~~~~

- Fixing bug: improving datasets download handling to cope with transmission
  errors.
- Fixing bug: solving failure when using the first column of a dataset as
  objective field in models and ensembles.


3.3.0 (2015-09-14)
~~~~~~~~~~~~~~~~~~

- Adding new bigmler analyze option, --random-fields to analyze performance of
  random forests changing the number of random candidates.

3.2.1 (2015-09-05)
~~~~~~~~~~~~~~~~~~

- Fixing bug in reify subcommand for unordered reifications.

3.2.0 (2015-08-23)
~~~~~~~~~~~~~~~~~~

- Adding bigmler reify subcommand to script the resource creation.

3.1.1 (2015-08-16)
~~~~~~~~~~~~~~~~~~

- Fixing bug: changing the related Python bindings version to solve encoding
  problem when using Python 3 on Windows.

3.1.0 (2015-08-05)
~~~~~~~~~~~~~~~~~~

- Adding bigmler report subcommand to generate reports for cross-validation
  results in bigmler analyze.

3.0.5 (2015-07-30)
~~~~~~~~~~~~~~~~~~

- Fixing bug: bigmler analyze and filtering datasets failed when the origin
  dataset was a filtered one.

3.0.4 (2015-07-22)
~~~~~~~~~~~~~~~~~~

- Fixing bug: bigmler analyze --features could not analyze phi for a user-given
  category because the metric is called phi_coefficient.
- Modifying the output of bigmler analyze --features and --nodes to include
  the command to generate the best performing model and the command to
  clean all the generated resources.

3.0.3 (2015-07-01)
~~~~~~~~~~~~~~~~~~

- Fixing bug: dataset generation with a filter on a previous dataset
  was not working.

3.0.2 (2015-06-24)
~~~~~~~~~~~~~~~~~~

- Adding the --project-tag option to bigmler delete.
- Fixing that the --test-dataset and related options can be used in model
  evaluation.
- Fixing bug: bigmler anomalies for datasets with more than 1000 fields failed.

3.0.1 (2015-06-12)
~~~~~~~~~~~~~~~~~~

- Adding the --top-n, --forest-size and --anomalies-dataset to the bigmler
  anomaly subcommand.
- Fixing bug: source upload failed when using arguments that contain
  unicodes.
- Fixing bug: bigmler analyze subcommand failed for datasets with more than
  1000 fields.

3.0.0 (2015-04-25)
~~~~~~~~~~~~~~~~~~

- Supporting Python 3 and changing the test suite to nose.
- Adding --cluster-models option to generate the models related to
  cluster datasets.

2.2.0 (2015-04-15)
~~~~~~~~~~~~~~~~~~

- Adding --score flag to create batch anomaly scores for the training set.
- Allowing --median to be used also in ensembles predictions.
- Using --seed option also in ensembles.

2.1.0 (2015-04-10)
~~~~~~~~~~~~~~~~~~

- Adding --median flag to use median instead of mean in single models'
  predictions.
- Updating underlying BigML python bindings' version to 4.0.2 (Python 3
  compatible).


2.0.1 (2015-04-09)
~~~~~~~~~~~~~~~~~~

- Fixing bug: resuming commands failed retrieving the output directory

2.0.0 (2015-03-26)
~~~~~~~~~~~~~~~~~~

- Fixing docs formatting errors.
- Adding --to-dataset and --no-csv flags causing batch predictions,
  batch centroids and batch anomaly scores to be stored in a new remote
  dataset and not in a local CSV respectively.
- Adding the sample subcommand to generate samples from datasets

1.15.6 (2015-01-28)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: using --model-fields with --max-categories failed.

1.15.5 (2015-01-20)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: Failed field retrieval for batch predictions starting from
  source or dataset test data.

1.15.4 (2015-01-15)
~~~~~~~~~~~~~~~~~~~

- Adding the --project and --project-id to manage projects and associate
  them to newly created sources.
- Adding the --cluster-seed and --anomaly-seed options to choose the seed
  for deterministic clusters and anomalies.
- Refactoring dataset processing to avoid setting the objective field when
  possible.

1.15.3 (2014-12-26)
~~~~~~~~~~~~~~~~~~~

- Adding --optimize-category in bigmler analyze subcommands to select
  the category whose evaluations will be optimized.

1.15.2 (2014-12-17)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: k-fold cross-validation failed for ensembles.

1.15.1 (2014-12-15)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: ensembles' evaluations failed when using the ensemble id.
- Fixing bug: bigmler analyze lacked model configuration options (weight-field,
  objective-fields, pruning, model-attributes...)

1.15.0 (2014-12-06)
~~~~~~~~~~~~~~~~~~~

- Adding k-fold cross-validation for ensembles in bigmler analyze.

1.14.6 (2014-11-26)
~~~~~~~~~~~~~~~~~~~

- Adding the --model-file, --cluster-file, --anomaly-file and --ensemble-file
  to produce entirely local predictions.
- Fixing bug: the bigmler delete subcommand was not using the --anomaly-tag,
  --anomaly-score-tag and --batch-anomaly-score-tag options.
- Fixing bug: the --no-test-header flag was not working.

1.14.5 (2014-11-14)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: --field-attributes was not working when used in addition
  to --types option.

1.14.4 (2014-11-10)
~~~~~~~~~~~~~~~~~~~

- Adding the capability of creating a model/cluster/anomaly and its
  corresponding batch prediction from a train/test split using --test-split.

1.14.3 (2014-11-10)
~~~~~~~~~~~~~~~~~~~

- Improving domain transformations for customized private settings.
- Fixing bug: model fields were not correctly set when the origin dataset
  was a new dataset generated by the --new-fields option.

1.14.2 (2014-10-30)
~~~~~~~~~~~~~~~~~~~

- Refactoring predictions code, improving some cases performance and memory
  usage.
- Adding the --fast option to speed prediction by not storing partial results
  in files.
- Adding the --optimize option to the bigmler analyze --features command.

1.14.1 (2014-10-23)
~~~~~~~~~~~~~~~~~~~

- Improving performance in individual model predictions.
- Forcing garbage collection to lower memory usage in ensemble's predictions.
- Fixing bug: batch predictions were not adding confidence when
  --prediction-info full was used.

1.14.0 (2014-10-19)
~~~~~~~~~~~~~~~~~~~

- Adding bigmler anomaly as new subcommand to generate anomaly detectors,
  anomaly scores and batch anomaly scores.

1.13.3 (2014-10-13)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: source updates failed when using --locale and --types flags
  together.
- Updating bindings version and fixing code accordingly.
- Adding --k option to bigmler cluster to change the number of centroids.

1.13.2 (2014-10-05)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: --source-attributes and --dataset-attributes were not updated.

1.13.1 (2014-09-22)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: bigmler analyze was needlessly sampling data to evaluate.

1.13.0 (2014-09-10)
~~~~~~~~~~~~~~~~~~~

- Adding the new --missing-splits flag to control if missing values are
  included in tree branches.

1.12.4 (2014-08-03)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: handling unicode command parameters on Windows.

1.12.3 (2014-07-30)
~~~~~~~~~~~~~~~~~~~

- Fixing bug: handling stdout writes of unicodes on Windows.

1.12.2 (2014-07-29)
~~~~~~~~~~~~~~~~~~~

- Fixing bug for bigmler analyze: the subcommand failed when used with
  resources created in development mode.

1.12.1 (2014-07-25)
~~~~~~~~~~~~~~~~~~~

- Fixing bug when many models are evaluated in k-fold cross-validations. The
  create evaluation could fail when called with a non-finished model.

1.12.0 (2014-07-15)
~~~~~~~~~~~~~~~~~~~

- Improving delete process. Promoting delete to a subcommand and filtering
  the type of resource to be deleted.
- Adding --dry-run option to delete.
- Adding --from-dir option to delete.
- Fixing bug when Gazibit report is used with personalized URL dashboards.
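
A hedged example of the promoted subcommand (``my_dir`` is an illustrative
output directory from a previous bigmler run):

.. code-block:: bash

    # List the resources that would be deleted, without deleting anything
    bigmler delete --from-dir my_dir --dry-run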

1.11.0 (2014-07-11)
~~~~~~~~~~~~~~~~~~~

- Adding the --to-csv option to export datasets to a CSV file.
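
As a sketch (the dataset id is a placeholder, and --no-model is assumed to
skip model creation):

.. code-block:: bash

    # Export an existing dataset to a local CSV file
    bigmler --dataset dataset/52b1a51a37203f4814000001 \
            --to-csv my_dataset.csv --no-model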

1.10.0 (2014-07-11)
~~~~~~~~~~~~~~~~~~~

- Adding the --cluster-datasets option to generate the datasets related to
  the centroids in a cluster.

1.9.2 (2014-07-07)
~~~~~~~~~~~~~~~~~~

- Fixing bug for the --delete flag. Cluster, centroids and batch centroids
  could not be deleted.

1.9.1 (2014-07-02)
~~~~~~~~~~~~~~~~~~

- Documentation update.

1.9.0 (2014-07-02)
~~~~~~~~~~~~~~~~~~

- Adding cluster subcommand to generate clusters and centroid predictions.
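
A minimal sketch of the new subcommand (file names are illustrative):

.. code-block:: bash

    # Build a cluster from training data and assign centroids to test data
    bigmler cluster --train data/diabetes.csv \
                    --test data/test_diabetes.csv \
                    --output my_dir/centroids.csv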

1.8.12 (2014-06-10)
~~~~~~~~~~~~~~~~~~~

- Fixing bug for the analyze subcommand. The --resume flag crashed when no
  --output-dir was used.
- Fixing bug for the analyze subcommand. The --features flag crashed when
  many long feature names were used.

1.8.11 (2014-05-30)
~~~~~~~~~~~~~~~~~~~

- Fixing bug for --delete flag, broken by last fix.

1.8.10 (2014-05-29)
~~~~~~~~~~~~~~~~~~~

- Fixing bug when field names contain commas and --model-fields tag is used.
- Fixing bug when deleting all resources by tag when ensembles were found.
- Adding --exclude-features flag to analyze.

1.8.9 (2014-05-28)
~~~~~~~~~~~~~~~~~~

- Fixing bug when utf8 characters were used in command lines.

1.8.8 (2014-05-27)
~~~~~~~~~~~~~~~~~~

- Adding the --balance flag to the analyze subcommand.
- Fixing bug for analyze: some of the allowed common flags were being ignored.

1.8.7 (2014-05-23)
~~~~~~~~~~~~~~~~~~

- Fixing bug for analyze. User-given objective field was changed when using
  filtered datasets.

1.8.6 (2014-05-22)
~~~~~~~~~~~~~~~~~~

- Fixing bug for analyze. User-given objective field was not used.

1.8.5 (2014-05-19)
~~~~~~~~~~~~~~~~~~

- Docs update and test change to adapt to backend node threshold changes.

1.8.4 (2014-05-07)
~~~~~~~~~~~~~~~~~~

- Fixing bug in analyze --nodes. The default node steps could not be found.

1.8.3 (2014-05-06)
~~~~~~~~~~~~~~~~~~

- Setting dependency of new python bindings version 1.3.1.

1.8.2 (2014-05-06)
~~~~~~~~~~~~~~~~~~

- Fixing bug: --shared and --unshared should be considered only when set
  in the command line by the user. They were always updated, even when absent.
- Fixing bug: --remote predictions were not working when --model was used as
  training start point.

1.8.1 (2014-05-04)
~~~~~~~~~~~~~~~~~~

- Changing the Gazibit report for shared resources to include the model
  shared url in embedded format.
- Fixing bug: train and test data could not be read from stdin.

1.8.0 (2014-04-29)
~~~~~~~~~~~~~~~~~~

- Adding the ``analyze`` subcommand. The subcommand presents new features,
  such as:

    ``--cross-validation`` that performs k-fold cross-validation,
    ``--features`` that selects the best features to increase accuracy
    (or any other evaluation metric) using a smart search algorithm and
    ``--nodes`` that selects the node threshold that ensures best accuracy
    (or any other evaluation metric) in user defined range of nodes.
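
Hedged sketches of the ``--features`` and ``--nodes`` modes (the dataset id
is a placeholder):

.. code-block:: bash

    # Smart feature selection to improve the evaluation metric
    bigmler analyze --dataset dataset/52b1a51a37203f4814000001 --features

    # Search for the best node threshold
    bigmler analyze --dataset dataset/52b1a51a37203f4814000001 --nodes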

1.7.1 (2014-04-21)
~~~~~~~~~~~~~~~~~~

- Fixing bug: the --no-upload flag was being ignored.

1.7.0 (2014-04-20)
~~~~~~~~~~~~~~~~~~

- Adding the --reports option to generate Gazibit reports.

1.6.0 (2014-04-18)
~~~~~~~~~~~~~~~~~~

- Adding the --shared flag to share the created dataset, model and evaluation.

1.5.1 (2014-04-04)
~~~~~~~~~~~~~~~~~~

- Fixing bug for model building: when an objective field was specified and
  no --max-category was present, the user-given objective was not used.
- Fixing bug: max-category data was stored even when --max-category was not
  used.

1.5.0 (2014-03-24)
~~~~~~~~~~~~~~~~~~

- Adding --missing-strategy option to allow different prediction strategies
  when a missing value is found in a split field. Available for local
  predictions, batch predictions and evaluations.
- Adding new --delete options: --newer-than and --older-than to delete lists
  of resources according to their creation date.
- Adding --multi-dataset flag to generate a new dataset from a list of
  equally structured datasets.
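
Hedged examples of these options (file names are illustrative; delete was
still a flag at this point):

.. code-block:: bash

    # Use the proportional strategy when a split field value is missing
    bigmler --train data/iris.csv --test data/test_iris.csv \
            --missing-strategy proportional

    # Delete resources created more than two days ago
    bigmler --delete --older-than 2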

1.4.7 (2014-03-14)
~~~~~~~~~~~~~~~~~~

- Bug fixing: resuming multi-label processing from a dataset was not working.
- Bug fixing: the max parallel resource creation check only verified the last
  task in the slot, not that all older tasks had ended. This caused more tasks
  than permitted to be sent in parallel.
- Improving multi-label training data uploads by zipping the extended file and
  transforming booleans from True/False to 1/0.

1.4.6 (2014-02-21)
~~~~~~~~~~~~~~~~~~

- Bug fixing: dataset objective field is not updated each time --objective
  is used, but only if it differs from the existing objective.

1.4.5 (2014-02-04)
~~~~~~~~~~~~~~~~~~

- Storing the --max-categories info (its number and the chosen `other` label)
  in user_metadata.

1.4.4 (2014-02-03)
~~~~~~~~~~~~~~~~~~

- Fix when using the combined method in --max-categories models.
  The combination function now uses confidence to choose the predicted
  category.
- Allowing full content text fields to be also used as --max-categories
  objective fields.
- Fix solving objective issues when its column number is zero.

1.4.3 (2014-01-28)
~~~~~~~~~~~~~~~~~~

- Adding the --objective-weights option to point to a CSV file containing the
  weights assigned to each class.
- Adding the --label-aggregates option to create new aggregate fields on the
  multi label fields such as count, first or last.

1.4.2 (2014-01-24)
~~~~~~~~~~~~~~~~~~

- Fix in local random forests' predictions. Sometimes the fields used in all
  the models were not correctly retrieved and some predictions could be
  erroneous.

1.4.1 (2014-01-23)
~~~~~~~~~~~~~~~~~~

- Fix to allow the input data for multi-label predictions to be expanded.
- Fix to retrieve, from the model definitions of multi-label models, the
  labels given by the user at creation time.

1.4.0 (2014-01-20)
~~~~~~~~~~~~~~~~~~

- Adding new --balance option to automatically balance all the classes evenly.
- Adding new --weight-field option to use the field contents as weights for
  the instances.

1.3.0 (2014-01-17)
~~~~~~~~~~~~~~~~~~

- Adding new --source-attributes, --ensemble-attributes,
  --evaluation-attributes and --batch-prediction-attributes options.
- Refactoring --multi-label resources to include its related info in
  the user_metadata attribute.
- Refactoring the main routine.
- Adding --batch-prediction-tag for delete operations.

1.2.3 (2014-01-16)
~~~~~~~~~~~~~~~~~~

- Fix to transmit --training-separator when creating remote sources.

1.2.2 (2014-01-14)
~~~~~~~~~~~~~~~~~~

- Fix for multiple multi-label fields: headers did not match rows contents in
  some cases.

1.2.1 (2014-01-12)
~~~~~~~~~~~~~~~~~~

- Fix for datasets generated using the --new-fields option. The new dataset
  was not used in model generation.

1.2.0 (2014-01-09)
~~~~~~~~~~~~~~~~~~

- Adding --multi-label-fields to provide a comma-separated list of multi-label
  fields in a file.

1.1.0 (2014-01-08)
~~~~~~~~~~~~~~~~~~

- Fix for ensembles' local predictions when order is used to break ties.
- Fix for duplicated model ids in models file.
- Adding new --node-threshold option to allow node limit in models.
- Adding new --model-attributes option pointing to a JSON file containing
  model attributes for model creation.
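
A sketch of both options (``model_attributes.json`` is an illustrative file
holding a JSON object of model creation arguments, e.g.
``{"description": "my model"}``):

.. code-block:: bash

    # Limit trees to 100 nodes and pass extra model creation attributes
    bigmler --train data/iris.csv --node-threshold 100 \
            --model-attributes model_attributes.json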

1.0.1 (2014-01-06)
~~~~~~~~~~~~~~~~~~

- Fix for missing modules during installation.

1.0 (2014-01-02)
~~~~~~~~~~~~~~~~~~

- Adding the --max-categories option to handle datasets with a high number of
  categories.
- Adding the --method combine option to produce predictions with the sets
  of datasets generated using --max-categories option.
- Fixing problem with --max-categories when the categorical field is not
  a preferred field of the dataset.
- Changing the --datasets option behaviour: it points to a file where
  dataset ids are stored, one per line, and now it reads all of them to be
  used in model and ensemble creation.

0.7.2 (2013-12-20)
~~~~~~~~~~~~~~~~~~

- Adding confidence to predictions output in full format

0.7.1 (2013-12-19)
~~~~~~~~~~~~~~~~~~

- Bug fixing: multi-label predictions failed when the --ensembles option
  was used to provide the ensemble information.

0.7.0 (2013-11-24)
~~~~~~~~~~~~~~~~~~

- Bug fixing: --dataset-price could not be set.
- Adding the threshold combination method to the local ensemble.

0.6.1 (2013-11-23)
~~~~~~~~~~~~~~~~~~

- Bug fixing: --model-fields option with absolute field names was not
  compatible with multi-label classification models.
- Changing resource type checking function.
- Bug fixing: evaluations did not use the given combination method.
- Bug fixing: evaluating an ensemble was mistakenly turned into separate
  evaluations of its component models.
- Adding pruning to the ensemble creation configuration options.

0.6.0 (2013-11-08)
~~~~~~~~~~~~~~~~~~

- Changing fields_map column order: previously mapped dataset column
  number to model column number, now maps model column number to
  dataset column number.
- Adding evaluations to multi-label models.
- Bug fixing: unicode characters greater than ascii-127 caused a crash in
  multi-label classification.

0.5.0 (2013-10-08)
~~~~~~~~~~~~~~~~~~

- Adapting to predictions issued by the high performance prediction server and
  the 0.9.0 version of the python bindings.
- Support for shared models using the same version of the python bindings.
- Support for different server names using environment variables.

0.4.1 (2013-10-02)
~~~~~~~~~~~~~~~~~~

- Adding ensembles' predictions for multi-label objective fields
- Bug fixing: in evaluation mode, evaluation for --dataset and
  --number-of-models > 1 did not select the 20% hold out instances to test the
  generated ensemble.

0.4.0 (2013-08-15)
~~~~~~~~~~~~~~~~~~

- Adding text analysis through the corresponding bindings

0.3.7 (2013-09-17)
~~~~~~~~~~~~~~~~~~

- Adding support for multi-label objective fields
- Adding --prediction-headers and --prediction-fields to improve
  --prediction-info formatting options for the predictions file
- Adding the ability to read --test input data from stdin
- Adding --seed option to generate different splits from a dataset

0.3.6 (2013-08-21)
~~~~~~~~~~~~~~~~~~

- Adding --test-separator flag

0.3.5 (2013-08-16)
~~~~~~~~~~~~~~~~~~

- Bug fixing: resume crash when remote predictions were not completed
- Bug fixing: Fields object for input data dict building lacked fields
- Bug fixing: test data was repeated in remote prediction function
- Bug fixing: Adding replacement=True as default for ensembles' creation

0.3.4 (2013-08-09)
~~~~~~~~~~~~~~~~~~

- Adding --max-parallel-evaluations flag
- Bug fixing: matching seeds in models and evaluations for cross validation

0.3.3 (2013-08-09)
~~~~~~~~~~~~~~~~~~

- Changing the --model-fields and --dataset-fields flags to allow
  adding/removing fields with a +/- prefix
- Refactoring local and remote prediction functions
- Adding 'full data' option to the --prediction-info flag to join test input
  data with prediction results in predictions file
- Fixing errors in documentation and adding install for windows info
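
For example, with the iris data used elsewhere in these docs:

.. code-block:: bash

    # Use all preferred fields except 'sepal width' as model inputs
    bigmler --train data/iris.csv --model-fields='-sepal width'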

0.3.2 (2013-07-04)
~~~~~~~~~~~~~~~~~~

- Adding new flag to control predictions file information
- Bug fixing: using default sample-rate in ensemble evaluations
- Adding standard deviation to evaluation measures in cross-validation
- Bug fixing: using only-model argument to download fields in models

0.3.1 (2013-05-14)
~~~~~~~~~~~~~~~~~~

- Adding delete for ensembles
- Creating ensembles when the number of models is greater than one
- Remote predictions using ensembles

0.3.0 (2013-04-30)
~~~~~~~~~~~~~~~~~~

- Adding cross-validation feature
- Using user locale to create new resources in BigML
- Adding --ensemble flag to use ensembles in predictions and evaluations

0.2.1 (2013-03-03)
~~~~~~~~~~~~~~~~~~

- Deep refactoring of main resources management
- Fixing bug in batch_predict for no headers test sets
- Fixing bug for wide datasets' models that need a query string to retrieve
  all fields
- Fixing bug in test asserts to catch subprocess raise
- Adding default missing tokens to models
- Adding stdin input for --train flag
- Fixing bug when reading descriptions in --field-attributes
- Refactoring to get status from api function
- Adding confidence to combined predictions

0.2.0 (2013-01-21)
~~~~~~~~~~~~~~~~~~

- Evaluations management
- Console monitoring of process advance
- Resume option
- User defaults
- Refactoring to improve readability

0.1.4 (2012-12-21)
~~~~~~~~~~~~~~~~~~

- Improved locale management.
- Adds progressive handling for large numbers of models.
- More options in field attributes update feature.
- New flag to combine local existing predictions.
- More methods in local predictions: plurality, confidence weighted.

0.1.3 (2012-12-06)
~~~~~~~~~~~~~~~~~~

- New flag for locale settings configuration.
- Filtering only finished resources.

0.1.2 (2012-12-06)
~~~~~~~~~~~~~~~~~~

- Fix to ensure windows compatibility.

0.1.1 (2012-11-07)
~~~~~~~~~~~~~~~~~~

- Initial release.

            

Adding option\n  --execution-only to avoid deleting the output resources of an\n  execution when the execution is deleted.\n- Fixing bug: directory structure in bigmler whizzml was wrong when components\n  were found in metadata.\n- Upgrading the underlying Python bindings version.\n\n3.8.0 (2016-07-04)\n~~~~~~~~~~~~~~~~~~\n\n- Adding new bigmler whizzml subcommand to create scripts and libraries\n  from packages with metadata info.\n\n3.7.1 (2016-06-27)\n~~~~~~~~~~~~~~~~~~\n\n- Adding new --field-codings option to bigmler logisitic-regression\n  subcommand.\n- Changing underlying bindings version\n\n3.7.0 (2016-06-03)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the new bigmler execute subcommand, which can create scripts,\n  executions and libraries.\n\n3.6.4 (2016-04-08)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: the --predictions-csv flag in the bigmler analyze command did\n  not work with ensembles (--number-of-models > 1)\n\n3.6.3 (2016-04-04)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --predictions-csv flag to bigmler analyze --features. 
It\n  creates a file which contains all the data tagged with the corresponding\n  k-fold and the prediction and confidence values for the best\n  score cross-validation.\n\n3.6.2 (2016-04-01)\n~~~~~~~~~~~~~~~~~~\n\n- Improving bigmler analyze --features CSV output to reflect the best fields\n  set found at each step.\n\n3.6.1 (2016-03-14)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --export-fields and --import-fields to manage field summaries\n  and attribute changes in sources and datasets.\n\n3.6.0 (2016-03-08)\n~~~~~~~~~~~~~~~~~~\n\n- Adding subcommand bigmler logistic-regression.\n- Changing tests to adapt to backend random numbers changes.\n\n3.5.4 (2016-02-09)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: wrong types had been added to default options in bigmler.ini\n- Updating copyright --version notice.\n\n3.5.3 (2016-02-07)\n~~~~~~~~~~~~~~~~~~\n\n- Adding links to docs and changing tests to adapt bigmler reify\n  to new automatically generated names for resources.\n\n3.5.2 (2016-01-01)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug in bigmler reify subcommand for datasets generated from other\n  datasets comming from batch resources.\n\n3.5.1 (2015-12-26)\n~~~~~~~~~~~~~~~~~~\n\n- Adding docs for association discovery.\n\n3.5.0 (2015-12-24)\n~~~~~~~~~~~~~~~~~~\n\n- Adding bigmler association subcommand to manage associations.\n\n3.4.0 (2015-12-21)\n~~~~~~~~~~~~~~~~~~\n\n- Adding bigmler project subcommand for project creation and update.\n\n3.3.9 (2015-12-19)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: wrong reify output for datasets created from another dataset.\n- Improving bigmler reify code style and making file executable.\n\n3.3.8 (2015-11-24)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: simplifying bigmler reify output for datasets created from\n  batch resources.\n- Allowing column numbers as keys for fields structures in\n  --source-attributes, --dataset-attributes, etc\n\n3.3.7 (2015-11-18)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --datasets as option for bigmler analyze.\n- Adding --summary-fields 
as option for bigmler analyze.\n\n3.3.6 (2015-11-16)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: Report title for feature analysis was not shown.\n\n3.3.5 (2015-11-15)\n~~~~~~~~~~~~~~~~~~\n\n- Upgrading the underlying bindings version.\n\n3.3.4 (2015-11-10)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: bigmler cluster did not use the --prediction-fields option.\n\n3.3.3 (2015-11-04)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --status option to bigmler delete. Selects the resources to delete\n  according to their status (finished if not set). You can check the available\n  status in the\n  `developers documentation\n  <https://bigml.com/developers/status_codes#sc_resource_status_code_summary>`_.\n\n3.3.2 (2015-10-31)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: bigmler reify failed for dataset generated from batch\n  predictions, batch centroids or batch anomaly scores.\n\n3.3.1 (2015-10-15)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: improving datasets download handling to cope with transmission\n  errors.\n- Fixing bug: solving failure when using the first column of a dataset as\n  objective field in models and ensembles.\n\n\n3.3.0 (2015-09-14)\n~~~~~~~~~~~~~~~~~~\n\n- Adding new bigmler analyze option, --random-fields to analyze performance of\n  random forests chaging the number of random candidates.\n\n3.2.1 (2015-09-05)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug in reify subcommand for unordered reifications.\n\n3.2.0 (2015-08-23)\n~~~~~~~~~~~~~~~~~~\n\n- Adding bigmler reify subcommand to script the resource creation.\n\n3.1.1 (2015-08-16)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: changing the related Python bindings version to solve encoding\n  problem when using Python 3 on Windows.\n\n3.1.0 (2015-08-05)\n~~~~~~~~~~~~~~~~~~\n\n- Adding bigmler report subcommand to generate reports for cross-validation\n  results in bigmler analyze.\n\n3.0.5 (2015-07-30)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: bigmler analyze and filtering datasets failed when the origin\n  dataset was a filtered one.\n\n3.0.4 
(2015-07-22)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: bigmler analyze --features could not analyze phi for a user-given\n  category because the metric is called phi_coefficient.\n- Modifying the output of bigmler analyze --features and --nodes to include\n  the command to generate the best performing model and the command to\n  clean all the generated resources.\n\n3.0.3 (2015-07-01)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: dataset generation with a filter on a previous dataset\n  was not working.\n\n3.0.2 (2015-06-24)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --project-tag option to bigmler delete.\n- Fixing that the --test-dataset and related options can be used in model\n  evaluation.\n- Fixing bug: bigmler anomalies for datasets with more than 1000 fields failed.\n\n3.0.1 (2015-06-12)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --top-n, --forest-size and --anomalies-dataset to the bigmler\n  anomaly subcommand.\n- Fixing bug: source upload failed when using arguments that contain\n  unicodes.\n- Fixing bug: bigmler analyze subcommand failed for datasets with more than\n  1000 fields.\n\n3.0.0 (2015-04-25)\n~~~~~~~~~~~~~~~~~~\n\n- Supporting Python 3 and changing the test suite to nose.\n- Adding --cluster-models option to generate the models related to\n  cluster datasets.\n\n2.2.0 (2015-04-15)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --score flag to create batch anomaly scores for the training set.\n- Allowing --median to be used also in ensembles predictions.\n- Using --seed option also in ensembles.\n\n2.1.0 (2015-04-10)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --median flag to use median instead of mean in single models'\n  predictions.\n- Updating underlying BigML python bindings' version to 4.0.2 (Python 3\n  compatible).\n\n\n2.0.1 (2015-04-09)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: resuming commands failed retrieving the output directory\n\n2.0.0 (2015-03-26)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing docs formatting errors.\n- Adding --to-dataset and --no-csv flags causing batch predictions,\n  batch 
centroids and batch anomaly scores to be stored in a new remote\n  dataset and not in a local CSV respectively.\n- Adding the sample subcommand to generate samples from datasets\n\n1.15.6 (2015-01-28)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: using --model-fields with --max-categories failed.\n\n1.15.5 (2015-01-20)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: Failed field retrieval for batch predictions starting from\n  source or dataset test data.\n\n1.15.4 (2015-01-15)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding the --project and --project-id to manage projects and associate\n  them to newly created sources.\n- Adding the --cluster-seed and --anomaly-seed options to choose the seed\n  for deterministic clusters and anomalies.\n- Refactoring dataset processing to avoid setting the objective field when\n  possible.\n\n1.15.3 (2014-12-26)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding --optimize-category in bigmler analyze subcommands to select\n  the category whose evaluations will be optimized.\n\n1.15.2 (2014-12-17)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: k-fold cross-validation failed for ensembles.\n\n1.15.1 (2014-12-15)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: ensembles' evaluations failed when using the ensemble id.\n- Fixing bug: bigmler analyze lacked model configuration options (weight-field,\n  objective-fields, pruning, model-attributes...)\n\n1.15.0 (2014-12-06)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding k-fold cross-validation for ensembles in bigmler analyze.\n\n1.14.6 (2014-11-26)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding the --model-file, --cluster-file, --anomaly-file and --ensemble-file\n  to produce entirely local predictions.\n- Fixing bug: the bigmler delete subcommand was not using the --anomaly-tag,\n  --anomaly-score-tag and --batch-anomaly-score-tag options.\n- Fixing bug: the --no-test-header flag was not working.\n\n1.14.5 (2014-11-14)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: --field-attributes was not working when used in addition\n  to --types option.\n\n1.14.4 
(2014-11-10)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding the capability of creating a model/cluster/anomaly and its\n  corresponding batch prediction from a train/test split using --test-split.\n\n1.14.3 (2014-11-10)\n~~~~~~~~~~~~~~~~~~~\n\n- Improving domain transformations for customized private settings.\n- Fixing bug: model fields were not correctly set when the origin dataset\n  was a new dataset generated by the --new-fields option.\n\n1.14.2 (2014-10-30)\n~~~~~~~~~~~~~~~~~~~\n\n- Refactoring predictions code, improving some cases performance and memory\n  usage.\n- Adding the --fast option to speed prediction by not storing partial results\n  in files.\n- Adding the --optimize option to the bigmler analyze --features command.\n\n1.14.1 (2014-10-23)\n~~~~~~~~~~~~~~~~~~~\n\n- Improving perfomance in individual model predictions.\n- Forcing garbage collection to lower memory usage in ensemble's predictions.\n- Fixing bug: batch predictions were not adding confidence when\n  --prediction-info full was used.\n\n1.14.0 (2014-10-19)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding bigmler anomaly as new subcommand to generate anomaly detectors,\n  anomaly scores and batch anomaly scores.\n\n1.13.3 (2014-10-13)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: source updates failed when using --locale and --types flags\n  together.\n- Updating bindings version and fixing code accordingly.\n- Adding --k option to bigmler cluster to change the number of centroids.\n\n1.13.2 (2014-10-05)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: --source-attributes and --dataset-attributes where not updated.\n\n1.13.1 (2014-09-22)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: bigmler analyze was needlessly sampling data to evaluate.\n\n1.13.0 (2014-09-10)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding the new --missing-splits flag to control if missing values are\n  included in tree branches.\n\n1.12.4 (2014-08-03)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: handling unicode command parameters on Windows.\n\n1.12.3 (2014-07-30)\n~~~~~~~~~~~~~~~~~~~\n\n- 
Fixing bug: handling stdout writes of unicodes on Windows.\n\n1.12.2 (2014-07-29)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing but for bigmler analyze: the subcommand failed when used in\n  development created resources.\n\n1.12.1 (2014-07-25)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug when many models are evaluated in k-fold cross-validations. The\n  create evaluation could fail when called with a non-finished model.\n\n1.12.0 (2014-07-15)\n~~~~~~~~~~~~~~~~~~~\n\n- Improving delete process. Promoting delete to a subcommand and filtering\n  the type of resource to be deleted.\n- Adding --dry-run option to delete.\n- Adding --from-dir option to delete.\n- Fixing bug when Gazibit report is used with personalized URL dashboards.\n\n1.11.0 (2014-07-11)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding the --to-csv option to export datasets to a CSV file.\n\n1.10.0 (2014-07-11)\n~~~~~~~~~~~~~~~~~~~\n\n- Adding the --cluster-datasets option to generate the datasets related to\n  the centroids in a cluster.\n\n1.9.2 (2014-07-07)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug for the --delete flag. Cluster, centroids and batch centroids\n  could not be deleted.\n\n1.9.1 (2014-07-02)\n~~~~~~~~~~~~~~~~~~\n\n- Documentation update.\n\n1.9.0 (2014-07-02)\n~~~~~~~~~~~~~~~~~~\n\n- Adding cluster subcommand to generate clusters and centroid predictions.\n\n1.8.12 (2014-06-10)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug for the analyze subcommand. The --resume flag crashed when no\n  --ouput-dir was used.\n- Fixing bug for the analyze subcommand. 
The --features flag crashed when\n  many long feature names were used.\n\n1.8.11 (2014-05-30)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug for --delete flag, broken by last fix.\n\n1.8.10 (2014-05-29)\n~~~~~~~~~~~~~~~~~~~\n\n- Fixing bug when field names contain commas and --model-fields tag is used.\n- Fixing bug when deleting all resources by tag when ensembles were found.\n- Adding --exclude-features flag to analyze.\n\n1.8.9 (2014-05-28)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug when utf8 characters were used in command lines.\n\n1.8.8 (2014-05-27)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --balance flag to the analyze subcommand.\n- Fixing bug for analyze. Some common flags allowed were not used.\n\n1.8.7 (2014-05-23)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug for analyze. User-given objective field was changed when using\n  filtered datasets.\n\n1.8.6 (2014-05-22)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug for analyze. User-given objective field was not used.\n\n1.8.5 (2014-05-19)\n~~~~~~~~~~~~~~~~~~\n\n- Docs update and test change to adapt to backend node threshold changes.\n\n1.8.4 (2014-05-07)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug in analyze --nodes. The default node steps could not be found.\n\n1.8.3 (2014-05-06)\n~~~~~~~~~~~~~~~~~~\n\n- Setting dependency of new python bindings version 1.3.1.\n\n1.8.2 (2014-05-06)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: --shared and --unshared should be considered only when set\n  in the command line by the user. They were always updated, even when absent.\n- Fixing bug: --remote predictions were not working when --model was used as\n  training start point.\n\n1.8.1 (2014-05-04)\n~~~~~~~~~~~~~~~~~~\n\n- Changing the Gazibit report for shared resources to include the model\n  shared url in embedded format.\n- Fixing bug: train and tests data could not be read from stdin.\n\n1.8.0 (2014-04-29)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the ``analyze`` subcommand. 
The subcommand presents new features,\n  such as:\n\n    ``--cross-validation`` that performs k-fold cross-validation,\n    ``--features`` that selects the best features to increase accuracy\n    (or any other evaluation metric) using a smart search algorithm and\n    ``--nodes`` that selects the node threshold that ensures best accuracy\n    (or any other evaluation metric) in user defined range of nodes.\n\n1.7.1 (2014-04-21)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug: --no-upload flag was not really used.\n\n1.7.0 (2014-04-20)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --reports option to generate Gazibit reports.\n\n1.6.0 (2014-04-18)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --shared flag to share the created dataset, model and evaluation.\n\n1.5.1 (2014-04-04)\n~~~~~~~~~~~~~~~~~~\n\n- Fixing bug for model building, when objective field was specified and\n  no --max-category was present the user given objective was not used.\n- Fixing bug: max-category data stored even when --max-category was not\n  used.\n\n1.5.0 (2014-03-24)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --missing-strategy option to allow different prediction strategies\n  when a missing value is found in a split field. Available for local\n  predictions, batch predictions and evaluations.\n- Adding new --delete options: --newer-than and --older-than to delete lists\n  of resources according to their creation date.\n- Adding --multi-dataset flag to generate a new dataset from a list of\n  equally structured datasets.\n\n1.4.7 (2014-03-14)\n~~~~~~~~~~~~~~~~~~\n\n- Bug fixing: resume from multi-label processing from dataset was not working.\n- Bug fixing: max parallel resource creation check did not check that all the\n  older tasks ended, only the last of the slot. 
This caused\n  more tasks than permitted to be sent in parallel.\n- Improving multi-label training data uploads by zipping the extended file and\n  transforming booleans from True/False to 1/0.\n\n1.4.6 (2014-02-21)\n~~~~~~~~~~~~~~~~~~\n\n- Bug fixing: dataset objective field is not updated each time --objective\n  is used, but only if it differs from the existing objective.\n\n1.4.5 (2014-02-04)\n~~~~~~~~~~~~~~~~~~\n\n- Storing the --max-categories info (its number and the chosen `other` label)\n  in user_metadata.\n\n1.4.4 (2014-02-03)\n~~~~~~~~~~~~~~~~~~\n\n- Fix when using the combined method in --max-categories models.\n  The combination function now uses confidence to choose the predicted\n  category.\n- Allowing full content text fields to be also used as --max-categories\n  objective fields.\n- Fix solving objective issues when its column number is zero.\n\n1.4.3 (2014-01-28)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --objective-weights option to point to a CSV file containing the\n  weights assigned to each class.\n- Adding the --label-aggregates option to create new aggregate fields on the\n  multi label fields such as count, first or last.\n\n1.4.2 (2014-01-24)\n~~~~~~~~~~~~~~~~~~\n\n- Fix in local random forests' predictions. 
Sometimes the fields used in all\n  the models were not correctly retrieved and some predictions could be\n  erroneus.\n\n1.4.1 (2014-01-23)\n~~~~~~~~~~~~~~~~~~\n\n- Fix to allow the input data for multi-label predictions to be expanded.\n- Fix to retrieve from the models definition info the labels that were\n  given by the user in its creation in multi-label models.\n\n1.4.0 (2014-01-20)\n~~~~~~~~~~~~~~~~~~\n\n- Adding new --balance option to automatically balance all the classes evenly.\n- Adding new --weight-field option to use the field contents as weights for\n  the instances.\n\n1.3.0 (2014-01-17)\n~~~~~~~~~~~~~~~~~~\n\n- Adding new --source-attributes, --ensemble-attributes,\n  --evaluation-attributes and --batch-prediction-attributes options.\n- Refactoring --multi-label resources to include its related info in\n  the user_metadata attribute.\n- Refactoring the main routine.\n- Adding --batch-prediction-tag for delete operations.\n\n1.2.3 (2014-01-16)\n~~~~~~~~~~~~~~~~~~\n\n- Fix to transmit --training-separator when creating remote sources.\n\n1.2.2 (2014-01-14)\n~~~~~~~~~~~~~~~~~~\n\n- Fix for multiple multi-label fields: headers did not match rows contents in\n  some cases.\n\n1.2.1 (2014-01-12)\n~~~~~~~~~~~~~~~~~~\n\n- Fix for datasets generated using the --new-fields option. 
The new dataset\n  was not used in model generation.\n\n1.2.0 (2014-01-09)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --multi-label-fields to provide a comma-separated list of multi-label\n  fields in a file.\n\n1.1.0 (2014-01-08)\n~~~~~~~~~~~~~~~~~~\n\n- Fix for ensembles' local predictions when order is used in tie break.\n- Fix for duplicated model ids in models file.\n- Adding new --node-threshold option to allow node limit in models.\n- Adding new --model-attributes option pointing to a JSON file containing\n  model attributes for model creation.\n\n1.0.1 (2014-01-06)\n~~~~~~~~~~~~~~~~~~\n\n- Fix for missing modules during installation.\n\n1.0 (2014-01-02)\n~~~~~~~~~~~~~~~~~~\n\n- Adding the --max-categories option to handle datasets with a high number of\n  categories.\n- Adding the --method combine option to produce predictions with the sets\n  of datasets generated using --max-categories option.\n- Fixing problem with --max-categories when the categorical field is not\n  a preferred field of the dataset.\n- Changing the --datasets option behaviour: it points to a file where\n  dataset ids are stored, one per line, and now it reads all of them to be\n  used in model and ensemble creation.\n\n0.7.2 (2013-12-20)\n~~~~~~~~~~~~~~~~~~\n\n- Adding confidence to predictions output in full format\n\n0.7.1 (2013-12-19)\n~~~~~~~~~~~~~~~~~~\n\n- Bug fixing: multi-label predictions failed when the --ensembles option\n  is used to provide the ensemble information\n\n0.7.0 (2013-11-24)\n~~~~~~~~~~~~~~~~~~\n\n- Bug fixing: --dataset-price could not be set.\n- Adding the threshold combination method to the local ensemble.\n\n0.6.1 (2013-11-23)\n~~~~~~~~~~~~~~~~~~\n\n- Bug fixing: --model-fields option with absolute field names was not\n  compatible with multi-label classification models.\n- Changing resource type checking function.\n- Bug fixing: evaluations did not use the given combination method.\n- Bug fixing: evaluation of an ensemble had turned into evaluations of its\n          
    models.\n- Adding pruning to the ensemble creation configuration options\n\n0.6.0 (2013-11-08)\n~~~~~~~~~~~~~~~~~~\n\n- Changing fields_map column order: previously mapped dataset column\n  number to model column number, now maps model column number to\n  dataset column number.\n- Adding evaluations to multi-label models.\n- Bug fixing: unicode characters greater than ascii-127 caused crash in\n  multi-label classification\n\n0.5.0 (2013-10-08)\n~~~~~~~~~~~~~~~~~~\n\n- Adapting to predictions issued by the high performance prediction server and\n  the 0.9.0 version of the python bindings.\n- Support for shared models using the same version on python bindings.\n- Support for different server names using environment variables.\n\n0.4.1 (2013-10-02)\n~~~~~~~~~~~~~~~~~~\n\n- Adding ensembles' predictions for multi-label objective fields\n- Bug fixing: in evaluation mode, evaluation for --dataset and\n  --number-of-models > 1 did not select the 20% hold out instances to test the\n  generated ensemble.\n\n0.4.0 (2013-08-15)\n~~~~~~~~~~~~~~~~~~\n\n- Adding text analysis through the corresponding bindings\n\n0.3.7 (2013-09-17)\n~~~~~~~~~~~~~~~~~~\n\n- Adding support for multi-label objective fields\n- Adding --prediction-headers and --prediction-fields to improve\n  --prediction-info formatting options for the predictions file\n- Adding the ability to read --test input data from stdin\n- Adding --seed option to generate different splits from a dataset\n\n0.3.6 (2013-08-21)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --test-separator flag\n\n0.3.5 (2013-08-16)\n~~~~~~~~~~~~~~~~~~\n\n- Bug fixing: resume crash when remote predictions were not completed\n- Bug fixing: Fields object for input data dict building lacked fields\n- Bug fixing: test data was repeated in remote prediction function\n- Bug fixing: Adding replacement=True as default for ensembles' creation\n\n0.3.4 (2013-08-09)\n~~~~~~~~~~~~~~~~~~\n\n- Adding --max-parallel-evaluations flag\n- Bug fixing: matching seeds in 
models and evaluations for cross validation\n\n0.3.3 (2013-08-09)\n~~~~~~~~~~~~~~~~~~\n- Changing --model-fields and --dataset-fields flag to allow adding/removing\n  fields with +/- prefix\n- Refactoring local and remote prediction functions\n- Adding 'full data' option to the --prediction-info flag to join test input\n  data with prediction results in predictions file\n- Fixing errors in documentation and adding install for windows info\n\n0.3.2 (2013-07-04)\n~~~~~~~~~~~~~~~~~~\n- Adding new flag to control predictions file information\n- Bug fixing: using default sample-rate in ensemble evaluations\n- Adding standard deviation to evaluation measures in cross-validation\n- Bug fixing: using only-model argument to download fields in models\n\n0.3.1 (2013-05-14)\n~~~~~~~~~~~~~~~~~~\n\n- Adding delete for ensembles\n- Creating ensembles when the number of models is greater than one\n- Remote predictions using ensembles\n\n0.3.0 (2013-04-30)\n~~~~~~~~~~~~~~~~~~\n\n- Adding cross-validation feature\n- Using user locale to create new resources in BigML\n- Adding --ensemble flag to use ensembles in predictions and evaluations\n\n0.2.1 (2013-03-03)\n~~~~~~~~~~~~~~~~~~\n\n- Deep refactoring of main resources management\n- Fixing bug in batch_predict for no headers test sets\n- Fixing bug for wide dataset's models than need query-string to retrieve all fields\n- Fixing bug in test asserts to catch subprocess raise\n- Adding default missing tokens to models\n- Adding stdin input for --train flag\n- Fixing bug when reading descriptions in --field-attributes\n- Refactoring to get status from api function\n- Adding confidence to combined predictions\n\n0.2.0 (2012-01-21)\n~~~~~~~~~~~~~~~~~~\n- Evaluations management\n- console monitoring of process advance\n- resume option\n- user defaults\n- Refactoring to improve readability\n\n0.1.4 (2012-12-21)\n~~~~~~~~~~~~~~~~~~\n\n- Improved locale management.\n- Adds progressive handling for large numbers of models.\n- More options in 
field attributes update feature.\n- New flag to combine local existing predictions.\n- More methods in local predictions: plurality, confidence weighted.\n\n0.1.3 (2012-12-06)\n~~~~~~~~~~~~~~~~~~\n\n- New flag for locale settings configuration.\n- Filtering only finished resources.\n\n0.1.2 (2012-12-06)\n~~~~~~~~~~~~~~~~~~\n\n- Fix to ensure windows compatibility.\n\n0.1.1 (2012-11-07)\n~~~~~~~~~~~~~~~~~~\n\n- Initial release.\n",