algormeter

Name: algormeter  
Version: 1.1.1  
Summary: Tool for developing, testing and measuring optimizers algorithms  
Upload time: 2023-08-28 12:40:56  
Requires Python: >=3.10  
Keywords: convex-optimization, difference-convex-function, optimization-algorithms
# AlgorMeter: a tool for developing, testing, measuring and exchanging optimization algorithms

AlgorMeter is a Python environment for developing, testing, measuring, reporting on and comparing optimization algorithms.
A common platform that simplifies the development, testing and exchange of optimization algorithms allows for better collaboration and resource sharing among researchers in the field. This can lead to more efficient development and testing of new algorithms, as well as faster progress in the field overall.
AlgorMeter produces comparative measures among algorithms in csv format, with effective test function call counts.  
It embeds a feature devoted to optimizing the number of function calls: multiple function calls at the same point are accounted for only once, without storing intermediate results, which simplifies algorithm coding.  
AlgorMeter contains a standard library of 10 DC problems and 7 convex problems for testing algorithms. More problem collections can easily be added.  
AlgorMeter provides integrated performance profile graphics, as developed by E. D. Dolan and J. J. Moré. They are a powerful standard tool within the optimization community for assessing the performance of optimization software.

<img src="figs/perfprof.png" alt="Performance profiles" width="800px" height="391px"/>

## problems + algorithms = experiments

- A problem is a function f: R^n -> R, where n is called the dimension.
- f = f1 - f2 is a difference of convex (DC) functions, where f1, f2: R^n -> R.
- 'problems' is a list of problems.
- An 'algorithm' is code that tries to find a problem's local minima.
- An 'experiment' is an algorMeter run with a list of problems and a list of algorithms, producing a result report.
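As a concrete illustration of a DC function (plain numpy, independent of the AlgorMeter API — the names `f1`, `f2`, `f` here are just local examples), f(x) = ||x||² − ||x||₁ is the difference of two convex components:

```python
import numpy as np

# Hypothetical DC decomposition of f(x) = ||x||^2 - ||x||_1:
# both components are convex, but their difference generally is not.
def f1(x):
    return float(np.dot(x, x))        # ||x||^2, convex

def f2(x):
    return float(np.sum(np.abs(x)))   # ||x||_1, convex

def f(x):
    return f1(x) - f2(x)              # DC function f = f1 - f2

x = np.array([0.5, -0.5])
print(f1(x), f2(x), f(x))             # 0.5 1.0 -0.5
```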

## How to use...

### Implement an algorithm...

Copy and customize one of the included algorithm examples *(there are many: example?.py)*, like the following:

```python
import numpy as np

def gradient(p, **kwargs):
    '''Simple gradient method with step size 1/(k+1)'''
    for k in p.loop():
        p.Xkp1 = p.Xk - 1/(k+1) * p.gfXk / np.linalg.norm(p.gfXk)
```

and refer to the following available system properties

| algorMeter properties | Description |
|-----|-----------|
| k, p.K | current iteration |
| p.Xk | current point |
| p.Xkp1 | next point. **to be set for the next iteration** |
| p.fXk | p.f(p.Xk) = p.f1(p.Xk) - p.f2(p.Xk) |
| p.fXkPrev | previous iteration f(x) |
| p.f1Xk | p.f1(p.Xk) |
| p.f2Xk | p.f2(p.Xk) |
| p.gfXk | p.gf(p.Xk) = p.gf1(p.Xk) - p.gf2(p.Xk) |
| p.gf1Xk | p.gf1(p.Xk) |
| p.gf2Xk | p.gf2(p.Xk) |
| p.optimumPoint | optimum X |
| p.optimumValue | p.f(p.optimumPoint) |
| p.XStart | start point |

to determine p.Xkp1 for the next iteration.  
...and run it:

```python
df, pv = algorMeter(algorithms = [gradient], problems = probList_covx, iterations = 500, absTol=1E-2)
print('\n', pv,'\n', df)
```

pv and df are pandas dataframes with the run results.

<img src="figs/dataframe.png" alt="dataframe result" width="720px" height="230px"/>

A .csv file with the results is also created in the csv folder.


*(see example\*.py)*

## AlgorMeter interface

```python
def algorMeter(algorithms, problems, tuneParameters = None, iterations = 500, timeout = 180,
    runs = 1, trace = False, dbprint = False, csv = True, savedata = False,
    absTol = 1.E-4, relTol = 1.E-5, **kwargs):
```

- algorithms: list of algorithms. *(algoList_simple is available)*
- problems: list of problems. See the problems list in example4.py for the syntax. *(probList_base, probList_covx, probList_DCJBKM are available)*
- tuneParameters = None: see the tuneParameters section
- iterations = 500: maximum number of iterations
- timeout = 180: timeout in seconds
- runs = 1: see the random start point section
- trace = False: see the trace section
- dbprint = False: see the dbprint section
- csv = True: write a report in csv format to the csv folder
- savedata = False: save data in the data folder
- absTol = 1.E-4, relTol = 1.E-5: tolerances used in numpy allclose and isclose
- **kwargs: python kwargs propagated to the algorithms
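The tolerances follow numpy's closeness semantics: two values a and b are considered close when |a - b| <= atol + rtol * |b|. A quick check with the default values above:

```python
import numpy as np

# numpy closeness test, as used by the default tolerances:
# |a - b| <= atol + rtol * |b|
absTol, relTol = 1.E-4, 1.E-5
print(np.isclose(1.0, 1.00005, rtol=relTol, atol=absTol))  # True:  5e-5 <= 1e-4 + 1e-5
print(np.isclose(1.0, 1.001,   rtol=relTol, atol=absTol))  # False: 1e-3 >  1e-4 + 1e-5
```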

A call to algorMeter returns two pandas dataframes, df and pv. pv is a success and fail summary count;
df is a detailed report with the following columns.

- Problem  
- Dim
- Algorithm  
- Status: Success, Fail or Error
- Iterations  
- f(XStar)  
- f(BKXStar)  
- Delta: absolute difference between f(XStar) and f(BKXStar)  
- Seconds  
- Start point  
- XStar: minimum found
- BKXStar: best known minimum
- f1, f2, gf1, gf2: effective call counts
- ...: other columns with counts from the counter.up utility (see below)


## Stop and success condition

```python
    def stop(self) -> bool:
        '''return True if experiment must stop. Override it if needed'''
        return bool(np.isclose(self.fXk,self.fXkPrev,rtol=self.relTol,atol=self.absTol)  
                  or np.allclose (self.gfXk,np.zeros(self.dimension),rtol=self.relTol,atol=self.absTol) )

    def isSuccess(self) -> bool:
        '''return True if experiment success. Override it if needed'''
        return  self.isMinimum(self.XStar)
 
```

can be overridden as in

```python
    def stop():
        ...
        return status

    p.stop = stop
    p.isSuccess = stop
```
A perhaps simpler alternative is to use a break statement in the main loop.  
See example3.py.

## Problems function call optimization

AlgorMeter embeds a feature devoted to optimizing the number of function calls: multiple function calls at the same point are accounted for only once, without storing intermediate results, which simplifies algorithm coding. So in an algorithm implementation it is not necessary to store previous results in variables in order to reduce f1, f2, gf1, gf2 function calls. AlgorMeter caches the 128 most recent calls to obtain this automatic optimization.
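The effect is comparable to memoizing recent evaluations. A minimal sketch of the idea with `functools.lru_cache` (this is an analogy, not AlgorMeter's actual internals; note the point must be hashable, e.g. a tuple):

```python
from functools import lru_cache

calls = 0

@lru_cache(maxsize=128)           # keep the 128 most recent distinct points
def f1(x):                        # x must be hashable, e.g. a tuple
    global calls
    calls += 1                    # count effective evaluations only
    return sum(v * v for v in x)

x = (1.0, 2.0)
f1(x); f1(x); f1(x)               # repeated calls at the same point
print(calls)                      # -> 1: evaluated only once
```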

## Problems ready to use

Importing 'algormeter.libs' makes the problem lists probList_base, probList_covx and probList_DCJBKM available.

 **probList_DCJBKM** contains ten frequently used unconstrained DC optimization problems, whose objective functions are presented as DC (Difference of Convex) functions:
f(x) = f1(x) - f2(x).
 [Joki, Bagirov](https://link.springer.com/article/10.1007/s10898-016-0488-3)

 **probList_covx** contains the DemMal, Mifflin1, Mifflin2, LQ, QL, MAXQ, MAXL, CB2, CB3, MaxQuad, Rosen, Shor, TR48, A48 and Goffin convex test functions/problems.

 **probList_no_covx** contains special non-convex functions: Rosenbrock, Crescent.

 **probList_base** contains the simple functions Parab, ParAbs and Acad, for early algorithm development and testing.

 See 'ProblemsLib.pdf'.

### Counters

An instruction like
> counter.up('lb<0', cls='qp')  

counts events in code; the counts are summarized in the statistics at the end of the experiment as a column, available in the dataframe returned by algorMeter and in the final csv.
For the code above, a column headed 'qp.lb<0' with the count of the counter.up calls is produced.  
The columns 'f1', 'f2', 'gf1', 'gf2' with the effective calls to f1, f2, gf1, gf2 are also available automatically.  
See example3.py.
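A rough sketch of what such an event counter does (a hypothetical stand-in built on `collections.Counter`, not AlgorMeter's counter module):

```python
from collections import Counter

events = Counter()

def up(label, cls=''):
    # record one occurrence under 'cls.label', mirroring the counter.up usage
    events[f'{cls}.{label}' if cls else label] += 1

lb = -0.5
if lb < 0:
    up('lb<0', cls='qp')
print(dict(events))   # {'qp.lb<0': 1}
```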

### dbprint = True

The instruction dbx.print produces output only if algorMeter is called with the option dbprint = True:
> dbx.print('t:', t, 'Xprev:', Xprev, 'f(Xprev):', p.f(Xprev))  

See example3.py.  
NB: if dbprint = True, python exceptions are not handled and are raised.

### trace = True

If trace = True, a line with the function values is shown in the console for each iteration, for algorithm analysis purposes:
>  Acad-2 k:0,f:-0.420,x:[ 0.7 -1.3],gf:[ 1.4 -0.6],f1:2.670,gf1:[ 3.1 -2.9],f2:3.090,gf2:[ 1.7 -2.3]   
> Acad-2 k:1,f:-1.816,x:[-1.0004 -0.5712],gf:[-8.3661e-04  8.5750e-01],f1:0.419,gf1:[-2.0013 -0.7137],f2:2.235,gf2:[-2.0004 -1.5712]  
> Acad-2 k:2,f:-1.754,x:[-0.9995 -1.4962],gf:[ 9.6832e-04 -9.9250e-01],f1:2.361,gf1:[-1.9985 -3.4887],f2:4.115,gf2:[-1.9995 -2.4962]

These lines represent the path followed by the algorithm on the specific problem.  
NB: if trace = True, python exceptions are not handled and are raised.  
See example3.py.

### tuneParameters
Sometimes it is necessary to tune some parameter combinations. Proceed as follows (see example4.py):

- Define and use module-level, non-local parameters in your algorithm code.
- Define a list of pairs with the possible values of the tuning parameters, as follows:

```python
tpar = [ # [name, [values list]]
    ('alpha', [1. + i for i in np.arange(.05,.9,.05)]),
    # ('beta', [1. + i for i in np.arange(.05,.9,.05)]),
]
```

- Call algorMeter with csv = True and tuneParameters=<list of parameters values>, e.g. tuneParameters=tpar.
- Open the csv file produced and analyze the performance of the parameter combinations by looking at the '# TuneParams' column. A pivot table on this column is useful.

See example4.py.

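Tuning a parameter list like tpar amounts to running every combination in the cartesian product of the value lists; a small illustration with `itertools.product` (hypothetical values, independent of AlgorMeter):

```python
from itertools import product

tpar = [  # (name, [values list])
    ('alpha', [1.05, 1.10, 1.15]),
    ('beta',  [0.5, 0.9]),
]
names = [name for name, _ in tpar]
combos = [dict(zip(names, values))
          for values in product(*(vals for _, vals in tpar))]
print(len(combos))   # 3 * 2 = 6 combinations to measure
print(combos[0])     # {'alpha': 1.05, 'beta': 0.5}
```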
## Random start point 

If the algorMeter parameter runs is set to a number greater than 1, each algorithm is repeated on the same problem with a random start point in the range -1 to 1 for all dimensions.
With the method setRandom(center, size), a random X can be set in the [center - size, center + size] interval.  
See example5.py.
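The interval rule of setRandom(center, size) can be sketched as follows (plain numpy, not the actual implementation; `random_start` is a hypothetical name):

```python
import numpy as np

def random_start(center, size, dimension, rng=np.random.default_rng(0)):
    # uniform X in [center - size, center + size] for every coordinate
    return rng.uniform(center - size, center + size, dimension)

x = random_start(center=0.0, size=1.0, dimension=5)   # default range [-1, 1]
print(x.min() >= -1.0 and x.max() <= 1.0)             # True
```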

## Record data 

With the option savedata = True, one file in numpy format is stored in the 'npy' folder for each experiment, containing X and Y = f(X) for all iterations.
It is a numpy array with:
> X = data[:,:-1]  
> Y = data[:,-1]

The file name is like 'gradient,JB05-50.npy'.  
These files are read by the viewer.py data visualizer.
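Reading one of these arrays back splits into X and Y as described; a sketch with a tiny stand-in array (the values are illustrative):

```python
import numpy as np

# stand-in for a saved experiment: each row is [x_1, ..., x_n, f(x)]
data = np.array([[ 0.7,    -1.3,    -0.420],
                 [-1.0004, -0.5712, -1.816]])

X = data[:, :-1]          # all iteration points
Y = data[:, -1]           # f(X) values
print(X.shape, Y.shape)   # (2, 2) (2,)
```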

## Performance Profile
Performance profile graphics, as developed by E. D. Dolan and J. J. Moré, are a powerful tool for assessing the performance of optimization software, and for this reason they are a standard accepted within the optimization community. See example2.py.

```python
    df, pv = algorMeter(algorithms = algorithms, ...)

    perfProf(df, costs= ['f1','Iterations'] ) 
    # df: first pandas dataframe output of algormeter call
    # costs: list of column labels in df

    plt.show(block=True)
```
It is possible to graph performance profiles from a pandas dataframe prepared with a spreadsheet, with the mandatory columns 'Problem', 'Dim', 'Algorithm', 'Status' and the cost columns that you want to draw:
```python
    df = pd.read_excel(r'Path of Excel file\File name.xlsx', sheet_name='your Excel sheet name')

    perfProf(df, costs= ['cost1','cost2'] ) 

    plt.show(block=True)
```
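For reference, a Dolan-Moré profile plots, for each solver s, the fraction of problems whose cost ratio r(p, s) = cost(p, s) / min over solvers of cost(p, ·) is within a factor tau of the best. A minimal numpy sketch of this computation (not the perfProf implementation; the cost values are made up):

```python
import numpy as np

# cost[p, s]: cost of solver s on problem p (e.g. effective function calls)
cost = np.array([[10., 12.],
                 [20., 15.],
                 [30., 90.]])

ratios = cost / cost.min(axis=1, keepdims=True)   # r(p, s), best solver has ratio 1

def rho(s, tau):
    # fraction of problems where solver s is within factor tau of the best
    return np.mean(ratios[:, s] <= tau)

print(rho(0, 1.0))   # 2/3: solver 0 is best on two of the three problems
print(rho(1, 2.0))   # 2/3: solver 1 is within factor 2 on two problems
```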
## Minimize

In case you need to find the minimum of a problem/function by applying an algorithm developed with AlgorMeter, the minimize method is available (see example6.py):

```python
    p = MyProb(K) 
    found, x, y = p.minimize(myAlgo)
```

## Visualizer.py

Running visualizer.py produces or updates a contour image in the 'pics' folder for each experiment with dimension = 2, using the data in the 'npy' folder.

# Examples index

- example1.py: Simplest possible example 
- example2.py: Dolan, More performance profile
- example3.py: dbx.print, trace, counter.up, counter.log, override stop, break examples
- example4.py: algorithm parameters tuning 
- example5.py: multiple run of each problem with random start point
- example6.py: minimize a new problem with an algormeter algorithm


# Contributing

You can download or fork the repository freely:  
https://github.com/xedla/algormeter  
If you see a mistake, you can send me a mail at pietrodalessandro@gmail.com.  
If you open up a ticket, please make sure it describes the problem or feature request fully.  
Any suggestions are welcome.

# License
**If you use AlgorMeter for the preparation of a scientific paper, the citation with a link to this repository would be appreciated.**

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY. 

# Installation
AlgorMeter is available as a pypi pip package.
```shell
pip3 install algormeter
```

# Dependencies
Python version at least:
- Python 3.10.6

Packages installable with pip3:
- numpy
- pandas
- matplotlib

AlgorMeter plays well with [Visual Studio Code](https://code.visualstudio.com) and in Jupyter.

            
