**maadstml**

- Name: maadstml
- Version: 3.48
- Home page: https://github.com/smaurice101/transactionalmachinelearning
- Summary: Multi-Agent Accelerator for Data Science (MAADS): Transactional Machine Learning
- Author: Sebastian Maurice
- License: MIT License
- Keywords: genai, multi-agent, transactional machine learning, artificial intelligence, chatgpt, generative ai, privategpt, data streams, data science, optimization, prescriptive analytics, machine learning, automl, auto-ml, predictive analytics, advanced analytics
            **Multi-Agent Accelerator for Data Science Using Transactional Machine Learning (MAADSTML)**

*Revolutionizing Data Stream Science with Transactional Machine Learning*

**Overview**

*MAADSTML combines Artificial Intelligence, ChatGPT, PrivateGPT, Auto Machine Learning with Data Streams Integrated with Apache Kafka (or Redpanda) to create frictionless and elastic machine learning solutions.*  

This library allows users to harness the power of agent-based computing using hundreds of advanced linear and non-linear algorithms. Users can easily integrate Predictive Analytics, Prescriptive Analytics, Pre-Processing, and Optimization in any data stream solution by wrapping additional code around the functions below. It connects with **Apache Kafka brokers** for cloud-based computing, using Kafka (or Redpanda) as the data backbone. 

If analysing MILLIONS of IoT devices, you can easily deploy thousands of VIPER/HPDE instances in a Kubernetes cluster on AWS/GCP/Azure. 

It uses VIPER as a **Kafka connector and seamlessly combines Auto Machine Learning with Real-Time Machine Learning, Real-Time Optimization and Real-Time Predictions**, publishing these insights into a Kafka cluster in real-time at scale, while allowing users to consume them from anywhere, anytime and in any format. 

It also uses HPDE as the AutoML technology for TML.  Linux/Windows/Mac versions can be downloaded from [Github](https://github.com/smaurice101/transactionalmachinelearning)

It uses VIPERviz to visualize streaming insights over HTTP(S). Linux/Windows/Mac versions can be downloaded from [Github](https://github.com/smaurice101/transactionalmachinelearning)

MAADSTML details can be found in the book: [Transactional Machine Learning with Data Streams and AutoML](https://www.amazon.com/Transactional-Machine-Learning-Streams-AutoML/dp/1484270223)


To use this library, request a username and a MAADSTOKEN from **support@otics.ca**.  Once you have these credentials, install this Python library.

**Compatibility**
    - Python 3.8 or greater
    - Minimal Python skills needed

**Copyright**
   - Author: Sebastian Maurice, PhD
   
**Installation**
   - At the command prompt write:
     **pip install maadstml**
     - This assumes you have [Downloaded Python](https://www.python.org/downloads/) and installed it on your computer.  

**MAADS-VIPER Connector to Manage Apache KAFKA:** 
  - The MAADS-VIPER python library connects to VIPER instances on any server; VIPER manages Apache Kafka.  VIPER is REST-based and cross-platform, running on Windows, Linux, macOS, etc.  It also fully supports SSL/TLS encryption in Kafka brokers for producing and consuming.

**TML is integrated with PrivateGPT (https://github.com/imartinez/privateGPT), a production-ready GPT that is 100% local, 100% secure, and 100% free to access.**
  - Users need to PULL and RUN one of the privateGPT Docker containers:
    1. Docker Hub: maadsdocker/tml-privategpt-no-gpu-amd64 (without NVIDIA GPU, for AMD64 chips)
    2. Docker Hub: maadsdocker/tml-privategpt-with-gpu-amd64 (with NVIDIA GPU, for AMD64 chips)
    3. Docker Hub: maadsdocker/tml-privategpt-no-gpu-arm64 (without NVIDIA GPU, for ARM64 chips)
    4. Docker Hub: maadsdocker/tml-privategpt-with-gpu-arm64 (with NVIDIA GPU, for ARM64 chips)
  - Additional details are here: https://github.com/smaurice101/raspberrypi/tree/main/privategpt
  - TML accesses privateGPT container using REST API. 
  - For production PrivateGPT deployments, it is recommended that machines have an NVIDIA GPU, as this will lead to significant performance improvements.

- **pgptingestdocs**
  - Set Context for PrivateGPT by ingesting PDFs or text documents.  All responses will then use these documents for context.  

- **pgptgetingestedembeddings**
  - After documents are ingested, you can retrieve the embeddings for the ingested documents.  These embeddings allow you to filter the documents for specific context.  

- **pgptchat**
  - Send any prompt to privateGPT (with or without context) and get back a response.  

- **pgptdeleteembeddings**
  - Delete embeddings.  

- **pgpthealth**
  - Check the health of the privateGPT http server.  

- **vipermirrorbrokers**
  - Migrate data streams from (multiple) brokers to (multiple) brokers FAST!  In one simple function you have the 
    power to migrate from hundreds of brokers with hundreds of topics and partitions to any other brokers 
    with ease.  Viper ensures no duplication of messages and translates offsets from the last committed.  Every transaction 
    is logged, making data validation and auditability a snap.  You can also increase or decrease partitions and 
    apply filters to the topics to copy over.  
	
- **viperstreamquery**
  - Query multiple streams with conditional statements.  For example, if you preprocessed multiple streams you can 
    query them in real-time and extract powerful insights.  You can use >, <, =, AND, OR. 

- **viperstreamquerybatch**
  - Query multiple streams with conditional statements.  For example, if you preprocessed multiple streams you can 
    query them in real-time and extract powerful insights.  You can use >, <, =, AND, OR. Batch allows you to query
	multiple IDs at once.

- **viperlisttopics** 
  - List all topics in Kafka brokers
 
- **viperdeactivatetopic**
  - Deactivate topics in Kafka brokers and prevent unused algorithms from consuming storage and computing resources that cost money 

- **viperactivatetopic**
  - Activate topics in Kafka brokers 

- **vipercreatetopic**
  - Create topics in Kafka brokers 
  
- **viperstats**
  - List all stats from Kafka brokers, giving VIPER and Kafka admins an end-to-end view of who is producing data to algorithms and who is consuming the insights from the algorithms, including date/time stamps on the last reads/writes to topics, how many bytes were read and written to topics, and a lot more

- **vipersubscribeconsumer**
  - Admins can subscribe consumers to topics, and consumers will immediately receive insights from topics.  This also gives admins more control over who is consuming the insights and allows them to ensure any issues are resolved quickly in case something happens to the algorithms.
  
- **viperunsubscribeconsumer**
  - Admins can unsubscribe consumers from receiving insights; this is important to ensure storage and compute resources are used only for active users.  For example, if a business user leaves your company or no longer needs the insights, unsubscribing the consumer makes the algorithm STOP producing the insights.

- **viperhpdetraining**
  - Users can do real-time machine learning (RTML) on the data in Kafka topics. This is very powerful and useful for "transactional learnings" on the fly using our HPDE technology.  HPDE will find the optimal algorithm for the data in less than 60 seconds.  

- **viperhpdetrainingbatch**
  - Users can do real-time machine learning (RTML) on the data in Kafka topics. This is very powerful and useful for "transactional learnings" on the fly using our HPDE technology. 
    HPDE will find the optimal algorithm for the data in less than 60 seconds.  Batch allows you to perform ML on multiple IDs at once.

- **viperhpdepredict**
  - Using the optimal algorithm - users can do real-time predictions from streaming data into Kafka Topics.

- **viperhpdepredictprocess**
  - Using the optimal algorithm you can determine object ranking based on input data.  For example, if you want to know which human or machine is the 
    best or worst given input data then this function will return the best or worst human or machine.

- **viperhpdepredictbatch**
  - Using the optimal algorithm - users can do real-time predictions from streaming data into Kafka Topics. Batch allows you to perform predictions
    on multiple IDs at once.
  
- **viperhpdeoptimize**
  -  Users can even do optimization to MINIMIZE or MAXIMIZE the optimal algorithm to find the BEST values for the independent variables that will minimize or maximize the dependent variable.

- **viperhpdeoptimizebatch**
  -  Users can even do optimization to MINIMIZE or MAXIMIZE the optimal algorithm to find the BEST values for the independent variables that will minimize or maximize the dependent 
     variable. Batch allows you to optimize multiple IDs at once.

- **viperproducetotopic**
  - Users can produce to any topics by ingesting from any data sources.

- **viperproducetotopicbulk**
  - Users can produce to any topics by ingesting from any data sources.  Use this function to write bulk transactions at high speeds.  With the right architecture and
  network you can stream 1 million transactions per second (or more).
  
- **viperconsumefromtopic**
  - Users can consume from any topic and graph the data. 

- **viperconsumefromtopicbatch**
  - Users can consume from any topic and graph the data.  Batch allows you to consume from multiple IDs at once.
  
- **viperconsumefromstreamtopic**
  - Users can consume from multiple streams of topics at once

- **vipercreateconsumergroup**
  - Admins can create a consumer group made up of any number of consumers.  You can add as many partitions for the group in the Kafka broker as needed, as well as specify the replication factor to ensure high availability and no disruption to users who consume insights from the topics.

- **viperconsumergroupconsumefromtopic**
  - Users who are part of the consumer group can consume from the group topic.

- **viperproducetotopicstream**
  - Users can join multiple topic streams and produce the combined results to another topic.
  
- **viperpreprocessproducetotopicstream**
  - Users can pre-process data streams using the following functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, 
    ANOMPROB, ANOMPROBX-Y, ENTROPY, AUTOCORR, TREND, CONSISTENCY, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, 
    CV (Coefficient of Variation), Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff (time should be in this 
    layout: 2006-01-02T15:04:05; Timediff returns the difference in seconds between the first date/time and the last), Avgtimediff (returns the 
    average time in seconds between consecutive dates), and Geodiff (returns the distance in kilometers between two lat/long points).  Spikedetect uses a 
    Z-score method to detect spikes in the data, using a lag of 5, a standard deviation of 3.5 from the mean, and an influence of 0.5.

    Dataage_[UTC offset]_[timetype]: Dataage can be used to check the last update time of the data in the data stream against the
    current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
    You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
    will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
    between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
    for data quality and data assurance programs for any number of data streams.
		
    Unique checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

    Uniquestr checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

    Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.

    Uniquestrcount checks string data for duplication.  Returns the count of unique strings.

    CONSISTENCY checks if the data all have consistent data types.  Returns 1 for consistent data types, 0 otherwise.

    Meanci95 or Meanci99 returns a 95% or 99% confidence interval: mean, low, high.

    RAW for no processing.

    ANOMPROB=Anomaly Probability: it will run several algorithms on the data stream window to determine a probability percentage of 
    anomalous behaviour.  This can be cross-referenced with other process types, and is very useful if you want to extract aggregate 
    values that you can then use to build TML models and/or make decisions to prevent issues.  ENTROPY will compute the amount of information
    in the data stream.  AUTOCORR will run an autocorrelation regression, Y = Y(t-1), to indicate how the previous value correlates with the future 
    value.  TREND will run a linear regression of Y = f(Time) to determine if the data in the stream are increasing or decreasing.

    ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers or "n"; "n" means examine all anomalies for recurring patterns.
    These allow you to check if the anomalies in the streams are truly anomalies and not some
    pattern.  For example, if an IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact
    it is normal behaviour.  To ignore these cases, ANOMPROB2-5 tells Viper to check anomalies with patterns of 2-5 peaks.
    If the stream has two classes, and these two classes are like 0 and 1000 and show a pattern, then they should not be considered an anomaly.
    Meaning, class=0 is the device shutting down, class=1000 is the device turning back on.  With ANOMPROB3-10, Viper will check for 
    patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.

- **viperpreprocessbatch**
  - This function is similar to *viperpreprocessproducetotopicstream*; the only difference is that you can specify multiple
    tmlids in the Topicid field.  This allows you to batch process multiple tmlids at once, which is very useful if using a
    Kubernetes architecture.

- **vipercreatejointopicstreams**
  - Users can join multiple topic streams
  
- **vipercreatetrainingdata**
  - Users can create a training data set from the topic streams for Real-Time Machine Learning (RTML) on the fly.

- **vipermodifyconsumerdetails**
  - Users can modify consumer details on the topic.  When topics are created an admin must indicate name, email, location and description of the topic.  This helps to better manage the topic and if there are issues, the admin can contact the individual consuming from the topic.
  
- **vipermodifytopicdetails**
  - Users can modify details on the topic.  When topics are created an admin must indicate name, email, location and description of the topic.  This helps to better manage the topic and if there are issues, the admin can contact the developer of the algorithm and resolve issue quickly to ensure disruption to consumers is minimal.
 
- **vipergroupdeactivate**
  - Admins can deactivate a consumer group, which will stop all insights being delivered to consumers in the group.
  
- **vipergroupactivate**
  - Admins can activate a group to re-start the insights.
 
- **viperdeletetopics**
  - Admins can delete topics in VIPER database and Kafka clusters.
		
- **viperanomalytrain**
  - Perform anomaly/peer group analysis on text or numeric data streams using advanced unsupervised learning. VIPER automatically joins 
    streams and determines the peer group of "usual" behaviours using proprietary algorithms, which are then used to predict anomalies with 
    *viperanomalypredict* in real-time.  Users can use several parameters to fine-tune the peer groups.  

    *VIPER is one of the very few, if not the only, technologies to do anomaly/peer group analysis using unsupervised learning on data streams 
    with Apache Kafka.*

- **viperanomalytrainbatch**
  - Batch allows you to perform anomaly training on multiple IDs at once.

- **viperanomalypredict**
  - Predicts anomalies for text or numeric data using the peer groups found with *viperanomalytrain*.  VIPER automatically joins streams
    and compares each value with the peer groups to determine if a value is anomalous in real-time.  Users can use several parameters to fine-tune
    the analysis. 

    *VIPER is one of the very few, if not the only, technologies to do anomaly detection/predictions using unsupervised learning on data streams
    with Apache Kafka.*
		
- **viperanomalypredictbatch**
  - Batch allows you to perform anomaly prediction on multiple IDs at once.
				
- **viperstreamcorr**
  - Performs streaming correlations by joining multiple data streams with 2 variables.  Also performs cross-correlations with 4 variables.
    This is a powerful function and can offer important correlation signals between variables.  It will also correlate TEXT using 
    natural language processing (NLP).

- **viperpreprocesscustomjson**
  - Immediately start processing ANY RAW JSON data in minutes.  This is useful if you want to start processing data quickly.  

- **viperstreamcluster**
  - Perform cluster analysis on streaming data.  This uses K-Means clustering with Euclidean or EuclideanSquared algorithms to compute 
    distance.  It is a very useful function if you want to determine common behaviours between devices, patients, or other entities.
    Users can also set up email alerts if specific clusters are found.

- **vipersearchanomaly**
  - Perform advanced analysis for user searches.  This function is useful if you want to monitor what people are searching for and determine
    if the searches are anomalous and differ from the peer group of "normal" search behaviour.

- **vipernlp**
  - Perform advanced natural language summary of PDFs.

- **viperchatgpt**
  - Start a conversation with ChatGPT in real-time and stream responses.

- **viperexractpdffields**
  - Extracts fields from a PDF file.

- **viperexractpdffieldbylabel**
  - Extracts fields from a PDF file by label name.

- **videochatloadresponse**
  - Analyse videos with video ChatGPT.  This is a powerful GPT LLM that will understand and reason with videos frame by frame.  
    It will also understand the spatio-temporal frames in the video.  Video GPT runs in a container. 

- **areyoubusy**
  - If deploying thousands of VIPER/HPDE binaries in a Kubernetes cluster, you can broadcast an 'areyoubusy' message to all VIPER and HPDE
    binaries, and they will return their HOST/PORT if they are NOT busy with other tasks.  This is very convenient for dynamically managing  
    enormous load among VIPER/HPDE and allows you to dynamically assign HOST/PORT to **non-busy** VIPER/HPDE microservices.

**First import the Python library.**

**import maadstml**
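
The examples in the sections below build on a small amount of shared setup. A minimal sketch, assuming you have received credentials from support@otics.ca; the host, port, and token values here are placeholders, not library defaults:

```python
import maadstml

# Placeholder connection details - substitute your own VIPER credentials.
VIPERTOKEN = "your-viper-token"    # token from your VIPER administrator
VIPERHOST = "http://127.0.0.1"     # URL where VIPER is listening (hypothetical)
VIPERPORT = 8000                   # port VIPER is listening on (hypothetical)
```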


**1. maadstml.viperstats(vipertoken,host,port=-999,brokerhost='',brokerport=-999,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.


*brokerhost* : string, optional

- Address where Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.


*brokerport* : int, optional

- Port on which Kafka is listening.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: A JSON formatted object of all the Kafka broker information.
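
For example, a minimal sketch of retrieving broker stats, using the placeholder setup above and letting brokerhost/brokerport default to the values in VIPER.ENV:

```python
# Returns JSON describing the Kafka brokers, producers, and consumers.
result = maadstml.viperstats(VIPERTOKEN, VIPERHOST, VIPERPORT)
print(result)
```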

**2. maadstml.vipersubscribeconsumer(vipertoken,host,port,topic,companyname,contactname,contactemail,
		location,description,brokerhost='',brokerport=-999,groupid='',microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to subscribe to in Kafka broker

*companyname* : string, required

- Company name of consumer

*contactname* : string, required

- Contact name of consumer

*contactemail* : string, required

- Contact email of consumer

*location* : string, required

- Location of consumer

*description* : string, required

- Description of why consumer wants to subscribe to topic

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*groupid* : string, optional

- Subscribe consumer to group

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Consumer ID that the user must use to receive insights from topic.
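
A sketch of subscribing a consumer, with illustrative topic and contact details (the topic name here is hypothetical):

```python
# Subscribe a consumer to a topic; keep the returned consumer id - it is
# required later to consume insights from this topic.
result = maadstml.vipersubscribeconsumer(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    topic="iot-preprocess",          # hypothetical topic name
    companyname="Acme Inc.",
    contactname="Jane Doe",
    contactemail="jane@acme.com",
    location="Toronto",
    description="Consuming real-time IoT insights")
print(result)   # contains the consumer id
```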


**3. maadstml.viperunsubscribeconsumer(vipertoken,host,port,consumerid,brokerhost='',brokerport=-999,
	microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumerid* : string, required
       
- Consumer id to unsubscribe

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

RETURNS: Success/failure 
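
The matching unsubscribe call, using the consumer id returned by *vipersubscribeconsumer* (illustrative value):

```python
# Unsubscribe the consumer so the algorithm stops producing insights for it.
result = maadstml.viperunsubscribeconsumer(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    consumerid="consumer-id-from-subscribe")   # hypothetical id
print(result)   # success/failure
```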

**4. maadstml.viperproducetotopic(vipertoken,host,port,topic,producerid,enabletls=0,delay=100,inputdata='',maadsalgokey='',
	maadstoken='',getoptimal=0,externalprediction='',subtopics='',topicid=-999,identifier='',array=0,brokerhost='',
	brokerport=-999,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic or Topics to produce to.  You can separate multiple topics by a comma.  If using multiple topics, you must 
  have the same number of producer ids (separated by commas), and same number of externalprediction (separated by
  commas).  Producing to multiple topics at once is convenient for synchronizing the timing of 
  streams for machine learning.

*subtopics* : string, optional

- Enter sub-topic streams.  This is useful if you want to reduce the number of topics/partitions in Kafka by adding
  sub-topics in the main topic.  

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams,
  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated with each IoT device.
  This way, you do not create 10,000 streams, but just 1 Main Topic stream, and VIPER will add the 10,000 streams
  in the one topic.  This will also drastically reduce the partition costs.  You can also create custom machine 
  learning models, predictions, and optimization for each of the 1000 IoT devices quickly: **It is very powerful.**

"array* : int, optional

- You can stream multiple variables at once, and use array=1 to specify that the streams are an array.
  This is similar to streaming 1 ROW in a database, and useful if you want to synchronize variables for machine learning.  
  For example, if a device produces 3 streams: stream A, stream B, stream C, then rather than streaming A, B, C separately,
  you can add them to subtopics="A,B,C" and externalprediction="value_FOR_A,value_FOR_B,value_FOR_C", and specify
  array=1.  When you do machine learning on this data, the variables A, B, C are date/time synchronized,
  and you can choose which variable is the dependent variable in the viperhpdetraining function.


*identifier* : string, optional

- You can add any string identifier for the device.  For example, DSN ID, IoT device id, etc. 

*producerid* : string, required
       
- Producer ID of topic to produce to in the Kafka broker

*enabletls* : int, optional
       
- Set to 1 if Kafka broker is enabled with SSL/TLS encryption, otherwise 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from writing messages

*inputdata* : string, optional

- This is the inputdata for the optimal algorithm found by MAADS or HPDE

*maadsalgokey* : string, optional

- This should be the optimal algorithm key returned by maadstml.dotraining function.

*maadstoken* : string, optional
- If the topic is the name of the algorithm from MAADS, then a MAADSTOKEN must be specified to access the algorithm in the MAADS server

*getoptimal*: int, optional
- If you used the maadstml.OPTIMIZE function to optimize a MAADS algorithm, then if this is 1 it will only retrieve the optimal results in JSON format.

*externalprediction* : string, optional
- If you are using your own custom algorithms, then the output of your algorithm can be still used and fed into the Kafka topic.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns the value produced or results retrieved from the optimization.
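
A sketch of producing one array row of sub-topic values for a single device, following the array example above (topic, producer id, and values are illustrative):

```python
# Produce values for sub-topics voltage, current, power for IoT device 1.
result = maadstml.viperproducetotopic(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    topic="iot-mainstream",                 # hypothetical main topic
    producerid="producer-id",               # hypothetical producer id
    enabletls=1,
    externalprediction="60.5,121.0,0.83",   # values for the three sub-topics
    subtopics="voltage,current,power",
    topicid=1,                              # id of this IoT device
    identifier="device-serial-123",         # hypothetical device identifier
    array=1)
print(result)
```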

**4.1. maadstml.viperproducetotopicbulk(vipertoken,host,port,topic,producerid,inputdata,partitionsize=100,enabletls=1,delay=100,
        brokerhost='',brokerport=-999,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic or Topics to produce to.  You can separate multiple topics by a comma.  If using multiple topics, you must 
  have the same number of producer ids (separated by commas), and same number of externalprediction (separated by
  commas).  Producing to multiple topics at once is convenient for synchronizing the timing of 
  streams for machine learning.

*producerid* : string, required
       
- Producer ID of topic to produce to in the Kafka broker.  Separate multiple producer ids with comma.

*inputdata* : string, required
       
- You can write multiple transactions to each topic.  Each group of transactions must be separated by a tilde.  
  Each transaction in the group must be separated by a comma.  The number of groups must match the producerids and 
  topics.  For example, if you are writing to two topics: topic1,topic2, then the inputdata should be:
  trans1,trans2,...,transN~trans1,trans2,...,transN.  The number of transactions and topics can be any number.
  This function can be very powerful if you need to analyse millions or billions of transactions very quickly.

*partitionsize* : int, optional

- This is the number of partitions of the inputdata.  For example, if your transactions=10000 and partitionsize=100, then VIPER will 
  create partitions of size 100, resulting in 100 threads for concurrency.  The higher
  the partitionsize, the lower the number of threads.  If you want to stream lots of data fast, then a 
  partitionsize of 1 is the fastest but comes with overhead because more RAM and CPU will be consumed.

*enabletls* : int, optional
       
- Set to 1 if Kafka broker is enabled with SSL/TLS encryption, otherwise 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from writing messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: None
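
A sketch of bulk-producing two transaction groups to two topics; the tilde separates the per-topic groups and commas separate transactions, as described above (all names and values are illustrative):

```python
# Group 1 goes to topic1, group 2 goes to topic2.
inputdata = "10.1,10.7,11.2~0.91,0.87,0.95"
maadstml.viperproducetotopicbulk(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    topic="topic1,topic2",            # two hypothetical topics
    producerid="prodid1,prodid2",     # matching producer ids
    inputdata=inputdata,
    partitionsize=100,                # transactions per thread partition
    enabletls=1)
```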

**5. maadstml.viperconsumefromtopic(vipertoken,host,port,topic,consumerid,companyname,partition=-1,enabletls=0,delay=100,offset=0,
	brokerhost='',brokerport=-999,microserviceid='',topicid='-999',rollbackoffsets=0,preprocesstype='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to consume from in the Kafka broker

*preprocesstype* : string, optional

- If you only want to search for records that have a particular processtype, you can enter:
  MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB,ANOMPROBX-Y,ENTROPY, 
  AUTOCORR, TREND, CONSISTENCY, Unique, Uniquestr, Geodiff (returns distance in Kilometers between two lat/long points)
  IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), 
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Timediff: time should be in this layout:2006-01-02T15:04:05,
  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the 
  average time in seconds between consecutive dates.
  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.   

  Dataage_[UTC offset]_[timetype]: Dataage can be used to check the last update time of the data in the data stream against the
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
  will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
  between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Unique checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Uniquestr checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.
 
  Uniquestrcount checks string data for duplication.  Returns the count of unique strings.

  CONSISTENCY checks if the data all have consistent data types. Returns 1 for consistent data types, 0 otherwise.
  
  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high 

  RAW for no processing.
  
  ANOMPROB=Anomaly probability:
  it will run several algorithms on the data stream window to determine a probability of anomalous
  behaviour.  This can be cross-referenced with OUTLIERS.  It can be a very powerful way to detect
  issues with devices.
  
  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers, or "n"; "n" means examine all anomalies for patterns.
  These allow you to check if the anomalies in the streams are truly anomalies and not some
  pattern.  For example, if an IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact
  it is normal behaviour.  To ignore these cases, ANOMPROB2-5 tells Viper to check anomalies with patterns of 2-5 peaks.
  If the stream has two classes, and these two classes are like 0 and 1000 and show a pattern, then they should not be considered an anomaly.
  Meaning, class=0 is the device shutting down, class=1000 is the device turning back on.  With ANOMPROB3-10, Viper will check for 
  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.

  
*topicid* : string, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can consume on a per-device basis by entering
  the topicid that you gave when you produced the topic stream. Or, you can read from multiple topicids at the same time.  
  For example, if you have 10 ids, then you can specify each one separated by a comma: 1,2,3,4,5,6,7,8,9,10.
  VIPER will read topicids in parallel.  This can drastically speed up consumption of messages but will require more 
  CPU.

*rollbackoffsets* : int, optional, enter value between 0 and 100

- This will rollback the streams by this percentage.  For example, if using topicid, the main stream is rolled back by this
  percentage amount.

*consumerid* : string, required

- Consumer id associated with the topic

*companyname* : string, required

- Your company name

*partition* : int, optional

- Set to the Kafka partition number, or -1 to autodetect

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*offset*: int, optional

- Offset to start the reading from.  If 0, reading will start from the beginning of the topic.  If -1, VIPER will automatically 
  go to the last offset.  Or, you can extract the LastOffset from the returned JSON and use this offset for your next call.  

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the contents read from the topic.
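
A sketch of consuming the latest records for two device ids, filtered on one preprocess type (consumer id and topic are illustrative; json.loads assumes the result arrives as a JSON string):

```python
import json

# Read from the last offset (-1) for topicids 1 and 2, returning only
# records with the ANOMPROB preprocess type.
result = maadstml.viperconsumefromtopic(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    topic="iot-preprocess",
    consumerid="consumer-id-from-subscribe",
    companyname="Acme Inc.",
    partition=-1,
    enabletls=1,
    offset=-1,
    topicid="1,2",
    preprocesstype="ANOMPROB")
data = json.loads(result)
```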

**5.1 maadstml.viperconsumefromtopicbatch(vipertoken,host,port,topic,consumerid,companyname,partition=-1,enabletls=0,delay=100,offset=0,
	brokerhost='',brokerport=-999,microserviceid='',topicid='-999',rollbackoffsets=0,preprocesstype='',timedelay=0,asynctimeout=120)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional
 
- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause Viper to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600
 
*topic* : string, required
       
- Topic to consume from in the Kafka broker

*preprocesstype* : string, optional

- If you only want to search for records that have a particular processtype, you can enter:
  MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB,ANOMPROBX-Y,ENTROPY, AUTOCORR, TREND, 
  IQR (InterQuartileRange), Midhinge, CONSISTENCY, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), 
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff: time should be in this layout:2006-01-02T15:04:05,
  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the 
  average time in seconds between consecutive dates. 
  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.   
  Geodiff (returns distance in Kilometers between two lat/long points)
  Unique checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Dataage_[UTC offset]_[timetype]: Dataage can be used to check the last update time of the data in the data stream against the
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
  will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
  between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Uniquestr checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.
 
  Uniquestrcount checks string data for duplication.  Returns the count of unique strings.
  
  CONSISTENCY checks if the data all have consistent data types. Returns 1 for consistent data types, 0 otherwise.

  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high 

  RAW for no processing.

  ANOMPROB=Anomaly probability:
  it will run several algorithms on the data stream window to determine a probability of anomalous
  behaviour.  This can be cross-referenced with OUTLIERS.  It can be a very powerful way to detect
  issues with devices.
  
  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers, or "n"; "n" means examine all anomalies for patterns.
  These allow you to check if the anomalies in the streams are truly anomalies and not some
  pattern.  For example, if an IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact
  it is normal behaviour.  To ignore these cases, ANOMPROB2-5 tells Viper to check anomalies with patterns of 2-5 peaks.
  If the stream has two classes, and these two classes are like 0 and 1000 and show a pattern, then they should not be considered an anomaly.
  Meaning, class=0 is the device shutting down, class=1000 is the device turning back on.  With ANOMPROB3-10, Viper will check for 
  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.

  
*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can consume on a per-device basis by entering
  the topicid that you gave when you produced the topic stream. Or, you can read from multiple topicids at the same time.  
  For example, if you have 10 ids, then you can specify each one separated by a comma: 1,2,3,4,5,6,7,8,9,10.
  VIPER will read topicids in parallel.  This can drastically speed up consumption of messages but will require more 
  CPU.  VIPER will consume continuously from topic ids.

*rollbackoffsets* : int, optional, enter value between 0 and 100

- This will rollback the streams by this percentage.  For example, if using topicid, the main stream is rolled back by this
  percentage amount.

*consumerid* : string, required

- Consumer id associated with the topic

*companyname* : string, required

- Your company name

*partition* : int, optional

- Set to the Kafka partition number, or -1 to autodetect

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*offset*: int, optional

- Offset to start the reading from.  If 0, reading will start from the beginning of the topic.  If -1, VIPER will automatically 
  go to the last offset.  Or, you can extract the LastOffset from the returned JSON and use this offset for your next call.  

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the contents read from the topic.
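
The batch variant is invoked the same way but runs continuously in the background; a brief sketch with illustrative values:

```python
# Continuously consume for topicids 1-5, pausing 3600 seconds between
# read/write cycles to match hourly raw data.
result = maadstml.viperconsumefromtopicbatch(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    topic="iot-preprocess",
    consumerid="consumer-id-from-subscribe",
    companyname="Acme Inc.",
    offset=-1,
    topicid="1,2,3,4,5",
    timedelay=3600,
    asynctimeout=120)
```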

**6. maadstml.viperhpdepredict(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
		hpdehost,inputdata,maxrows=0,algokey='',partition=-1,offset=-1,enabletls=1,delay=1000,hpdeport=-999,brokerhost='',
		brokerport=-999,timeout=120,usedeploy=0,microserviceid='',topicid=-999, maintopic='', streamstojoin='',
		array=0,pathtoalgos='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams,
  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated with each IoT device.
  This way, you can do predictions for each IoT device using its own custom ML model.
  
*pathtoalgos* : string, required

- Enter the full path to the root folder where the algorithms are stored.
  
*maintopic* : string, optional

-  This is the name of the topic that contains the sub-topic streams.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.  

*streamstojoin* : string, optional

- These are the sub-topics you are streaming into maintopic.  To do predictions, VIPER will automatically join 
  these streams to create the input data for predictions for each Topicid.
  
*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*inputdata*: string, required

- This is a comma separated list of values that represent the independent variables in your algorithm. 
  The order must match the order of the independent variables in your algorithm. OR, you can enter a 
  data stream that contains the joined topics from *vipercreatejointopicstreams*.

*maxrows*: int, optional

- Use this to rollback the stream by maxrows offsets.  For example, if you want to make 1000 predictions
  then set maxrows=1000, and make 1000 predictions from the current offset of the independent variables.

*algokey*: string, optional

- If you know the algorithm key that was returned by VIPERHPDETRAINING then you can specify it here.
  Specifying the algokey can drastically speed up the predictions.

*partition* : int, optional

- If you know the Kafka partition used to store the data then specify it here.
  In most cases Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.
 
*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.
  
*hpdehost*: string, required

- Address of HPDE 

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on 

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.

*usedeploy* : int, optional

 - If 0, the test algorithm is used; if 1, the production algorithm is used. 
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the prediction.
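
A sketch of a real-time prediction for one device, letting VIPER join the sub-topic streams to build the input data (the HPDE address, topics, ids, and algorithm path are all placeholders):

```python
HPDEHOST = "http://127.0.0.1"   # hypothetical HPDE address
HPDEPORT = 8001                 # hypothetical HPDE port

result = maadstml.viperhpdepredict(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    consumefrom="iot-trained-params",    # hypothetical topic with trained models
    produceto="iot-predictions",
    companyname="Acme Inc.",
    consumerid="consumer-id",
    producerid="producer-id",
    hpdehost=HPDEHOST,
    inputdata="",                        # VIPER joins streamstojoin instead
    hpdeport=HPDEPORT,
    topicid=1,
    maintopic="iot-mainstream",
    streamstojoin="voltage,current,power",
    array=1,
    pathtoalgos="/viper/algos")          # hypothetical path to stored algorithms
print(result)    # JSON object of the prediction
```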

**6.1 maadstml.viperhpdepredictbatch(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
		hpdehost,inputdata,maxrows=0,algokey='',partition=-1,offset=-1,enabletls=1,delay=1000,hpdeport=-999,brokerhost='',
		brokerport=-999,timeout=120,usedeploy=0,microserviceid='',topicid="-999", maintopic='', streamstojoin='',
		array=0,timedelay=0,asynctimeout=120,pathtoalgos='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional
 
- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause Viper to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams,
  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated with each IoT device.
  This way, you can do predictions for each IoT device using its own custom ML model.  Separate multiple topicids by a 
  comma.  For example, topicid="1,2,3,4,5" and VIPER will process them at once.
    
*pathtoalgos* : string, required

- Enter the full path to the root folder where the algorithms are stored.
	
*maintopic* : string, optional

-  This is the name of the topic that contains the sub-topic streams.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.  

*streamstojoin* : string, optional

- These are the sub-topics you are streaming into maintopic.  To do predictions, VIPER will automatically join 
  these streams to create the input data for predictions for each Topicid.
  
*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*inputdata*: string, required

- This is a comma separated list of values that represent the independent variables in your algorithm. 
  The order must match the order of the independent variables in your algorithm. OR, you can enter a 
  data stream that contains the joined topics from *vipercreatejointopicstreams*.

*maxrows*: int, optional

- Use this to rollback the stream by maxrows offsets.  For example, if you want to make 1000 predictions
  then set maxrows=1000, and make 1000 predictions from the current offset of the independent variables.

*algokey*: string, optional

- If you know the algorithm key that was returned by VIPERHPDETRAINING then you can specify it here.
  Specifying the algokey can drastically speed up the predictions.

*partition* : int, optional

- If you know the Kafka partition used to store the data then specify it here.
  In most cases Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.
 
*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.
  
*hpdehost*: string, required

- Address of HPDE 

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on 

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.

*usedeploy* : int, optional

 - If 0, the test algorithm is used; if 1, the production algorithm is used. 
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the prediction.
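
The batch form mirrors the call above but takes a comma-separated list of topicids; a brief sketch, reusing the placeholder values from the viperhpdepredict sketch:

```python
# Predict for devices 1-5 in one call; HPDEHOST/HPDEPORT as defined in the
# viperhpdepredict sketch above.
result = maadstml.viperhpdepredictbatch(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    consumefrom="iot-trained-params",
    produceto="iot-predictions",
    companyname="Acme Inc.",
    consumerid="consumer-id",
    producerid="producer-id",
    hpdehost=HPDEHOST,
    inputdata="",
    hpdeport=HPDEPORT,
    topicid="1,2,3,4,5",       # multiple ids processed at once
    maintopic="iot-mainstream",
    streamstojoin="voltage,current,power",
    array=1,
    timedelay=3600,
    asynctimeout=120,
    pathtoalgos="/viper/algos")
```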

**6.2. maadstml.viperhpdepredictprocess(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,hpdehost,inputdata,processtype,maxrows=0,
                     algokey='',partition=-1,offset=-1,enabletls=1,delay=1000,hpdeport=-999,brokerhost='',brokerport=9092,
                     timeout=120,usedeploy=0,microserviceid='',topicid=-999, maintopic='',
                     streamstojoin='',array=0,pathtoalgos='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams,
  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated with each IoT device.
  This way, you can do predictions for each IoT device using its own custom ML model.
  
*pathtoalgos* : string, required

- Enter the full path to the root folder where the algorithms are stored.
  
*maintopic* : string, optional

-  This is the name of the topic that contains the sub-topic streams.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.  

*streamstojoin* : string, optional

- These are the sub-topics you are streaming into maintopic.  To do predictions, VIPER will automatically join 
  these streams to create the input data for predictions for each Topicid.
  
*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*inputdata*: string, required

- This is a comma separated list of values that represent the independent variables in your algorithm. 
  The order must match the order of the independent variables in your algorithm. OR, you can enter a 
  data stream that contains the joined topics from *vipercreatejointopicstreams*.

*processtype*: string, required

- This must be one of: max, min, avg, median, trend, all.  For example, use max to find the maximum, i.e. the best human or machine.
  Trend will compute whether the predictions are trending.  Avg is the average of all predictions.  Median is the median of the
  predictions.  All will produce all predictions.  

*maxrows*: int, optional

- Use this to rollback the stream by maxrows offsets.  For example, if you want to make 1000 predictions
  then set maxrows=1000, and make 1000 predictions from the current offset of the independent variables.

*algokey*: string, optional

- If you know the algorithm key that was returned by VIPERHPDETRAINING then you can specify it here.
  Specifying the algokey can drastically speed up the predictions.

*partition* : int, optional

- If you know the Kafka partition used to store the data then specify it here.
  In most cases Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.
 
*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.
  
*hpdehost*: string, required

- Address of HPDE 

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on 

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.

*usedeploy* : int, optional

 - If 0, the test algorithm is used; if 1, the production algorithm is used. 
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the prediction.
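
A sketch that ranks entities by their predictions, here returning the maximum (best) one; placeholder values as in the earlier sketches:

```python
# processtype="max" returns the best-ranked human or machine for the inputs;
# HPDEHOST/HPDEPORT as defined in the viperhpdepredict sketch above.
result = maadstml.viperhpdepredictprocess(
    VIPERTOKEN, VIPERHOST, VIPERPORT,
    consumefrom="iot-trained-params",
    produceto="iot-predictions",
    companyname="Acme Inc.",
    consumerid="consumer-id",
    producerid="producer-id",
    hpdehost=HPDEHOST,
    inputdata="",
    processtype="max",
    hpdeport=HPDEPORT,
    topicid=1,
    maintopic="iot-mainstream",
    streamstojoin="voltage,current,power",
    array=1,
    pathtoalgos="/viper/algos")
print(result)
```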

**7. maadstml.viperhpdeoptimize(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
		hpdehost,partition=-1,offset=-1,enabletls=0,delay=100,hpdeport=-999,usedeploy=0,ismin=1,constraints='best',
		stretchbounds=20,constrainttype=1,epsilon=10,brokerhost='',brokerport=-999,timeout=120,microserviceid='',topicid=-999)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform
  mathematical optimization for each of the 1000 IoT devices using their specific algorithm.
  
*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*partition* : int, optional

- If you know the Kafka partition used to store the data then specify it here.
  In most cases Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.
 
*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.
  
*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on 

*usedeploy* : int, optional
 - If 0, the test algorithm is used; if 1, the production algorithm is used. 

*ismin* : int, optional
- If 1, the function is minimized; if 0, the function is maximized

*constraints*: string, optional

- If "best" then HPDE will choose the best values of the independent variables to minmize or maximize the dependent variable.  
  Users can also specify their own constraints for each variable and must be in the following format: varname1:min:max,varname2:min:max,...

*stretchbounds*: int, optional

- A number between 0 and 100; this is the percentage by which to stretch the bounds on the constraints.

*constrainttype*: int, optional

- If 1 then HPDE uses the min/max of each variable for the bounds, if 2 HPDE will adjust the min/max by their standard deviation, 
  if 3 then HPDE uses stretchbounds to adjust the min/max for each variable.  

*epsilon*: int, optional

- Once HPDE finds a good local minimum/maximum, it then uses this epsilon value to find the global minimum/maximum, to ensure 
  you have the best values of the independent variables that minimize or maximize the dependent variable.
					 
*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.

 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimization details and optimal values.
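
For illustration, here is a minimal sketch of a call to this function.  The token, endpoints, topic names and ids below are placeholder assumptions, not values defined by the library:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Minimize the dependent variable using the production (deployed) algorithm,
# letting HPDE choose the best values for the independent variables
result = maadstml.viperhpdeoptimize(VIPERTOKEN, host, port,
        consumefrom="optimization-input-topic",
        produceto="optimization-results-topic",
        companyname="mycompany",
        consumerid="my-consumer-id",
        producerid="my-producer-id",
        hpdehost="http://127.0.0.1",
        hpdeport=8001,
        enabletls=1, usedeploy=1, ismin=1,
        constraints="best", stretchbounds=20, constrainttype=1, epsilon=10)
print(result)  # JSON with optimization details and optimal values
```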

**7.1 maadstml.viperhpdeoptimizebatch(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
		hpdehost,partition=-1,offset=-1,enabletls=0,delay=100,hpdeport=-999,usedeploy=0,ismin=1,constraints='best',
		stretchbounds=20,constrainttype=1,epsilon=10,brokerhost='',brokerport=-999,timeout=120,microserviceid='',topicid="-999",
		timedelay=0,asynctimeout=120)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional
 
- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because the batch runs continuously in the background, this will cause VIPER to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform
  mathematical optimization for each of the 1000 IoT devices using their specific algorithm.  Separate 
  multiple topicids by a comma.
  
*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*partition* : int, optional

- If you know the kafka partition used to store data then specify it here.
  In most cases, Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.
 
*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.
  
*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on 

*usedeploy* : int, optional

 - If 0, VIPER will use the algorithm in test; if 1, it will use the algorithm deployed in production. 

*ismin* : int, optional

- If 1, the function is minimized; if 0, the function is maximized.

*constraints*: string, optional

- If "best" then HPDE will choose the best values of the independent variables to minmize or maximize the dependent variable.  
  Users can also specify their own constraints for each variable and must be in the following format: varname1:min:max,varname2:min:max,...

*stretchbounds*: int, optional

- A number between 0 and 100, this is the percentage to stretch the bounds on the constraints.

*constrainttype*: int, optional

- If 1 then HPDE uses the min/max of each variable for the bounds, if 2 HPDE will adjust the min/max by their standard deviation, 
  if 3 then HPDE uses stretchbounds to adjust the min/max for each variable.  

*epsilon*: int, optional

- Once HPDE finds a good local minimum/maximum, it uses this epsilon value to search for the global minimum/maximum, ensuring 
  you have the best values of the independent variables that minimize or maximize the dependent variable.
					 
*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.

 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimization details and optimal values.

**8. maadstml.viperhpdetraining(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
                 hpdehost,viperconfigfile,enabletls=1,partition=-1,deploy=0,modelruns=50,modelsearchtuner=80,hpdeport=-999,
				 offset=-1,islogistic=0,brokerhost='', brokerport=-999,timeout=120,microserviceid='',topicid=-999,maintopic='',
                 independentvariables='',dependentvariable='',rollbackoffsets=0,fullpathtotrainingdata='',processlogic='',
				 identifier='',array=0,transformtype='',sendcoefto='',coeftoprocess='',coefsubtopicnames='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*transformtype* : string, optional

- You can transform the dependent and independent variables using: log-log, log-lin, or lin-log, where lin=linear and log=natural log.
  This may be useful if you want to compute price or demand elasticities.

*sendcoefto* : string, optional
 
- This is the name of the kafka topic that you want to stream the estimated parameters to.

*coeftoprocess* : string, optional

- These are the indexes of the estimated parameters.  For example, if the ML model has a constant and two estimated
  parameters, then coeftoprocess="0,1,2" means stream the constant term (at index 0) and the two estimated parameters at
  indexes 1 and 2.

*coefsubtopicnames* : string, optional

- These are the names for the estimated parameters.  For example, "constant,elasticity,elasticity2" would be streamed
  as kafka topics for *coeftoprocess*.

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can create individual 
  Machine Learning models for each IoT device in real-time.  This is a core functionality of TML solutions.
  
*array* : int, optional

- Set array=1 if the data you are consuming from is an array of multiple streams that you produced from 
  viperproducetotopic in an effort to synchronize data for training.

*maintopic* : string, optional

- This is the maintopic that contains the sub-topic streams.

*independentvariables* : string, optional

- These are the independent variables that are the subtopics.  

*dependentvariable* : string, optional

- This is the dependent variable in the subtopic streams.  

*rollbackoffsets*: int, optional

- This is the rollback percentage to create the training dataset.  VIPER will automatically create a training dataset
  using the independent and dependent variable streams.  

*fullpathtotrainingdata*: string, optional

- This is the FULL path where you want to store the training dataset.  VIPER will write the file to disk.  Make sure proper
  permissions are granted to VIPER.   For example, **c:/myfolder/mypath**

*processlogic* : string, optional

- You can dynamically build a classification model by indicating in the processlogic variable the conditions used to
  classify the dependent variable (this takes effect if islogistic=1). For example: 
  
  **processlogic='classification_name=my_prob:temperature=20.5,30:humidity=50,55'**, means the following:
   
   1. The name of the dependent variable is specified by **classification_name**
   2. Then you can specify the conditions on the streams. If your streams are Temperature and Humidity:
      if Temperature is between 20.5 and 30, then my_prob=1, otherwise my_prob=0; and
      if Humidity is between 50 and 55, then my_prob=1, otherwise my_prob=0.
   3. To specify no upper bound use *n*, and *-n* for no lower bound.
      For example, **temperature=20.5,n** means my_prob=1 when temperature >= 20.5,
      and **humidity=-n,55** means my_prob=1 when humidity <= 55.

- This allows you to classify the dependent variable with any number of variables, all in real-time!

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*identifier*: string, optional

- You can add any name or identifier like DSN ID

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*viperconfigfile* : string, required

- Full path to VIPER.ENV configuration file on server.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by kafka to store data. NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*deploy*: int, optional

- If deploy=1, this will deploy the algorithm to the Deploy folder for use in production.  If you are just testing the
  algorithm and do not want to use it in production, set deploy=0 (default).  

*modelruns*: int, optional

- Number of iterations for model training

*modelsearchtuner*: int, optional

- An integer between 0 and 100, this variable will attempt to fine-tune the model search space.  A number close to 0 means you will 
  have lots of models but their quality may be low; a number close to 100 (default=80) means you will have fewer models but their 
  quality will be higher.

*hpdeport*: int, required

- Port number HPDE is listening on 

*offset* : int, optional

 - If 0, VIPER will use the training data from the beginning of the topic
 
*islogistic*: int, optional

- If 1, HPDE will switch to logistic modeling; otherwise, modeling is continuous.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimal algorithm that best fits your data.
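
For illustration, a minimal sketch of a training call that builds a logistic model per device using the *processlogic* format described above; every topic name, id and path is a placeholder assumption:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Train a logistic model for device 1: my_prob=1 when temperature is in
# [20.5, 30] or humidity is in [50, 55], as in the processlogic example above
result = maadstml.viperhpdetraining(VIPERTOKEN, host, port,
        consumefrom="training-data-topic",
        produceto="trained-params-topic",
        companyname="mycompany",
        consumerid="my-consumer-id",
        producerid="my-producer-id",
        hpdehost="http://127.0.0.1",
        viperconfigfile="/path/to/viper.env",
        enabletls=1, deploy=1, modelruns=100, hpdeport=8001,
        islogistic=1, topicid=1,
        maintopic="iot-main-stream",
        independentvariables="temperature,humidity",
        rollbackoffsets=90,
        fullpathtotrainingdata="/path/to/mytrainingdata",
        processlogic="classification_name=my_prob:temperature=20.5,30:humidity=50,55")
print(result)  # JSON describing the best-fitting algorithm
```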

**8.1 maadstml.viperhpdetrainingbatch(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
                 hpdehost,viperconfigfile,enabletls=1,partition=-1,deploy=0,modelruns=50,modelsearchtuner=80,hpdeport=-999,
				 offset=-1,islogistic=0,brokerhost='', brokerport=-999,timeout=120,microserviceid='',topicid="-999",maintopic='',
                 independentvariables='',dependentvariable='',rollbackoffsets=0,fullpathtotrainingdata='',processlogic='',
				 identifier='',array=0,timedelay=0,asynctimeout=120)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional
 
- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because the batch runs continuously in the background, this will cause VIPER to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can create individual 
  Machine Learning models for each IoT device in real-time.  This is a core functionality of TML solutions.
  Separate multiple topic ids by comma.
  
*array* : int, optional

- Set array=1 if the data you are consuming from is an array of multiple streams that you produced from 
  viperproducetotopic in an effort to synchronize data for training.

*maintopic* : string, optional

- This is the maintopic that contains the sub-topic streams.

*independentvariables* : string, optional

- These are the independent variables that are the subtopics.  

*dependentvariable* : string, optional

- This is the dependent variable in the subtopic streams.  

*rollbackoffsets*: int, optional

- This is the rollback percentage to create the training dataset.  VIPER will automatically create a training dataset
  using the independent and dependent variable streams.  

*fullpathtotrainingdata*: string, optional

- This is the FULL path where you want to store the training dataset.  VIPER will write the file to disk.  Make sure proper
  permissions are granted to VIPER.   For example, **c:/myfolder/mypath**

*processlogic* : string, optional

- You can dynamically build a classification model by indicating in the processlogic variable the conditions used to
  classify the dependent variable (this takes effect if islogistic=1). For example: 
  
  **processlogic='classification_name=my_prob:temperature=20.5,30:humidity=50,55'**, means the following:
   
   1. The name of the dependent variable is specified by **classification_name**
   2. Then you can specify the conditions on the streams. If your streams are Temperature and Humidity:
      if Temperature is between 20.5 and 30, then my_prob=1, otherwise my_prob=0; and
      if Humidity is between 50 and 55, then my_prob=1, otherwise my_prob=0.
   3. To specify no upper bound use *n*, and *-n* for no lower bound.
      For example, **temperature=20.5,n** means my_prob=1 when temperature >= 20.5,
      and **humidity=-n,55** means my_prob=1 when humidity <= 55.

- This allows you to classify the dependent variable with any number of variables, all in real-time!

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*identifier*: string, optional

- You can add any name or identifier like DSN ID

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*viperconfigfile* : string, required

- Full path to VIPER.ENV configuration file on server.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by kafka to store data. NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*deploy*: int, optional

- If deploy=1, this will deploy the algorithm to the Deploy folder for use in production.  If you are just testing the
  algorithm and do not want to use it in production, set deploy=0 (default).  

*modelruns*: int, optional

- Number of iterations for model training

*modelsearchtuner*: int, optional

- An integer between 0 and 100, this variable will attempt to fine-tune the model search space.  A number close to 0 means you will 
  have lots of models but their quality may be low; a number close to 100 (default=80) means you will have fewer models but their 
  quality will be higher.

*hpdeport*: int, required

- Port number HPDE is listening on 

*offset* : int, optional

 - If 0, VIPER will use the training data from the beginning of the topic
 
*islogistic*: int, optional

- If 1, HPDE will switch to logistic modeling; otherwise, modeling is continuous.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimal algorithm that best fits your data.
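
The batch variant follows the same pattern as function 8; the sketch below (placeholder values again) only highlights the batch-specific parameters:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Train one model per comma-separated topicid, pausing between Kafka
# reads/writes and bounding the async library call with a timeout
result = maadstml.viperhpdetrainingbatch(VIPERTOKEN, host, port,
        consumefrom="training-data-topic",
        produceto="trained-params-topic",
        companyname="mycompany",
        consumerid="my-consumer-id",
        producerid="my-producer-id",
        hpdehost="http://127.0.0.1",
        viperconfigfile="/path/to/viper.env",
        hpdeport=8001,
        topicid="1,2,3",   # one model per topicid
        timedelay=3600,    # pause 3600 seconds between reads/writes
        asynctimeout=300)  # timeout for the async library call
```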

**9. maadstml.viperproducetotopicstream(vipertoken,host,port,topic,producerid,offset,maxrows=0,enabletls=0,delay=100,
	brokerhost='',brokerport=-999,microserviceid='',topicid=-999,mainstreamtopic='',streamstojoin='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to produce to in the Kafka broker.  This topic contains multiple sub-topics; VIPER will consume from each sub-topic and 
  write the results back to this topic

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can join these streams
  and produce them to one stream.

*mainstreamtopic*: string, optional

- This is the main stream topic that contain the subtopic streams.

*streamstojoin*: string, optional

- These are the streams you want to join and produce to mainstreamtopic.

*producerid* : string, required

- Producerid of the topic to produce to  

*offset* : int
 
 - If 0, VIPER will use the stream data from the beginning of the topics; -1 will automatically go to the last offset

*maxrows* : int, optional
 
 - If offset=-1, this number will roll back the streams by maxrows, i.e. rollback = lastoffset - maxrows
 
*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the data produced to the topic stream.
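
For illustration, a minimal sketch (placeholder values) that joins three hypothetical sub-topic streams for one device into a main stream:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Join the sub-topic streams for device 1 and produce them to one stream,
# rolling back 500 offsets from the last offset
result = maadstml.viperproducetotopicstream(VIPERTOKEN, host, port,
        topic="iot-main-stream",
        producerid="my-producer-id",
        offset=-1, maxrows=500, enabletls=1, topicid=1,
        mainstreamtopic="iot-main-stream",
        streamstojoin="temperature,humidity,power")
```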

**10. maadstml.vipercreatetrainingdata(vipertoken,host,port,consumefrom,produceto,dependentvariable,
		independentvariables,consumerid,producerid,companyname,partition=-1,enabletls=0,delay=100,
		brokerhost='',brokerport=-999,microserviceid='',topicid=-999)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required
       
- Topic to consume from 

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams 
  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated to each IoT device.
  You can create training dataset for each device.

*produceto* : string, required
       
- Topic to produce to 

*dependentvariable* : string, required
       
- Topic name of the dependentvariable 
 
*independentvariables* : string, required
       
- Topic names of the independentvariables - VIPER will automatically read the data streams.  
  Separate multiple variables by comma. 

*consumerid* : string, required

- Consumerid of the topic to consume from  

*producerid* : string, required

- Producerid of the topic to produce to  
 
*partition* : int, optional

- This is the partition that Kafka stored the stream data.  Specifically, the streams you joined 
  from function *viperproducetotopicstream* will be stored in a partition by Kafka, if you 
  want to create a training dataset from these data, then you should use this partition.  This
  ensures you are using the right data to create a training dataset.
    
*companyname* : string, required

- Your company name  

*enabletls*: int, optional

- Set to 1 if Kafka broker is enabled for SSL/TLS encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the training data set.
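
A minimal sketch (placeholder values) of creating a training dataset from previously joined streams:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Build a training dataset for device 1 from the joined stream topic
result = maadstml.vipercreatetrainingdata(VIPERTOKEN, host, port,
        consumefrom="iot-main-stream",
        produceto="training-data-topic",
        dependentvariable="power",
        independentvariables="temperature,humidity",
        consumerid="my-consumer-id",
        producerid="my-producer-id",
        companyname="mycompany",
        partition=-1, enabletls=1, topicid=1)
```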

**11. maadstml.vipercreatetopic(vipertoken,host,port,topic,companyname,contactname,contactemail,location,
description,enabletls=0,brokerhost='',brokerport=-999,numpartitions=1,replication=1,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to create 

*companyname* : string, required

- Company name of consumer

*contactname* : string, required

- Contact name of consumer

*contactemail* : string, required

- Contact email of consumer

*location* : string, required

- Location of consumer

*description* : string, required

- Description of why consumer wants to subscribe to topic

*enabletls* : int, optional

- Set to 1 if Kafka is SSL/TLS enabled for encrypted traffic, otherwise 0 for no encryption (plain text)

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*numpartitions*: int, optional

- Number of partitions to create in the Kafka broker - the more partitions, the faster Kafka will produce results.

*replication*: int, optional

- Specifies the number of brokers to replicate to - this is important for failover
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the producer id for the topic.
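
A minimal sketch (placeholder values) of creating a topic with two partitions replicated across three brokers:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

result = maadstml.vipercreatetopic(VIPERTOKEN, host, port,
        topic="iot-preprocess",
        companyname="mycompany",
        contactname="Jane Smith",
        contactemail="jane@mycompany.com",
        location="Toronto",
        description="Preprocessed IoT device data",
        enabletls=1, numpartitions=2, replication=3)
print(result)  # JSON containing the producer id for the new topic
```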

**12. maadstml.viperconsumefromstreamtopic(vipertoken,host,port,topic,consumerid,companyname,partition=-1,
        enabletls=0,delay=100,offset=0,brokerhost='',brokerport=-999,microserviceid='',topicid=-999)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to consume from 

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can consume the stream 
  for each device.

*consumerid* : string, required

- Consumerid associated with topic

*companyname* : string, required

- Your company name

*partition*: int, optional

- Set to a kafka partition number, or -1 to autodetect partition.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backsout from reading messages

*offset* : int, optional

- Offset to start reading from.  If 0, VIPER will read from the beginning.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the contents of all the topics read
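
A minimal sketch (placeholder values) of consuming a stream topic for one device from the beginning:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Read the joined stream for device 1 from offset 0 (the beginning)
result = maadstml.viperconsumefromstreamtopic(VIPERTOKEN, host, port,
        topic="iot-main-stream",
        consumerid="my-consumer-id",
        companyname="mycompany",
        partition=-1, enabletls=1, offset=0, topicid=1)
```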


**13. maadstml.vipercreatejointopicstreams(vipertoken,host,port,topic,topicstojoin,companyname,contactname,contactemail,
		description,location,enabletls=0,brokerhost='',brokerport=-999,replication=1,numpartitions=1,microserviceid='',
		topicid=-999)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to consume from 

*topicid* : int, optional

- Topicid represents an id for some entity.  Create a joined topic stream per topicid.

*topicstojoin* : string, required

- Enter two or more topics separated by a comma and VIPER will join them into one topic

*companyname* : string, required

- Company name of consumer

*contactname* : string, required

- Contact name of consumer

*contactemail* : string, required

- Contact email of consumer

*location* : string, required

- Location of consumer

*description* : string, required

- Description of why consumer wants to subscribe to topic

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled, otherwise set to 0 for plaintext.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*numpartitions* : int, optional

- Number of partitions

*replication* : int, optional

- Replication factor

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the producerid of the joined streams
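
A minimal sketch (placeholder values) of joining three active topics into one stream:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

result = maadstml.vipercreatejointopicstreams(VIPERTOKEN, host, port,
        topic="iot-joined-streams",
        topicstojoin="temperature,humidity,power",
        companyname="mycompany",
        contactname="Jane Smith",
        contactemail="jane@mycompany.com",
        description="Joined IoT streams for training",
        location="Toronto",
        enabletls=1)
print(result)  # JSON containing the producerid of the joined stream
```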
								
**14. maadstml.vipercreateconsumergroup(vipertoken,host,port,topic,groupname,companyname,contactname,contactemail,
		description,location,enabletls=1,brokerhost='',brokerport=-999,microserviceid='')**
		
**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to add to the group; multiple (active) topics can be separated by comma 

*groupname* : string, required

- Enter the name of the group

*companyname* : string, required

- Company name of consumer

*contactname* : string, required

- Contact name of consumer

*contactemail* : string, required

- Contact email of consumer

*location* : string, required

- Location of consumer

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled, otherwise set to 0 for plaintext.

*description* : string, required

- Description of why consumer wants to subscribe to topic

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the groupid of the group.
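
A minimal sketch (placeholder values) of grouping two active topics under one consumer group:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

result = maadstml.vipercreateconsumergroup(VIPERTOKEN, host, port,
        topic="iot-main-stream,iot-joined-streams",
        groupname="iot-group",
        companyname="mycompany",
        contactname="Jane Smith",
        contactemail="jane@mycompany.com",
        description="Consumer group for IoT streams",
        location="Toronto",
        enabletls=1)
print(result)  # JSON containing the groupid
```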
								
**15. maadstml.viperconsumergroupconsumefromtopic(vipertoken,host,port,topic,consumerid,groupid,companyname,
		partition=-1,enabletls=0,delay=100,offset=0,rollbackoffset=0,brokerhost='',brokerport=-999,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to add to the group; multiple (active) topics can be separated by comma 

*consumerid* : string, required

- Enter the consumerid associated with the topic

*groupid* : string, required

- Enter the groups id

*companyname* : string, required

- Enter the company name

*partition*: int, optional

- Set to a Kafka partition number, or -1 to autodetect.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backsout from reading messages

*offset* : int, optional

- Offset to start reading from.  If 0, VIPER will read from the beginning of the topic; -1 will automatically go to the end of the topic.

*rollbackoffset* : int, optional

- The number of offsets to rollback the data stream.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the contents of the group.
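
A minimal sketch (placeholder values) of consuming the last 50 offsets of a topic on behalf of a group:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

result = maadstml.viperconsumergroupconsumefromtopic(VIPERTOKEN, host, port,
        topic="iot-main-stream",
        consumerid="my-consumer-id",
        groupid="my-group-id",
        companyname="mycompany",
        partition=-1, enabletls=1,
        offset=-1, rollbackoffset=50)
```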
    
**16. maadstml.vipermodifyconsumerdetails(vipertoken,host,port,topic,companyname,consumerid,contactname='',
contactemail='',location='',brokerhost='',brokerport=9092,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to add to the group; multiple (active) topics can be separated by comma 

*consumerid* : string, required

- Enter the consumerid associated with the topic

*companyname* : string, required

- Enter the company name

*contactname* : string, optional

- Enter the contact name 

*contactemail* : string, optional

- Enter the contact email

*location* : string, optional

- Enter the location

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure

**17. maadstml.vipermodifytopicdetails(vipertoken,host,port,topic,companyname,partition=0,enabletls=1,
          isgroup=0,contactname='',contactemail='',location='',brokerhost='',brokerport=9092,microserviceid='')**
     
**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to add to the group; multiple (active) topics can be separated by comma 

*companyname* : string, required

- Enter the company name

*partition* : int, optional

- You can change the partition in the Kafka topic.

*enabletls* : int, optional

- If enabletls=1, then SSL/TLS is enabled in Kafka; if enabletls=0, it is not.

*isgroup* : int, optional

- This tells VIPER whether this is a group topic if isgroup=1, or a normal topic if isgroup=0

*contactname* : string, optional

- Enter the contact name 

*contactemail* : string, optional

- Enter the contact email

*location* : string, optional

- Enter the location

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure

**18. maadstml.viperactivatetopic(vipertoken,host,port,topic,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to activate

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure
    
**19. maadstml.viperdeactivatetopic(vipertoken,host,port,topic,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to deactivate

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure
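
Functions 18 and 19 can be paired to pause and resume a topic; a minimal sketch with a placeholder topic name:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Deactivate a topic that is temporarily not producing, then reactivate it
maadstml.viperdeactivatetopic(VIPERTOKEN, host, port, topic="iot-main-stream")
maadstml.viperactivatetopic(VIPERTOKEN, host, port, topic="iot-main-stream")
```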

**20. maadstml.vipergroupactivate(vipertoken,host,port,groupname,groupid,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*groupname* : string, required
       
- Name of the group

*groupid* : string, required
       
- ID of the group

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure
   
**21.  maadstml.vipergroupdeactivate(vipertoken,host,port,groupname,groupid,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*groupname* : string, required
       
- Name of the group

*groupid* : string, required
       
- ID of the group

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure
   
**22. maadstml.viperdeletetopics(vipertoken,host,port,topic,enabletls=1,brokerhost='',brokerport=9092,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to delete.  Separate multiple topics by a comma.

*enabletls* : int, optional

- If enabletls=1, then SSL/TLS is enabled on Kafka; if enabletls=0, it is not.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.
   
**23.  maadstml.balancebigdata(localcsvfile,numberofbins,maxrows,outputfile,bincutoff,distcutoff,startcolumn=0)**

**Parameters:**	

*localcsvfile* : string, required

- Local file, must be CSV formatted.

*numberofbins* : int, required

- The number of bins for the histogram. You can set to any value but 10 is usually fine.

*maxrows* :  int, required

- The number of rows to return, which will be a subset of your original data.

*outputfile* : string, required

- Your new data will be written as CSV to this file.

*bincutoff* : float, required. 

-  This is the threshold percentage for the bins. Specifically, the data in each variable is allocated to bins, but many 
   times it will not fall in ALL of the bins.  By setting this percentage between 0 and 1, MAADS will choose variables that
   exceed this threshold to determine which variables have data that are well distributed across bins.  The variables
   with the most distributed values in the bins will drive the selection of the rows in your dataset that give the best
   distribution - this will be very important for MAADS training.  Usually 0.7 is good.

*distcutoff* : float, required. 

-  This is the threshold percentage for the distribution. Specifically, MAADS uses a Lilliefors statistic to determine whether 
   the data are well distributed.  The lower the number the better.  Usually 0.45 is good.
   
*startcolumn* : int, optional

- This tells MAADS which column to start from.  If you have DATE in the first column, you can tell MAADS to start from 1 (columns are zero-based)

RETURNS: Returns a detailed JSON object, and the new balanced dataset is written to outputfile.
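
A minimal sketch (hypothetical file paths) that reduces a large local CSV to 1000 well-distributed rows, skipping a DATE column at index 0:

```python
import maadstml

result = maadstml.balancebigdata("c:/myfolder/rawdata.csv",
        numberofbins=10, maxrows=1000,
        outputfile="c:/myfolder/balanceddata.csv",
        bincutoff=0.7, distcutoff=0.45,
        startcolumn=1)  # skip the DATE column at index 0
print(result)  # detailed JSON; the balanced rows are written to outputfile
```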

**24. maadstml.viperanomalytrain(vipertoken,host,port,consumefrom,produceto,producepeergroupto,produceridpeergroup,consumeridproduceto,
                      streamstoanalyse,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
                      enabletls=1,partition=-1,hpdeport=-999,topicid=-999,maintopic='',rollbackoffsets=0,fullpathtotrainingdata='',
					  brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly detection/predictions
  for each device.

*maintopic* : string, optional

- This is the maintopic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to rollback the streams that you are analysing: streamstoanalyse

*fullpathtotrainingdata*: string, optional

- This is the full path to the training dataset to use to find peer groups.

*producepeergroupto* : string, required

- Topic to produce the peer group for anomaly comparisons 

*produceridpeergroup* : string, required

- Producerid for the peer group topic

*consumeridproduceto* : string, required

- Consumer id for the Produceto topic 

*streamstoanalyse* : string, required

- Comma separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *topic=[topic name],topictype=[numeric or string],threshnumber=[a number between 0 and 10000, i.e. 200],
  lag=[a number between 1 and 20, i.e. 5],zthresh=[a number between 1 and 5, i.e. 2.5],influence=[a number between 0 and 1 i.e. 0.5]*
  
  *threshnumber*: decimal number to determine usual behaviour - only for numeric streams.  Numbers are compared to the centroid number; 
  a standardized distance is taken, and all numbers below the threshnumber are deemed usual, i.e. with threshnumber=200 any value 
  below it is close to the centroid - you need to experiment with this number.
  
  *lag*: number of lags for the moving mean window, works to smooth the function i.e. lag=5
  
  *zthresh*: number of standard deviations from moving mean i.e. 3.5
  
  *influence*: strength in identifying outliers for both stationary and non-stationary data, i.e. influence=0 ignores outliers 
  when recalculating the new threshold, influence=1 is least robust.  Influence should be between (0,1), i.e. influence=0.5
  
  Flags must be provided for each topic.  Separate multiple flags by ~

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*viperconfigfile* : string, required

- Full path to VIPER.ENV configuration file on server.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by kafka to store data. NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*hpdeport*: int, required

- Port number HPDE is listening on 

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*delay* : int, optional

- delay parameter to wait for Kafka to respond - in milliseconds.

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the peer groups for all the streams.
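
A minimal sketch (placeholder values) that finds peer groups for two hypothetical numeric streams, with one flag per topic separated by ~, following the flag format above:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# One flag per stream, separated by ~, following the documented format
flags = ("topic=temperature,topictype=numeric,threshnumber=200,"
         "lag=5,zthresh=2.5,influence=0.5~"
         "topic=humidity,topictype=numeric,threshnumber=200,"
         "lag=5,zthresh=2.5,influence=0.5")

result = maadstml.viperanomalytrain(VIPERTOKEN, host, port,
        consumefrom="iot-main-stream",
        produceto="anomaly-results-topic",
        producepeergroupto="peer-group-topic",
        produceridpeergroup="peergroup-producer-id",
        consumeridproduceto="produceto-consumer-id",
        streamstoanalyse="temperature,humidity",
        companyname="mycompany",
        consumerid="my-consumer-id",
        producerid="my-producer-id",
        flags=flags,
        hpdehost="http://127.0.0.1",
        viperconfigfile="/path/to/viper.env",
        enabletls=1, hpdeport=8001, topicid=1,
        rollbackoffsets=90,
        fullpathtotrainingdata="/path/to/mytrainingdata")
print(result)  # JSON of the peer groups for all the streams
```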

**24.1 maadstml.viperanomalytrainbatch(vipertoken,host,port,consumefrom,produceto,producepeergroupto,produceridpeergroup,consumeridproduceto,
                      streamstoanalyse,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
                      enabletls=1,partition=-1,hpdeport=-999,topicid="-999",maintopic='',rollbackoffsets=0,fullpathtotrainingdata='',
					  brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='',timedelay=0,asynctimeout=120)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional
 
- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because the batch runs continuously in the background, this will cause VIPER to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly detection/predictions
  for each device.  Separate multiple topicids by a comma.

*maintopic* : string, optional

- This is the maintopic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to rollback the streams that you are analysing: streamstoanalyse

*fullpathtotrainingdata*: string, optional

- This is the full path to the training dataset to use to find peer groups.

*producepeergroupto* : string, required

- Topic to produce the peer group for anomaly comparisons 

*produceridpeergroup* : string, required

- Producerid for the peer group topic

*consumeridproduceto* : string, required

- Consumer id for the Produceto topic 

*streamstoanalyse* : string, required

- Comma separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *topic=[topic name],topictype=[numeric or string],threshnumber=[a number between 0 and 10000, i.e. 200],
  lag=[a number between 1 and 20, i.e. 5],zthresh=[a number between 1 and 5, i.e. 2.5],influence=[a number between 0 and 1 i.e. 0.5]*
  
  *threshnumber*: decimal number to determine usual behaviour - only for numeric streams.  Numbers are compared to the centroid number; 
  a standardized distance is taken, and all numbers below the threshnumber are deemed usual, i.e. with threshnumber=200 any value 
  below it is close to the centroid - you need to experiment with this number.
  
  *lag*: number of lags for the moving mean window, works to smooth the function i.e. lag=5
  
  *zthresh*: number of standard deviations from moving mean i.e. 3.5
  
  *influence*: strength in identifying outliers for both stationary and non-stationary data, i.e. influence=0 ignores outliers 
  when recalculating the new threshold, influence=1 is least robust.  Influence should be between (0,1), i.e. influence=0.5
  
  Flags must be provided for each topic.  Separate multiple flags by ~

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*viperconfigfile* : string, required

- Full path to VIPER.ENV configuration file on server.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by kafka to store data. NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*hpdeport*: int, required

- Port number HPDE is listening on 

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*delay* : int, optional

- delay parameter to wait for Kafka to respond - in milliseconds.

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the peer groups for all the streams.


**25. maadstml.viperanomalypredict(vipertoken,host,port,consumefrom,produceto,consumeinputstream,produceinputstreamtest,produceridinputstreamtest,
                      streamstoanalyse,consumeridinputstream,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
                      enabletls=1,partition=-1,hpdeport=-999,topicid=-999,maintopic='',rollbackoffsets=0,fullpathtopeergroupdata='',
					  brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*consumeinputstream* : string, required

- Topic of the input stream to test for anomalies

*produceinputstreamtest* : string, required

- Topic to store the input stream data for analysis

*produceridinputstreamtest* : string, required

- Producer id for the produceinputstreamtest topic 

*streamstoanalyse* : string, required

- Comma separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *riskscore=[a number between 0 and 1]~complete=[and, or, pvalue i.e. p50 means streams over 50% that have an anomaly]~type=[and,or - this will 
  determine what logic to apply to v and sc],topic=[topic name],topictype=[numeric or string],v=[v>some value, v<some value, or valueany],
  sc=[sc>some number, sc<some number - this is the score for the anomaly test]*
  
  If using strings, then specify the flags: type=[and,or],topic=[topic name],topictype=string,stringcontains=[0 or 1 - 1 will do a substring test, 
  0 will equate the strings],v2=[any text you want to test - use | for OR or ^ for AND],sc=[score value, sc<some value, sc>some value]
 
  *riskscore*: this is the riskscore threshold, a decimal number between 0 and 1; use this as a threshold to flag anomalies.

  *complete* : If using multiple streams, this will test the computed riskscore of each stream and perform an AND or OR across the risk values,
  taking an average of the risk scores if using AND.  Otherwise, if at least one stream exceeds the riskscore it will return.
  
  *type*: AND or OR - if using v or sc, this is used to apply the appropriate logic between v and sc.  For example, if type=or, then VIPER 
  will see if a test value is less than or greater than V, OR, the standardized value is less than or greater than sc.  
  
  *sc*: is a standardized variance between the peer group value and the test value.
  
  *v1*: is a user-chosen value which can be used to test for a particular value.  For example, if you want to flag values less than 0, 
  then choose v<0 and VIPER will flag them as anomalous.

  *v2*: if analysing string streams, v2 can be strings you want to check for. For example, to check for two
  strings, Failed and Attempt Failed, set v2=Failed^Attempt Failed, where ^ tells VIPER to perform an AND operation.  
  If you want either to exist, set v2=Failed|Attempt Failed, where | tells VIPER to perform an OR operation.

  *stringcontains* : if using string streams, and you want to see if a particular text value exists and flag it - then 
  if stringcontains=1, VIPER will test for substrings, otherwise it will equate the strings. 
  
  
  Flags must be provided for each topic.  Separate multiple flags by ~

*consumeridinputstream* : string, required

- Consumer id of the input stream topic: consumeinputstream

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*viperconfigfile* : string, required

- Full path to VIPER.ENV configuration file on server.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by kafka to store data. NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*hpdeport*: int, required

- Port number HPDE is listening on 

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly 
  prediction for each device.

*maintopic* : string, optional

- This is the maintopic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to rollback the streams that you are analysing: streamstoanalyse

*fullpathtopeergroupdata*: string, optional

- This is the full path to the peer group you found in viperanomalytrain; this will be used for anomaly detection.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*delay* : int, optional

- delay parameter to wait for Kafka to respond - in milliseconds.

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the anomaly predictions for all the streams.
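
A minimal sketch (placeholder values) that tests a hypothetical input stream against the peer group saved by viperanomalytrain, flagging values below 0 with a riskscore threshold of 0.8:

```python
import maadstml

# Placeholder credentials and endpoints - replace with your own
VIPERTOKEN = "your-viper-token"
host = "http://127.0.0.1"
port = 8000

# Hypothetical flags string following the documented format above
flags = ("riskscore=0.8~complete=and~type=and,topic=temperature,"
         "topictype=numeric,v=v<0,sc=sc>3")

result = maadstml.viperanomalypredict(VIPERTOKEN, host, port,
        consumefrom="iot-main-stream",
        produceto="anomaly-predictions-topic",
        consumeinputstream="live-input-stream",
        produceinputstreamtest="input-stream-test-topic",
        produceridinputstreamtest="inputstreamtest-producer-id",
        streamstoanalyse="temperature",
        consumeridinputstream="inputstream-consumer-id",
        companyname="mycompany",
        consumerid="my-consumer-id",
        producerid="my-producer-id",
        flags=flags,
        hpdehost="http://127.0.0.1",
        viperconfigfile="/path/to/viper.env",
        enabletls=1, hpdeport=8001, topicid=1,
        fullpathtopeergroupdata="/path/to/peergroupdata")
print(result)
```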

**25.1 maadstml.viperanomalypredictbatch(vipertoken,host,port,consumefrom,produceto,consumeinputstream,produceinputstreamtest,produceridinputstreamtest,
                      streamstoanalyse,consumeridinputstream,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
                      enabletls=1,partition=-1,hpdeport=-999,topicid="-999",maintopic='',rollbackoffsets=0,fullpathtopeergroupdata='',
					  brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='',timedelay=0,asynctimeout=120)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by VIPER administrator.

*host* : string, required
       
- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional
 
- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because the batch runs continuously in the background, this will cause VIPER to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600

*consumefrom* : string, required
       
- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*consumeinputstream* : string, required

- Topic of the input stream to test for anomalies

*produceinputstreamtest* : string, required

- Topic to store the input stream data for analysis

*produceridinputstreamtest* : string, required

- Producer id for the produceinputstreamtest topic 

*streamstoanalyse* : string, required

- Comma separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *riskscore=[a number between 0 and 1]~complete=[and, or, pvalue i.e. p50 means streams over 50% that have an anomaly]~type=[and,or - this will 
  determine what logic to apply to v and sc],topic=[topic name],topictype=[numeric or string],v=[v>some value, v<some value, or valueany],
  sc=[sc>some number, sc<some number - this is the score for the anomaly test]*
  
  If using strings, then specify the flags: type=[and,or],topic=[topic name],topictype=string,stringcontains=[0 or 1 - 1 will do a substring test, 
  0 will equate the strings],v2=[any text you want to test - use | for OR or ^ for AND],sc=[score value, sc<some value, sc>some value]
 
  *riskscore*: this is the riskscore threshold, a decimal number between 0 and 1; use this as a threshold to flag anomalies.

  *complete* : If using multiple streams, this will test the computed riskscore of each stream and perform an AND or OR across the risk values,
  taking an average of the risk scores if using AND.  Otherwise, if at least one stream exceeds the riskscore it will return.
  
  *type*: AND or OR - if using v or sc, this is used to apply the appropriate logic between v and sc.  For example, if type=or, then VIPER 
  will see if a test value is less than or greater than V, OR, the standardized value is less than or greater than sc.  
  
  *sc*: is a standardized variance between the peer group value and the test value.
  
  *v1*: is a user-chosen value which can be used to test for a particular value.  For example, if you want to flag values less than 0, 
  then choose v<0 and VIPER will flag them as anomalous.

  *v2*: if analysing string streams, v2 can be strings you want to check for. For example, to check for two
  strings, Failed and Attempt Failed, set v2=Failed^Attempt Failed, where ^ tells VIPER to perform an AND operation.  
  If you want either to exist, set v2=Failed|Attempt Failed, where | tells VIPER to perform an OR operation.

  *stringcontains* : if using string streams, and you want to see if a particular text value exists and flag it - then 
  if stringcontains=1, VIPER will test for substrings, otherwise it will equate the strings. 
  
  
  Flags must be provided for each topic.  Separate multiple flags by ~

*consumeridinputstream* : string, required

- Consumer id of the input stream topic: consumeinputstream

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE 

*viperconfigfile* : string, required

- Full path to VIPER.ENV configuration file on server.

*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by kafka to store data. NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*hpdeport*: int, required

- Port number HPDE is listening on 

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly 
  prediction for each device. Separate multiple topic ids by a comma.

*maintopic* : string, optional

- This is the maintopic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to roll back the streams that you are analysing: streamstoanalyse

*fullpathtopeergroupdata*: string, optional

- This is the full path to the peer group you found in viperanomalytrain; this will be used for anomaly detection.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file

*delay* : int, optional

- delay parameter to wait for Kafka to respond - in milliseconds.

*timeout* : int, optional

 - Number of seconds that VIPER waits when trying to make a connection to HPDE.
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the peer groups for all the streams.
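
For illustration, here is a minimal sketch of a batch anomaly-prediction call.  The token, hosts, ports, topic names, and flag values below are placeholder assumptions - replace them with your own:

    import maadstml

    # Placeholder credentials and endpoints
    VIPERTOKEN = "your-viper-token"
    host = "http://127.0.0.1"
    port = 8000

    # One flag set per stream, following the format described above
    flags = ("riskscore=0.8~complete=and~type=and,topic=iot-temperature,"
             "topictype=numeric,v=v>50,sc=sc>3")

    result = maadstml.viperanomalypredictbatch(VIPERTOKEN, host, port,
        consumefrom="iot-preprocessed", produceto="iot-anomaly-results",
        consumeinputstream="iot-input", produceinputstreamtest="iot-input-test",
        produceridinputstreamtest="ProducerId-test",
        streamstoanalyse="iot-temperature",
        consumeridinputstream="ConsumerId-input", companyname="mycompany",
        consumerid="ConsumerId", producerid="ProducerId", flags=flags,
        hpdehost="http://127.0.0.1", viperconfigfile="/viper/viper.env",
        enabletls=1, hpdeport=8001, topicid="1,2,3",
        fullpathtopeergroupdata="/viper/peergroup.txt", timedelay=3600)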

**26. maadstml.viperpreprocessproducetotopicstream(VIPERTOKEN,host,port,topic,producerid,offset,maxrows=0,enabletls=0,delay=100,
                brokerhost='',brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',preprocesslogic='',
				preprocessconditions='',identifier='',preprocesstopic='',array=0,saveasarray=0,rawdataoutput=0)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to produce to in the Kafka broker - this is a topic that contains multiple subtopic streams; VIPER will consume from each 
   subtopic and write the aggregated results back to this stream.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.  

*rawdataoutput* : int, optional

- Set rawdataoutput=1 and the raw data used for preprocessing will be added to the output json.  

*preprocessconditions* : string, optional

- You can set conditions to aggregate functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, 
  ANOMPROB,ANOMPROBX-Y, CONSISTENCY,
  ENTROPY, AUTOCORR, TREND, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), 
  Mad (Mean absolute deviation),Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff: time should be in this layout:2006-01-02T15:04:05,
  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the 
  average time in seconds between consecutive dates.
  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.
  Geodiff (returns distance in Kilometers between two lat/long points)
  Unique Checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype
  will compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference
  between the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Uniquestr Checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Uniquecount Checks numeric data for duplication.  Returns count of unique numbers.
 
  Uniquestrcount Checks string data for duplication.  Returns count of unique strings.
  
  CONSISTENCY checks if the data all have consistent data types. Returns 1 for consistent data types, 0 otherwise.

  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high 
  
  RAW for no processing.
  
  ANOMPROB=Anomaly Probability; it will run several algorithms on the data stream window to determine a probability of anomalous
  behaviour.  This can be cross-referenced with OUTLIERS.  It can be a very powerful way to detect
  issues with devices. VARIED will determine if the values in the window are all the same, or varied: it will return 1 for varied,
  0 if values are all the same.  This is useful if you want to know if something changed in the stream.
  
  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers, or "n", which means examine all anomalies for patterns.
  These allow you to check if the anomalies in the streams are truly anomalies and not some
  pattern.  For example, if an IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact
  it is normal behaviour.  To ignore these cases, ANOMPROB2-5 tells Viper to check anomalies with patterns of 2-5 peaks.
  If the stream has two classes and these two classes are like 0 and 1000, and show a pattern, then they should not be considered an anomaly.
  Meaning, class=0 is the device shutting down, and class=1000 is the device turning back on.  If ANOMPROB3-10, Viper will check for 
  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.
  
  For example, preprocessconditions='humidity=55,60:temperature=34,n', and preprocesslogic='max,count', means
  Get the MAX value of values in humidity if humidity is between [55,60], and Count values in
  temperature if temperature >=34.  
  
*preprocesstopic* : string, optional

- You can specify a topic for the preprocessed message.  VIPER will automatically dump the preprocessed results to this topic. 
  
*identifier* : string, optional 

- Add any identifier like DSN ID. 

*producerid* : string, required

- Producerid of the topic producing to  

*offset* : int, optional
 
 - If 0 will use the stream data from the beginning of the topics, -1 will automatically go to last offset

*saveasarray* : int, optional

- Set to 1 to save the preprocessed jsons as a json array.  This is very helpful if you want to do machine learning
  or further query the preprocessed json because each processed json is time-synchronized.  For example, if you want to compare
  different preprocessed streams, the date/time of the data is synchronized to show you the impacts of one
  stream on another.

*maxrows* : int, optional
 
 - If offset=-1, this number will roll back the streams by maxrows amount, i.e. rollback=lastoffset-maxrows
 
*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : int, optional

- This represents the IoT device number or any entity

*streamstojoin* : string, optional

- If you entered topicid, you need to enter the streams you want to pre-process

*preprocesslogic* : string, optional

- Here you need to specify how you want to pre-process the streams.  You can perform the following operations:
  MAX, MIN, AVG, COUNT, COUNTSTR, SUM, DIFF, DIFFMARGIN, VARIANCE, MEDIAN, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB, ANOMPROBX-Y, ENTROPY, 
  AUTOCORR, TREND, CONSISTENCY, Unique, Uniquestr, Geodiff (returns distance in Kilometers between two lat/long points),
  IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), 
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Timediff: time should be in this layout:2006-01-02T15:04:05,
  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the 
  average time in seconds between consecutive dates.
  Uniquecount Checks numeric data for duplication.  Returns count of unique numbers.

  Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype
  will compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference
  between the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.
 
  Uniquestrcount Checks string data for duplication.  Returns count of unique strings.
  
  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high 

  RAW for no processing.
  
  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.

  The order of the operations must match the 
  order of the streams.  If you specified topicid, you can perform TML on the new preprocessed streams by appending: 
  _preprocessed_processlogic
  For example, if streamstojoin="stream1,stream2,stream3", and preprocesslogic="min,max,diff", the new streams will be:
  stream1_preprocessed_Min, stream2_preprocessed_Max, stream3_preprocessed_Diff.

RETURNS: Returns preprocessed JSON.
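
As a hedged sketch, a preprocessing call that joins two hypothetical streams for one device (the token, host, port, topic, and stream names below are assumptions):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    # Take the MIN of temperature and the AVG of humidity for IoT device 1,
    # rolling back 50 offsets from the latest, and save results as a json array
    result = maadstml.viperpreprocessproducetotopicstream(VIPERTOKEN, host, port,
        topic="iot-mainstream", producerid="ProducerId", offset=-1, maxrows=50,
        enabletls=1, topicid=1, streamstojoin="temperature,humidity",
        preprocesslogic="min,avg", preprocesstopic="iot-preprocess",
        identifier="device one", saveasarray=1)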

**27. maadstml.areyoubusy(host,port)**

**Parameters:**	

*host* : string, required
 
- You can get the host by determining all the hosts that are listening on your machine.
  You can use this code: https://github.com/smaurice101/transactionalmachinelearning/blob/main/checkopenports


*port* : int, required
 
- You can get the port by determining all the ports that are listening on your machine.
  You can use this code: https://github.com/smaurice101/transactionalmachinelearning/blob/main/checkopenports
  
RETURNS: Returns a list of available VIPER and HPDE with their HOST and PORT.
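
For example (the host and port below are placeholders; use the checkopenports script above to find the services listening on your machine):

    import maadstml

    # Ask which VIPER and HPDE instances are available on this host
    result = maadstml.areyoubusy("http://127.0.0.1", 8000)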

**28. maadstml.viperstreamquery(VIPERTOKEN,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                          brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',preprocessconditions='',
                                          identifier='',preprocesstopic='',description='',array=0)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required
       
- Topic to produce to in the Kafka broker - this is a topic that contains multiple subtopic streams; VIPER will consume from each 
   subtopic and write the aggregated results back to this stream.

*producerid* : string, required
       
- Producer id of topic


*offset* : int, optional
 
 - If 0 will use the stream data from the beginning of the topics, -1 will automatically go to last offset

*maxrows* : int, optional
 
 - If offset=-1, this number will roll back the streams by maxrows amount, i.e. rollback=lastoffset-maxrows
 
*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : int, optional

- This represents the IoT device number or any entity

*streamstojoin* : string, required

- Identify multiple streams to join, separate by comma.  For example, if you preprocessed Power, Current, Voltage:
 **streamstojoin="Power_preprocessed_Avg,Current_preprocessed_Min,Voltage_preprocessed_Avg,Current_preprocessed_Trend"**

*preprocessconditions* : string, required

 - You apply strict conditions to a MAX of 3 streams.  You can use >, <, =, AND, OR.  You can add as many conditions as you like.
   Separate multiple conditions by semi-colon. You **cannot mix** AND and OR.  For example, 
  **preprocessconditions='Power_preprocessed_Avg > 139000:Power_preprocessed_Avg < 1000 or Voltage_preprocessed_Avg > 120000 
  or Current_preprocessed_Min=0:Voltage_preprocessed_Avg > 120000 and Current_preprocessed_Trend>0'**
  
*identifier*: string, optional
 
 - Add identifier text to the result.  This is a label, and is useful if you want to identify the result for some IoT device.  
 
*preprocesstopic* : string, optional

 - The topic to produce the query results to.  
 
*description* : string, optional

 - You can give each query condition a description.  Separate multiple descriptions by semi-colon.  
 
*array* : int, optional

 - Set to 1 if you are reading a JSON ARRAY, otherwise 0.
 
RETURNS: 1 if the condition is TRUE (condition met), 0 if false (condition not met)
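
A minimal sketch of a stream query, reusing the condition format shown above (the token, host, port, and topic names are assumptions):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    # Returns 1 if a condition is met, 0 otherwise
    result = maadstml.viperstreamquery(VIPERTOKEN, host, port,
        topic="iot-preprocess", producerid="ProducerId", offset=-1,
        maxrows=50, enabletls=1, topicid=1,
        streamstojoin="Power_preprocessed_Avg,Current_preprocessed_Min",
        preprocessconditions="Power_preprocessed_Avg > 139000:Current_preprocessed_Min=0",
        description="high power;zero current",
        preprocesstopic="iot-query-results")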

**28.1 maadstml.viperstreamquerybatch(VIPERTOKEN,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                          brokerport=-999,microserviceid='',topicid="-999",streamstojoin='',preprocessconditions='',
                                          identifier='',preprocesstopic='',description='',array=0,timedelay=0,asynctimeout=120)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause Viper to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600

*topic* : string, required
       
- Topic to produce to in the Kafka broker - this is a topic that contains multiple subtopic streams; VIPER will consume from each 
   subtopic and write the aggregated results back to this stream.

*producerid* : string, required
       
- Producer id of topic


*offset* : int, optional
 
 - If 0 will use the stream data from the beginning of the topics, -1 will automatically go to last offset

*maxrows* : int, optional
 
 - If offset=-1, this number will roll back the streams by maxrows amount, i.e. rollback=lastoffset-maxrows
 
*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : string, required

- This represents the IoT device number or any entity.  Separate multiple topic ids by a comma.

*streamstojoin* : string, required

- Identify multiple streams to join, separate by comma.  For example, if you preprocessed Power, Current, Voltage:
 **streamstojoin="Power_preprocessed_Avg,Current_preprocessed_Min,Voltage_preprocessed_Avg,Current_preprocessed_Trend"**

*preprocessconditions* : string, required

 - You apply strict conditions to a MAX of 3 streams.  You can use >, <, =, AND, OR.  You can add as many conditions as you like.
   Separate multiple conditions by semi-colon. You **cannot mix** AND and OR.  For example, 
  **preprocessconditions='Power_preprocessed_Avg > 139000:Power_preprocessed_Avg < 1000 or Voltage_preprocessed_Avg > 120000 
  or Current_preprocessed_Min=0:Voltage_preprocessed_Avg > 120000 and Current_preprocessed_Trend>0'**
  
*identifier*: string, optional
 
 - Add identifier text to the result.  This is a label, and is useful if you want to identify the result for some IoT device.  
 
*preprocesstopic* : string, optional

 - The topic to produce the query results to.  
 
*description* : string, optional

 - You can give each query condition a description.  Separate multiple descriptions by semi-colon.  
 
*array* : int, optional

 - Set to 1 if you are reading a JSON ARRAY, otherwise 0.
 
RETURNS: 1 if the condition is TRUE (condition met), 0 if false (condition not met)

**29. maadstml.viperpreprocessbatch(VIPERTOKEN,host,port,topic,producerid,offset,maxrows=0,enabletls=0,delay=100,
                brokerhost='',brokerport=-999,microserviceid='',topicid="-999",streamstojoin='',preprocesslogic='',
				preprocessconditions='',identifier='',preprocesstopic='',array=0,saveasarray=0,timedelay=0,asynctimeout=120,rawdataoutput=0)**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*rawdataoutput* : int, optional

- Set rawdataoutput=1 to output the raw preprocessing data to the JSON.

*timedelay* : int, optional

 - Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause Viper to pause 
   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
   every 3600 seconds, it may make sense to set timedelay=3600

*topic* : string, required
       
- Topic to produce to in the Kafka broker - this is a topic that contains multiple subtopic streams; VIPER will consume from each 
   subtopic and write the aggregated results back to this stream.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.  

*preprocessconditions* : string, optional

- You can set conditions to aggregate functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB,ANOMPROBX-Y,
  ENTROPY, AUTOCORR, TREND, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), 
  Mad (Mean absolute deviation),Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff: time should be in this layout:2006-01-02T15:04:05,
  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the 
  average time in seconds between consecutive dates.  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, 
  StD of 3.5 from mean and influence of 0.5.  Geodiff (returns distance in Kilometers between two lat/long points).

  Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype
  will compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference
  between the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.
  
  Unique Checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Uniquestr Checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.
  Uniquecount Checks numeric data for duplication.  Returns count of unique numbers. 
  Uniquestrcount Checks string data for duplication.  Returns count of unique strings.
  
  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high 

  ANOMPROB=Anomaly Probability; it will run several algorithms on the data stream window to determine a probability of anomalous
  behaviour.  This can be cross-referenced with OUTLIERS.  It can be a very powerful way to detect
  issues with devices. VARIED will determine if the values in the window are all the same, or varied: it will return 1 for varied,
  0 if values are all the same.  This is useful if you want to know if something changed in the stream.
  
  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers, or "n", which means examine all anomalies for patterns.
  These allow you to check if the anomalies in the streams are truly anomalies and not some
  pattern.  For example, if an IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact
  it is normal behaviour.  To ignore these cases, ANOMPROB2-5 tells Viper to check anomalies with patterns of 2-5 peaks.
  If the stream has two classes and these two classes are like 0 and 1000, and show a pattern, then they should not be considered an anomaly.
  Meaning, class=0 is the device shutting down, and class=1000 is the device turning back on.  If ANOMPROB3-10, Viper will check for 
  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.
  
  For example, preprocessconditions='humidity=55,60:temperature=34,n', and preprocesslogic='max,count', means
  Get the MAX value of values in humidity if humidity is between [55,60], and Count values in
  temperature if temperature >=34.  
  
*preprocesstopic* : string, optional

- You can specify a topic for the preprocessed message.  VIPER will automatically dump the preprocessed results to this topic. 
  
*identifier* : string, optional 

- Add any identifier like DSN ID. Note, for multiple identifiers per topicid, you can separate by pipe "|".

*producerid* : string, required

- Producerid of the topic producing to  

*offset* : int, optional
 
 - If 0 will use the stream data from the beginning of the topics, -1 will automatically go to last offset

*saveasarray* : int, optional

- Set to 1 to save the preprocessed jsons as a json array.  This is very helpful if you want to do machine learning
  or further query the preprocessed json because each processed json is time-synchronized.  For example, if you want to compare
  different preprocessed streams, the date/time of the data is synchronized to show you the impacts of one
  stream on another.

*maxrows* : int, optional
 
 - If offset=-1, this number will roll back the streams by maxrows amount, i.e. rollback=lastoffset-maxrows
 
*enabletls*: int, optional

- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file
 
*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : string, required

- This represents the IoT device number or any entity.  You can specify multiple ids 
  separated by a comma: topicid="1,2,4,5". 

*streamstojoin* : string, optional

- If you entered topicid, you need to enter the streams you want to pre-process

*preprocesslogic* : string, optional

- Here you need to specify how you want to pre-process the streams.  You can perform the following operations:
  MAX, MIN, AVG, COUNT, COUNTSTR, SUM, DIFF, VARIANCE, MEDIAN, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB, ANOMPROBX-Y, ENTROPY, AUTOCORR, TREND,
  IQR (InterQuartileRange), Midhinge, CONSISTENCY, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), 
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff: time should be in this layout:2006-01-02T15:04:05,
  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the 
  average time in seconds between consecutive dates. 
  Geodiff (returns distance in Kilometers between two lat/long points).
  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.
  Uniquecount Checks numeric data for duplication.  Returns count of unique numbers.
  Uniquestrcount Checks string data for duplication.  Returns count of unique strings.

  Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype
  will compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference
  between the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high 

  The order of the operations must match the 
  order of the streams.  If you specified topicid, you can perform TML on the new preprocessed streams by appending: 
  _preprocessed_processlogic
  For example, if streamstojoin="stream1,stream2,stream3", and preprocesslogic="min,max,diff", the new streams will be:
  stream1_preprocessed_Min, stream2_preprocessed_Max, stream3_preprocessed_Diff.

RETURNS: None.

**30. maadstml.viperlisttopics(vipertoken,host,port=-999,brokerhost='', brokerport=-999,microserviceid='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.


*brokerhost* : string, optional

- Address where Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.


*brokerport* : int, optional

- Port on which Kafka is listening.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: A JSON formatted object of all the topics in the Kafka broker.
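
For example (the token, host, and port below are placeholders):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    # Returns a JSON formatted object of all topics on the broker in VIPER.ENV
    topics = maadstml.viperlisttopics(VIPERTOKEN, host, port)
    print(topics)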


**31. maadstml.viperpreprocesscustomjson(VIPERTOKEN,host,port,topic,producerid,offset,jsoncriteria='',rawdataoutput=0,maxrows=0,
                   enabletls=0,delay=100,brokerhost='',brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',preprocesslogic='',
                   preprocessconditions='',identifier='',preprocesstopic='',array=0,saveasarray=0,timedelay=0,asynctimeout=120,
                   usemysql=0,tmlfilepath='',pathtotmlattrs='')**

**Parameters:**	

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*producerid* : string, required

- Producer id of the topic.

*offset* : int, required

- Offset to consume from.  Set to -1 if consuming the last offset of topic.

*jsoncriteria* : string, required

- This is the JSON path to the data you want to consume.  It must be in the following format: 

            *UID* is the path to the main id. For example, Patient ID

            *filter* is the path to something that filters the jsons 

            *subtopics* is the path to the subtopics in the json (several paths can be specified)

            *values* is the path to the Values of the subtopics - Subtopic and Value must have a 1-1 match

            *identifiers* is the path to any special identifiers for the subtopics

            *datetime* is the path to the datetime of the message

            *msgid* is the path to any msg id

*For example:*

     jsoncriteria='uid=subject.reference,filter:resourceType=Observation~\
                   subtopics=code.coding.0.code,component.0.code.coding.0.code,component.1.code.coding.0.code~\
                   values=valueQuantity.value,component.0.valueQuantity.value,component.1.valueQuantity.value~\
                   identifiers=code.coding.0.display,component.0.code.coding.0.display,component.1.code.coding.0.display~\
                   datetime=effectiveDateTime~\
                   msgid=id'

*rawdataoutput* : int, optional

- Set to 1 if you want to output the raw data.  Note: this could involve a lot of data, and Kafka may refuse to write to the topic.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream

*enabletls* : int, optional

- Set to 1 for TLS-encrypted traffic

*delay* : int, optional

- Delay to wait for Kafka to finish writing to topic

*topicid* : int, optional

- Since you are consuming raw data, this is not needed.  Topicid will be set for you.

*streamstojoin* : string, optional

- This is ignored for raw data.

*preprocesslogic* : string, optional

- Specify your preprocess algorithms. You can use the aggregate functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, 
  DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, 
  ANOMPROB, ANOMPROBX-Y, CONSISTENCY,
  ENTROPY, AUTOCORR, TREND, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), 
  Mad (Mean absolute deviation),Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff: time should be in this layout:2006-01-02T15:04:05,
  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the 
  average time in seconds between consecutive dates.
  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.
  Geodiff (returns distance in Kilometers between two lat/long points)
  Unique Checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype
  will compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference
  between the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Uniquestr Checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.

  Uniquecount Checks numeric data for duplication.  Returns count of unique numbers.
 
  Uniquestrcount Checks string data for duplication.  Returns count of unique strings.
  
  CONSISTENCY checks if the data all have consistent data types. Returns 1 for consistent data types, 0 otherwise.

  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high 
  
  RAW for no processing.

*preprocessconditions* : string, optional

- Specify any preprocess conditions

*identifier* : string, optional

- Specify any text identifier

*preprocesstopic* : string, optional

- Specify the name of the topic to write preprocessed results.

*array* : int, optional

- Ignored for raw data - as jsoncriteria specifies json path

*saveasarray* : int, optional

- Set to 1 to save as json array

*timedelay* : int, optional

- Delay to wait for response from Kafka.

*asynctimeout* : int, optional

- Maximum delay for asyncio in Python library

*usemysql* : int, optional

- Set to 1 to specify whether MySQL is used to store TMLIDs.  This will be needed to track individual objects.

*tmlfilepath* : string, optional

- Ignored. 

*pathtotmlattrs* : string, optional

- Specify any attributes for the TMLID.  Here you can specify OEM, Latitude, Longitude, and Location JSON paths:

     pathtotmlattrs='oem=id,lat=subject.reference,long=component.0.code.coding.0.display,location=component.1.valueQuantity.value'

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.


*brokerport* : int, optional

- Port on which Kafka is listening.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: null
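
A hedged sketch that applies the jsoncriteria example above to FHIR-style Observation messages (the token, host, port, and topic names are assumptions):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    # JSON paths into the raw messages, following the format described above
    jsoncriteria = ('uid=subject.reference,filter:resourceType=Observation~'
                    'subtopics=code.coding.0.code~'
                    'values=valueQuantity.value~'
                    'identifiers=code.coding.0.display~'
                    'datetime=effectiveDateTime~'
                    'msgid=id')

    result = maadstml.viperpreprocesscustomjson(VIPERTOKEN, host, port,
        topic="fhir-observations", producerid="ProducerId", offset=-1,
        jsoncriteria=jsoncriteria, maxrows=500, enabletls=1,
        preprocesslogic="anomprob", preprocesstopic="fhir-preprocessed",
        saveasarray=1)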

**32. maadstml.viperstreamcorr(vipertoken,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                 brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',
                                 identifier='',preprocesstopic='',description='',array=0, wherecondition='',
                                 wheresearchkey='PreprocessIdentifier',rawdataoutput=1,threshhold=0,pvalue=0,
                                 identifierextractpos="",topcorrnum=5,jsoncriteria='',tmlfilepath='',usemysql=0,
                                 pathtotmlattrs='',mincorrvectorlen=5,writecorrstotopic='',outputtopicnames=0,nlp=0,
                                 correlationtype='',docrosscorr=0)**

**Parameters:**	Perform Stream correlations

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*producerid* : string, required

- Producer id of the topic.

*wherecondition* : string, optional

- Specify the where condition.  For example, if you want to filter the data on "males", enter males.  You can
  specify exact match by using [males], or substring by using (males), or "not" includes by using {males}  

*correlationtype* : string, optional

-  Specify the type of correlation you want to do.  Valid values are: kendall, spearman, pearson, ks (ks=Kolmogorov-Smirnov test).
   You can specify some, or all (leave blank and ALL will be done), separated by comma.

*docrosscorr* : int, optional

- Set to 1 if you want to do cross-correlations with 4 variables, rather than the normal 2 variables. 

*wheresearchkey* : string, optional

- Specify the where search key.  This key will be searched for "males".  

*description* : string, optional

- Specify a text description for this correlation.  

*identifierextractpos* : string, optional

- If doing correlation on data you have already preprocessed, you can extract the identifier from the identifier field
  in the preprocessed json. 

*offset* : int, required

- Offset to consume from.  Set to -1 if consuming the last offset of topic.

*mincorrvectorlen* : int, optional

- Minimum length of the data variables you are correlating.

*topcorrnum* : int, optional

- Top number of sorted correlations to output

*threshhold* : int, optional

- Threshold for the correlation coefficient.  Must range from 0-100.  All correlations will be greater than this number.

*pvalue* : int, optional

- Pvalue threshold for the p-values.  Must range from 0-100.  All p-values will be below this number.

*writecorrstotopic* : string, optional

- This is the name of the topic that Viper will write "individual" correlation results to.  

*outputtopicnames* : int, optional

- Set to 1 if you want to write out topic names.

*nlp* : int, optional

- Set to 1 if you want to correlate TEXT data by using natural language processing (NLP).

*jsoncriteria* : string, required

- This is the JSON path to the data you want to consume.  It must be in the following format: 

            *UID* is the path to the main id. For example, Patient ID

            *filter* is the path to something that filters the jsons 

            *subtopics* is the path to the subtopics in the json (several paths can be specified)

            *values* is the path to the Values of the subtopics - Subtopic and Value must have a 1-1 match

            *identifiers* is the path to any special identifiers for the subtopics

            *datetime* is the path to the datetime of the message

            *msgid* is the path to any msg id

*For example:*

     jsoncriteria='uid=subject.reference,filter:resourceType=Observation~\
                   subtopics=code.coding.0.code,component.0.code.coding.0.code,component.1.code.coding.0.code~\
                   values=valueQuantity.value,component.0.valueQuantity.value,component.1.valueQuantity.value~\
                   identifiers=code.coding.0.display,component.0.code.coding.0.display,component.1.code.coding.0.display~\
                   datetime=effectiveDateTime~\
                   msgid=id'

*rawdataoutput* : int, optional

- Set to 1 if you want to output the raw data.  Note: this could involve a lot of data, and Kafka may refuse to write to the topic.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream

*enabletls* : int, optional

- Set to 1 for TLS-encrypted traffic

*delay* : int, optional

- Delay to wait for Kafka to finish writing to topic

*topicid* : int, optional

- Since you are consuming raw data, this is not needed.  Topicid will be set for you.

*streamstojoin* : string, optional

- This is ignored for raw data.

*identifier* : string, optional

- Specify any text identifier

*preprocesstopic* : string, optional

- Specify the name of the topic to write preprocessed results.

*array* : int, optional

- Ignored for raw data - as jsoncriteria specifies json path

*usemysql* : int, optional

- Set to 1 to specify whether MySQL is used to store TMLIDs.  This will be needed to track individual objects.

*tmlfilepath* : string, optional

- Ignored. 

*pathtotmlattrs* : string, optional

- Specify any attributes for the TMLID.  Here you can specify OEM, Latitude, Longitude, and Location JSON paths:

     pathtotmlattrs='oem=id,lat=subject.reference,long=component.0.code.coding.0.display,location=component.1.valueQuantity.value'

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.


*brokerport* : int, optional

- Port on which Kafka is listening.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: null
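
A minimal sketch of a stream-correlation call (the token, host, port, and topic names are assumptions):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    # Keep the top 5 Pearson/Spearman correlations above 0.70 (threshhold=70)
    # with p-values below 5% (pvalue=5)
    result = maadstml.viperstreamcorr(VIPERTOKEN, host, port,
        topic="iot-preprocessed", producerid="ProducerId", offset=-1,
        maxrows=200, enabletls=1, threshhold=70, pvalue=5, topcorrnum=5,
        correlationtype="pearson,spearman",
        preprocesstopic="iot-correlations",
        writecorrstotopic="iot-correlations-individual",
        description="correlations between device streams")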

**33. maadstml.viperstreamcluster(vipertoken,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                          brokerport=-999,microserviceid='',topicid=-999,iterations=1000, numclusters=8,
                                          distancealgo=1,description='',rawdataoutput=0,valuekey='',filterkey='',groupkey='',
                                          identifier='',datetimekey='',valueidentifier='',msgid='',valuecondition='',
                                          identifierextractpos='',preprocesstopic='',
                                          alertonclustersize=0,alertonsubjectpercentage=50,sendalertemailsto='',emailfrequencyinseconds=0,
                                          companyname='',analysisdescription='',identifierextractposlatitude=-1,
                                          identifierextractposlongitude=-1,identifierextractposlocation=-1,
                                          identifierextractjoinedidentifiers=-1,pdfformat='',minimumsubjects=2)**


**Parameters:**	Perform Stream clustering

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.

*brokerport* : int, optional

- Port on which Kafka is listening.

*alertonsubjectpercentage* : int, optional

- Set a value between 0-100 that specifies the percentage of subjects that exceed a threshold. 

*identifierextractjoinedidentifiers* : int, optional

 - Position of additional text in the identifier field.

*pdfformat* : string, optional

- Specify the format text of the PDF to generate and email to users.  You can set title, signature, showpdfemaillist, and charttitle.

     pdfformat="title=This is a Transactional Machine Learning Auto-Generated PDF for Cluster Analysis For OTICS|signature=\
     Created by: OTICS, Toronto|showpdfemaillist=1|charttitle=Chart Shows Clusters of Patients with Similar Symptoms"

*minimumsubjects* : int, optional

- Specify the minimum number of subjects in the cluster analysis.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream

*enabletls* : int, optional

- Set to 1 for TLS-encrypted traffic

*delay* : int, optional

- Delay to wait for Kafka to finish writing to topic

*producerid* : string, required

- Producer id of the topic.

*topicid* : int, optional

- Ignored

*iterations* : int, optional

 - Number of iterations to compute clusters

*numclusters* : int, optional

 - Number of clusters you want.  Maximum is 20.

*distancealgo* : int, optional

 - Set to 1 for Euclidean, or 2 for EuclideanSquared.

*valuekey* : string, required

- JSON path to the value to cluster on 

*filterkey* : string, optional
 
 - JSON path to filter on.  Ex. Preprocesstype=Pearson, gets value from Key=Preprocesstype, and checks for value=Pearson

*groupkey* : string, optional
 
 - JSON path to group on a key.  Ex. Topicid, to group on TMLIDs

*valueidentifier* : string, optional
 
 - JSON path to text value IDs you correlated.

*msgid* : string, optional

 - JSON path for a unique message id
 
*valuecondition* : string, optional
 
 - A condition to filter numeric values on.  Ex. valuecondition="> .5", if valuekey is correlations, then all correlation > 0.5 are taken.
  
*identifierextractpos* : string, optional

 - The location of data to extract from the Identifier field.  Ex. identifierextractpos="1,2", will extract data from position 1 and 2.
 
*preprocesstopic* : string, required

 - Topic to produce results to 
 
*alertonclustersize* : int, optional

 - Size of the cluster to alert on.  Ex.  if this is 100, then when any cluster has more than 100 elements an email is sent.

*sendalertemailsto*: string, optional
 
 - List of email addresses to send alert to
 
*emailfrequencyinseconds* : int, optional

 - Seconds between emails. Ex. set to 3600, so emails will be sent every hour if the alert condition is met.

*companyname* : string, optional
 
 - Your company name
 
*analysisdescription* : string, optional

 - A detailed description of the analysis.  This will be added to the PDF.

*identifierextractposlatitude* : int, optional

- Position for latitude in the Identifier field  

*identifierextractposlongitude* : int, optional

- Position for longitude in the Identifier field  

*identifierextractposlocation* : int, optional

- Position for location in the Identifier field  

RETURNS: null
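
A hedged sketch of a clustering call that emails an alert when any cluster grows past 50 elements (all keys, topics, and addresses are assumptions):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    result = maadstml.viperstreamcluster(VIPERTOKEN, host, port,
        topic="iot-correlations", producerid="ProducerId", offset=-1,
        maxrows=500, enabletls=1, iterations=1000, numclusters=8,
        distancealgo=1,                     # 1 = Euclidean
        valuekey="correlation",             # hypothetical JSON path to cluster on
        groupkey="Topicid",                 # group on TMLIDs
        valuecondition="> .5",              # keep values above 0.5
        preprocesstopic="iot-clusters",
        alertonclustersize=50, sendalertemailsto="ops@mycompany.com",
        emailfrequencyinseconds=3600, companyname="mycompany",
        minimumsubjects=2)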

**34. maadstml.vipersearchanomaly(vipertoken,host,port,topic,producerid,offset,jsoncriteria='',rawdataoutput=0,maxrows=0,enabletls=0,delay=100,
                       brokerhost='',brokerport=-999,microserviceid='',topicid=-999,identifier='',preprocesstopic='',
                       timedelay=0,asynctimeout=120,searchterms='',entitysearch='',tagsearch='',checkanomaly=1,testtopic='',
                       includeexclude=1,anomalythreshold=0,sendanomalyalertemail='',emailfrequency=3600)**

**Parameters:**	Perform search anomaly detection

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.

*brokerport* : int, optional

- Port on which Kafka is listening.

*jsoncriteria* : string, optional

- Enter the JSON path to the search fields

*anomalythreshold* : int, optional

 - Threshold to meet to determine if a search differs from the peer group.  This is a number between 0-100.  The lower the number,
   the more the search differs from the peer group and the more likely it is anomalous.

*includeexclude* : int, optional

- Set to 1 if you want the search terms included in the user searches, 0 otherwise.

*sendanomalyalertemail* : string, optional

- List of email addresses to send alerts to: separate list by comma.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream

*enabletls* : int, optional

- Set to 1 for TLS-encrypted traffic

*delay* : int, optional

- Delay to wait for Kafka to finish writing to topic

*producerid* : string, required

- Producer id of the topic.

*emailfrequency* : int, optional

- Frequency in seconds, between alert emails.

*testtopic* : string, optional

 - ignored 

*preprocesstopic* : string, required

 - Topic to produce results to 
 
*tagsearch* : string, optional

 - Search for tags in the search.  You can enter: 'superlative,noun,interjection,verb,pronoun'

*entitysearch* : string, optional

 - Search for entities in the search.  You can enter: 'person,gpe', where gpe=Geo-political entity
 
*searchterms* : string, optional

 - You can specify your own search terms.  Separate list by comma.
 
*topicid* : int, optional

 - ignored
 
*identifier* : string, optional

- identifier text

*checkanomaly* : int, optional

- Set to 1 to check for search anomaly.

*rawdataoutput* : int, optional

- ignored

RETURNS: null
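
A hedged sketch of a search-anomaly call (the jsoncriteria path, topics, and addresses below are assumptions):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    result = maadstml.vipersearchanomaly(VIPERTOKEN, host, port,
        topic="user-searches", producerid="ProducerId", offset=-1,
        jsoncriteria="searchtext",          # hypothetical path to the search field
        maxrows=200, enabletls=1,
        preprocesstopic="search-anomalies",
        searchterms="login,password",       # your own search terms
        entitysearch="person,gpe", tagsearch="noun,verb",
        checkanomaly=1, anomalythreshold=20,
        sendanomalyalertemail="ops@mycompany.com", emailfrequency=3600)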

**35. maadstml.vipermirrorbrokers(VIPERTOKEN,host,port,brokercloudusernamepassfrom,brokercloudusernamepassto,
         enabletlsfrom,enabletlsto,
         replicationfactorfrom,replicationfactorto,compressionfrom,compressionto,
         saslfrom,saslto,partitions,brokerlistfrom,brokerlistto,                                         
         topiclist,asynctimeout=300,microserviceid="",servicenamefrom="broker",
  		 servicenameto="broker",partitionchangeperc=0,replicationchange=0,filter="",rollbackoffset=0)**

**Parameters:**	Perform Data Stream migration across brokers - fast and simple.

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*brokercloudusernamepassfrom* : string, required

- This is a comma separated list of source broker username:password. For multiple brokers separate with comma, for example for 3 brokers:
  username:password,username:password,username:password

*brokercloudusernamepassto* : string, required

- This is a comma separated list of destination broker username:password. For multiple brokers separate with comma, for example for 3 brokers:
  username:password,username:password,username:password.  The number of source and destination brokers must match.

*enabletlsfrom* : string, required

- This is a colon separated list of whether source brokers require TLS: 1=TLS, 0=NoTLS. For multiple brokers separate with colon, 
  for example for 3 brokers: 1:0:1.  Some brokers may be On-Prem and do not need TLS.
  
*enabletlsto* : string, required

- This is a colon separated list of whether destination brokers require TLS: 1=TLS, 0=NoTLS. For multiple brokers separate with colon, 
  for example for 3 brokers: 1:0:1.  Some brokers may be On-Prem and do not need TLS.

*replicationfactorfrom* : string, optional

- This is a colon separated list of the replication factor of source brokers. For multiple brokers separate with colon, 
  for example for 3 brokers: 3:4:3, or leave blank to let VIPER decide.  
  
*replicationfactorto* : string, optional

- This is a colon separated list of the replication factor of destination brokers. For multiple brokers separate with colon, 
  for example for 3 brokers: 3:4:3, or leave blank to let VIPER decide.

*compressionfrom* : string, required

- This is a colon separated list of the compression type of source brokers: snappy, gzip, lz4. For multiple brokers separate with colon, 
  for example for 3 brokers: snappy:snappy:gzip.  
  
*compressionto* : string, required

- This is a colon separated list of the compression type of destination brokers: snappy, gzip, lz4. For multiple brokers separate with colon, 
  for example for 3 brokers: snappy:snappy:gzip.  

*saslfrom* : string, required

- This is a colon separated list of the SASL type: None, Plain, SCRAM256, SCRAM512 of source brokers. For multiple brokers separate with colon, 
  for example for 3 brokers: PLAIN:SCRAM256:SCRAM512.  
  
*saslto* : string, required

- This is a colon separated list of the SASL type: None, Plain, SCRAM256, SCRAM512 of destination brokers. For multiple brokers separate with colon, 
  for example for 3 brokers: PLAIN:SCRAM256:SCRAM512.  

*partitions* : string, optional

- If you are manually migrating topics you will need to specify the partitions of the topics in *topiclist*.  Otherwise, VIPER
  will automatically find topics and their partitions on the broker for you - this is recommended.

*brokerlistfrom* : string, required

- This is a list of source brokers: host:port. For multiple brokers separate with comma, for example for 3 brokers: host:port,host:port,host:port.  

*brokerlistto* : string, required

- This is a list of destination brokers: host:port. For multiple brokers separate with comma, for example for 3 brokers: host:port,host:port,host:port.  

*topiclist* : string, optional

- You can manually specify topics to migrate, separate multiple topics with a comma. Otherwise, Viper will automatically find topics
  on the broker for you - this is recommended.

*partitionchangeperc* : number, optional

- You can increase or decrease partitions on the destination broker by specifying a percentage between 0 and 100, or between -100 and 0.
  Minimum partition will always be 1.

*replicationchange* : ignored for now

- You can increase or decrease the replication factor on the destination broker by specifying a positive or negative number.
  Minimum replication factor will always be 2.

*filter* : string, optional

- You can specify a filter to choose only those topics that satisfy the filter.  Filters must have the 
  following format: "searchstring1,searchstring2,searchstring3,..:Logic=0 or 1:search position: 0,1,2", where 
  Logic 0=AND, 1=OR, and search position 0=BeginsWith, 1=Any, 2=EndsWith.

*asynctimeout* : number, optional

- This specifies the timeout in seconds for the python connection.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*servicenamefrom* : string, optional

- You can specify the name of the source brokers.

*servicenameto* : string, optional

- You can specify the name of the destination brokers.

*rollbackoffset*: ignored
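
A hedged sketch of a broker-to-broker migration in which VIPER discovers the topics and partitions itself (credentials and broker addresses below are placeholders):

    import maadstml

    VIPERTOKEN = "your-viper-token"   # placeholder
    host = "http://127.0.0.1"         # placeholder
    port = 8000                       # placeholder

    result = maadstml.vipermirrorbrokers(VIPERTOKEN, host, port,
        brokercloudusernamepassfrom="user:password",
        brokercloudusernamepassto="user:password",
        enabletlsfrom="1", enabletlsto="1",
        replicationfactorfrom="", replicationfactorto="",  # let VIPER decide
        compressionfrom="snappy", compressionto="snappy",
        saslfrom="PLAIN", saslto="PLAIN",
        partitions="",                      # let VIPER find partitions (recommended)
        brokerlistfrom="source-broker:9092",
        brokerlistto="destination-broker:9092",
        topiclist="")                       # let VIPER find topics (recommended)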

**36. maadstml.vipernlp(filename,maxsummarywords,maxkeywords)**

**Parameters:**	Perform NLP summarization of PDFs

*filename* : string, required

- Filename of PDF to summarize.

*maxsummarywords* : int, required
       
- Maximum number of words in the summary.

*maxkeywords* : int, required

- Maximum number of keywords to extract.

RETURNS: JSON string of summary.

**37. maadstml.viperchatgpt(openaikey,texttoanalyse,query, temperature,modelname)**

**Parameters:**	Start a conversation with ChatGPT

*openaikey* : string, required

- OpenAI API key

*texttoanalyse* : string, required
       
- Text you want ChatGPT to analyse

*query* : string, required

- Prompts for chatGPT.  For example, "What are key points in this text? What are the concerns or issues?"

*temperature* : float, required

- Temperature for ChatGPT; must be between 0 and 1, e.g. 0.7

*modelname* : string, required

- ChatGPT model to use.  For example, text-davinci-002, text-curie-001, text-babbage-001.

RETURNS: ChatGPT response.
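
A hedged sketch chaining PDF summarization into a ChatGPT prompt (the API key and file path below are placeholders):

    import maadstml

    # Summarize a PDF to at most 100 words and 10 keywords (returns a JSON string)
    summary = maadstml.vipernlp("/path/to/report.pdf", 100, 10)

    # Ask ChatGPT about the summary
    response = maadstml.viperchatgpt("your-openai-key", summary,
        "What are key points in this text? What are the concerns or issues?",
        0.7, "text-davinci-002")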

**38. maadstml.viperexractpdffields(pdffilename)**

**Parameters:**	Extract data from PDF

*pdffilename* : string, required

- PDF filename

RETURNS: JSON of PDF and writes JSON and XML files of PDF to disk.

**39. maadstml.viperexractpdffieldbylabel(pdffilename,labelname,arcotype)**

**Parameters:**	Extract data from PDF by PDF labels

*pdffilename* : string, required

- PDF filename

*labelname* : string, required

- Label name in the PDF filename to search for.

*arcotype* : string, required

- Acrobyte tag in the PDF, i.e. LTTextLineHorizontal

RETURNS: Value of the labelname, if any.
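
A minimal sketch (the PDF path and label name are hypothetical placeholders):

```python
import maadstml

# Return the value found for the label "Invoice Number", matching on
# the LTTextLineHorizontal layout tag.
value = maadstml.viperexractpdffieldbylabel("form.pdf", "Invoice Number",
                                            "LTTextLineHorizontal")
print(value)
```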

**40. maadstml.pgptingestdocs(docname,doctype, pgptip,pgptport,pgptendpoint)**

**Parameters:**	

*docname* : string, required

- The full path to a PDF or text file.

*doctype* : string, required
       
- This can be either binary or text.

*pgptip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*pgptport* : string, required

- Your container port - this is usually 8001, but depends on the docker run port-forwarding command. See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*pgptendpoint* : string, required

- This must be: /v1/ingest

RETURNS: JSON containing Document details, or ERROR. 
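
A minimal sketch against a local privateGPT container (the document path is a hypothetical placeholder):

```python
import maadstml

# Ingest a PDF so it can be used as context for later prompts.
res = maadstml.pgptingestdocs("manual.pdf", "binary",
                              "http://127.0.0.1", "8001", "/v1/ingest")
print(res)  # JSON with document details, or ERROR
```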

**41. maadstml.pgptgetingestedembeddings(docname,ip,port,endpoint)**

**Parameters:**	

*docname* : string, required

- The full path to a PDF or text file.

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*port* : string, required

- Your container port - this is usually 8001, but depends on the docker run port-forwarding command. See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*endpoint* : string, required

- This must be: /v1/ingest/list

RETURNS: Three variables: docids, docstr, docidsstr; these are the embeddings related to docname. Or, ERROR.
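
A minimal sketch (the document path is a hypothetical placeholder):

```python
import maadstml

# Retrieve the embeddings for a previously ingested document.
docids, docstr, docidsstr = maadstml.pgptgetingestedembeddings(
    "manual.pdf", "http://127.0.0.1", "8001", "/v1/ingest/list")
# docidsstr can be passed to pgptchat as docfilter to restrict context.
```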

**42. maadstml.pgptchat(prompt,context,docfilter,port,includesources,ip,endpoint)**

**Parameters:**	

*prompt* : string, required

- A prompt for privateGPT.

*context* : bool, required

- This can be True or False. If True, privateGPT will use context; if False, it will not.

*docfilter* : string array, required

- This is docidsstr, which can be retrieved from pgptgetingestedembeddings.  If context=True and docfilter is empty, then ALL documents are used for context. 

*port* : string, required

- Your container port - this is usually 8001, but depends on the docker run port-forwarding command. See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*includesources* : bool, required

- This can be True or False. If True and context is used, privateGPT will return the sources in the response.

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*endpoint* : string, required

- This must be: /v1/completions

RETURNS: The response from privateGPT, or ERROR. 
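
A minimal sketch that restricts context to one ingested document (the document path and prompt are hypothetical placeholders):

```python
import maadstml

# Fetch the embeddings filter for the document (see pgptgetingestedembeddings).
docids, docstr, docidsstr = maadstml.pgptgetingestedembeddings(
    "manual.pdf", "http://127.0.0.1", "8001", "/v1/ingest/list")

# Prompt with context; pass an empty docfilter to use ALL ingested documents.
response = maadstml.pgptchat("Summarize the warranty terms.", True, docidsstr,
                             "8001", True, "http://127.0.0.1", "/v1/completions")
print(response)
```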

**43. maadstml.pgptdeleteembeddings(docids, ip,port,endpoint)**

**Parameters:**	

*docids* : string array, required

- An array of doc ids.  These can be retrieved from pgptgetingestedembeddings.

*port* : string, required

- Your container port - this is usually 8001, but depends on the docker run port-forwarding command. See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*endpoint* : string, required

- This must be: /v1/ingest/

RETURNS: Null if successful, or ERROR. 
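
A minimal sketch (the document path is a hypothetical placeholder):

```python
import maadstml

# Look up the doc ids for the ingested document, then delete its embeddings.
docids, docstr, docidsstr = maadstml.pgptgetingestedembeddings(
    "manual.pdf", "http://127.0.0.1", "8001", "/v1/ingest/list")

res = maadstml.pgptdeleteembeddings(docids, "http://127.0.0.1", "8001",
                                    "/v1/ingest/")
print(res)  # Null if successful, or ERROR
```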

**44. maadstml.pgpthealth(ip,port,endpoint)**

**Parameters:**	

*port* : string, required

- Your container port - this is usually 8001, but depends on the docker run port-forwarding command. See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*endpoint* : string, required

- This must be: /health

RETURNS: This will return a JSON of OK if the privateGPT server is running, or ERROR. 
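
A minimal sketch, useful before ingesting documents or sending prompts:

```python
import maadstml

# Check that the privateGPT server is up.
status = maadstml.pgpthealth("http://127.0.0.1", "8001", "/health")
print(status)  # JSON OK if the privateGPT server is running, or ERROR
```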

**45. maadstml.videochatloadresponse(url,port,filename,prompt,responsefolder='videogpt_response',temperature=0.2,max_output_tokens=512)**

**Parameters:**	

*url* : string, required

- IP address video ChatGPT is listening on in the container - this is usually: http://127.0.0.1

*port* : string, required

- Port video ChatGPT is listening on in the container, e.g. 7800.

*filename* : string, required

- This is the video filename to analyse, e.g. a file with an .mp4 extension.

*prompt* : string, required

- This is the prompt for video ChatGPT, e.g. "What is the video about? Is there anything strange in the video?"

*responsefolder* : string, optional

- This is the folder you want video ChatGPT to write responses to.

*temperature* : float, optional

- Temperature determines how conservative video ChatGPT is; values closer to 0 produce more conservative responses.

*max_output_tokens* : int, optional

- max_output_tokens sets the maximum number of tokens to return.

RETURNS: The file name the response was written to by video ChatGPT. 
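
A minimal sketch (the video filename and prompt are hypothetical placeholders):

```python
import maadstml

# Analyse a local video with video ChatGPT running in its container.
outfile = maadstml.videochatloadresponse(
    "http://127.0.0.1", "7800", "factoryfloor.mp4",
    "What is the video about? Is there anything strange in the video?",
    responsefolder="videogpt_response",
    temperature=0.2,
    max_output_tokens=512)
print(outfile)  # file the response was written to
```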

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/smaurice101/transactionalmachinelearning",
    "name": "maadstml",
    "maintainer": null,
    "docs_url": null,
    "requires_python": null,
    "maintainer_email": null,
    "keywords": "genai, multi-agent, transactional machine learning, artificial intelligence, chatGPT, generative AI, privateGPT, data streams, data science, optimization, prescriptive analytics, machine learning, automl, auto-ml, artificial intelligence, predictive analytics, advanced analytics",
    "author": "Sebastian Maurice",
    "author_email": "sebastian.maurice@otics.ca",
    "download_url": "https://files.pythonhosted.org/packages/92/93/0c46f7c4f04885334c4d5a22a6088f79d104f95314ce28099fe9270b0091/maadstml-3.48.tar.gz",
    "platform": null,
    "description": "**Multi-Agent Accelerator for Data Science Using Transactional Machine Learning (MAADSTML)**\r\n\r\n*Revolutionizing Data Stream Science with Transactional Machine Learning*\r\n\r\n**Overview**\r\n\r\n*MAADSTML combines Artificial Intelligence, ChatGPT, PrivateGPT, Auto Machine Learning with Data Streams Integrated with Apache Kafka (or Redpanda) to create frictionless and elastic machine learning solutions.*  \r\n\r\nThis library allows users to harness the power of agent-based computing using hundreds of advanced linear and non-linear algorithms. Users can easily integrate Predictive Analytics, Prescriptive Analytics, Pre-Processing, and Optimization in any data stream solution by wrapping additional code around the functions below. It connects with **Apache KAFKA brokers** for cloud based computing using Kafka (or Redpanda) as the data backbone. \r\n\r\nIf analysing MILLIONS of IoT devices, you can easily deploy thousands of VIPER/HPDE instances in Kubernetes Cluster in AWS/GCP/Azure. \r\n\r\nIt uses VIPER as a **KAFKA connector and seamlessly combines Auto Machine Learning, with Real-Time Machine Learning, Real-Time Optimization and Real-Time Predictions** while publishing these insights in to a Kafka cluster in real-time at scale, while allowing users to consume these insights from anywhere, anytime and in any format. \r\n\r\nIt also HPDE as the AutoML technology for TML.  Linux/Windows/Mac versions can be downloaded from [Github](https://github.com/smaurice101/transactionalmachinelearning)\r\n\r\nIt uses VIPERviz to visualize streaming insights over HTTP(S). Linux/Windows/Mac versions can be downloaded from [Github](https://github.com/smaurice101/transactionalmachinelearning)\r\n\r\nMAADSTML details can be found in the book: [Transactional Machine Learning with Data Streams and AutoML](https://www.amazon.com/Transactional-Machine-Learning-Streams-AutoML/dp/1484270223)\r\n\r\n\r\nTo install this library a request should be made to **support@otics.ca** for a username and a MAADSTOKEN.  Once you have these credentials then install this Python library.\r\n\r\n**Compatibility**\r\n    - Python 3.8 or greater\r\n    - Minimal Python skills needed\r\n\r\n**Copyright**\r\n   - Author: Sebastian Maurice, PhD\r\n   \r\n**Installation**\r\n   - At the command prompt write:\r\n     **pip install maadstml**\r\n     - This assumes you have [Downloaded Python](https://www.python.org/downloads/) and installed it on your computer.  \r\n\r\n**MAADS-VIPER Connector to Manage Apache KAFKA:** \r\n  - MAADS-VIPER python library connects to VIPER instances on any servers; VIPER manages Apache Kafka.  VIPER is REST based and cross-platform that can run on windows, linux, MAC, etc.. It also fully supports SSL/TLS encryption in Kafka brokers for producing and consuming.\r\n\r\n**TML is integrated with PrivateGPT (https://github.com/imartinez/privateGPT), which is a production ready GPT, that is 100% Local, 100% Secure and 100% FREE GPT Access.\r\n  - Users need to PULL and RUN one of the privateGPT Docker containers:\r\n  - \t1. Docker Hub: maadsdocker/tml-privategpt-no-gpu-amd64 (without NVIDIA GPU for AMD64 Chip)\r\n  -     2. Docker Hub: maadsdocker/tml-privategpt-with-gpu-amd64 (with NVIDIA GPU for AMD64 Chip)\r\n  - \t3. Docker Hub: maadsdocker/tml-privategpt-no-gpu-arm64 (without NVIDIA GPU for ARM64 Chip)\r\n  -     4. 
Docker Hub: maadsdocker/tml-privategpt-with-gpu-arm64 (with NVIDIA GPU for ARM64 Chip)\r\n  - Additional details are here: https://github.com/smaurice101/raspberrypi/tree/main/privategpt\r\n  - TML accesses privateGPT container using REST API. \r\n  - For PrivateGPT production deployments it is recommended that machines have the NVIDIA GPU as this will lead to significant performance improvements.\r\n\r\n- **pgptingestdocs**\r\n  - Set Context for PrivateGPT by ingesting PDFs or text documents.  All responses will then use these documents for context.  \r\n\r\n- **pgptgetingestedembeddings**\r\n  - After documents are ingested, you can retrieve the embeddings for the ingested documents.  These embeddings allow you to filter the documents for specific context.  \r\n\r\n- **pgptchat**\r\n  - Send any prompt to privateGPT (with or without context) and get back a response.  \r\n\r\n- **pgptdeleteembeddings**\r\n  - Delete embeddings.  \r\n\r\n- **pgpthealth**\r\n  - Check the health of the privateGPT http server.  \r\n\r\n- **vipermirrorbrokers**\r\n  - Migrate data streams from (mutiple) brokers to (multiple) brokers FAST!  In one simple function you have the \r\n    power to migrate from hundreds of brokers with hundreds of topics and partitions to any other brokers\r\n\twith ease.  Viper ensures no duplication of messages and translates offsets from last committed.  Every transaction \r\n\tis logged, making data validation and auditability a snap.  You can also increase or decrease partitions and \r\n\tapply filter to topics to copy over.  \r\n\t\r\n- **viperstreamquery**\r\n  - Query multiple streams with conditional statements.  For example, if you preprocessed multiple streams you can \r\n    query them in real-time and extract powerful insights.  You can use >, <, =, AND, OR. \r\n\r\n- **viperstreamquerybatch**\r\n  - Query multiple streams with conditional statements.  For example, if you preprocessed multiple streams you can \r\n    query them in real-time and extract powerful insights.  You can use >, <, =, AND, OR. Batch allows you to query\r\n\tmultiple IDs at once.\r\n\r\n- **viperlisttopics** \r\n  - List all topics in Kafka brokers\r\n \r\n- **viperdeactivatetopic**\r\n  - Deactivate topics in kafka brokers and prevent unused algorithms from consuming storage and computing resources that cost money \r\n\r\n- **viperactivatetopic**\r\n  - Activate topics in Kafka brokers \r\n\r\n- **vipercreatetopic**\r\n  - Create topics in Kafka brokers \r\n  \r\n- **viperstats**\r\n  - List all stats from Kafka brokers allowing VIPER and KAFKA admins with a end-end view of who is producing data to algorithms, and who is consuming the insights from the algorithms including date/time stamp on the last reads/writes to topics, and how many bytes were read and written to topics and a lot more\r\n\r\n- **vipersubscribeconsumer**\r\n  - Admins can subscribe consumers to topics and consumers will immediately receive insights from topics.  This also gives admins more control of who is consuming the insights and allows them to ensures any issues are resolved quickly in case something happens to the algorithms.\r\n  \r\n- **viperunsubscribeconsumer**\r\n  - Admins can unsubscribe consumers from receiving insights, this is important to ensure storage and compute resources are always used for active users.  
For example, if a business user leaves your company or no longer needs the insights, by unsubscribing the consumer, the algorithm will STOP producing the insights.\r\n\r\n- **viperhpdetraining**\r\n  - Users can do real-time machine learning (RTML) on the data in Kafka topics. This is very powerful and useful for \"transactional learnings\" on the fly using our HPDE technology.  HPDE will find the optimal algorithm for the data in less than 60 seconds.  \r\n\r\n- **viperhpdetrainingbatch**\r\n  - Users can do real-time machine learning (RTML) on the data in Kafka topics. This is very powerful and useful for \"transactional learnings\" on the fly using our HPDE technology. \r\n    HPDE will find the optimal algorithm for the data in less than 60 seconds.  Batch allows you to perform ML on multiple IDs at once.\r\n\r\n- **viperhpdepredict**\r\n  - Using the optimal algorithm - users can do real-time predictions from streaming data into Kafka Topics.\r\n\r\n- **viperhpdepredictprocess**\r\n  - Using the optimal algorithm you can determine object ranking based on input data.  For example, if you want to know which human or machine is the \r\n    best or worst given input data then this function will return the best or worst human or machine.\r\n\r\n- **viperhpdepredictbatch**\r\n  - Using the optimal algorithm - users can do real-time predictions from streaming data into Kafka Topics. Batch allows you to perform predictions\r\n    on multiple IDs at once.\r\n  \r\n- **viperhpdeoptimize**\r\n  -  Users can even do optimization to MINIMIZE or MAXIMIZE the optimal algorithm to find the BEST values for the independent variables that will minimize or maximize the dependent variable.\r\n\r\n- **viperhpdeoptimizebatch**\r\n  -  Users can even do optimization to MINIMIZE or MAXIMIZE the optimal algorithm to find the BEST values for the independent variables that will minimize or maximize the dependent \r\n     variable. Batch allows you to optimize multiple IDs at once.\r\n\r\n- **viperproducetotopic**\r\n  - Users can produce to any topics by injesting from any data sources.\r\n\r\n- **viperproducetotopicbulk**\r\n  - Users can produce to any topics by injesting from any data sources.  Use this function to write bulk transactions at high speeds.  With the right architecture and\r\n  network you can stream 1 million transactions per second (or more).\r\n  \r\n- **viperconsumefromtopic**\r\n  - Users can consume from any topic and graph the data. \r\n\r\n- **viperconsumefromtopicbatch**\r\n  - Users can consume from any topic and graph the data.  Batch allows you to consume from multiple IDs at once.\r\n  \r\n- **viperconsumefromstreamtopic**\r\n  - Users can consume from a multiple stream of topics at once\r\n\r\n- **vipercreateconsumergroup**\r\n  - Admins can create a consumer group made up of any number of consumers.  
You can add as many partitions for the group in the Kafka broker as well as specify the replication factor to ensure high availaibility and no disruption to users who consume insights from the topics.\r\n\r\n- **viperconsumergroupconsumefromtopic**\r\n  - Users who are part of the consumer group can consume from the group topic.\r\n\r\n- **viperproducetotopicstream**\r\n  - Users can join multiple topic streams and produce the combined results to another topic.\r\n  \r\n- **viperpreprocessproducetotopicstream**\r\n  - Users can pre-process data streams using the following functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y,VARIED, \r\n    ANOMPROB,ANOMPROBX-Y,ENTROPY, AUTOCORR, TREND, CONSISTENCY, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, \r\n\tCV (coefficient of Variation),Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff: time should be in this \r\n\tlayout:2006-01-02T15:04:05, Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the \r\n    average time in seconds between consecutive dates.. Spikedetect uses a Zscore method to detect \r\n\tspikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.  Geodiff (returns distance in Kilometers between two lat/long points)\r\n\t\r\n    Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from\r\n\tcurrent local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.\r\n\tYou can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype\r\n\twill compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference\r\n\tbetween the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype\r\n\tfor data quality and data assurance programs for any number of data streams.\r\n\t\t\r\n\tUnique Checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.\r\n \r\n    Uniquestr Checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.\r\n\r\n    Uniquecount Checks numeric data for duplication.  Returns count of unique numbers.\r\n \r\n    Uniquestrcount Checks string data for duplication.  Returns count of unique strings.\r\n\t\r\n    CONSISTENCY checks if the data all have consistent data types. Returns 1 for consistent data types, 0 otherwise.\r\n\t\r\n\tMeanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high \r\n\r\n    RAW for no processing.\r\n\t\r\n    ANOMPROB=Anomaly Probability, it will run several algorithms on the data stream window to determine a probability percentage of \r\n\tanomalous behaviour.  This can be cross-referenced with other process types. This is very useful if you want to extract aggregate \r\n\tvalues that you can then use to build TML models and/or make decisions to prevent issues.  ENTROPY will compute the amount of information\r\n\tin the data stream.  AUTOCORR will run a autocorrelation regression: Y = Y (t-1), to indicate how previous value correlates with future \r\n    value.  
TREND will run a linear regression of Y = f(Time), to determine if the data in the stream are increasing or decreasing.\t\r\n\r\n    ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers or \"n\", if \"n\" means examine all anomalies for recurring patterns.\r\n\tThey allow you to check if the anomalies in the streams are truly anomalies and not some\r\n    pattern.  For example, if a IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact\r\n    it is normal behaviour.  So, to ignore these cases, if ANOMPROB2-5, this tells Viper, check anomalies with patterns of 2-5 peaks.\r\n    If the stream has two classes and these two classes are like 0 and 1000, and show a pattern, then they should not be considered an anomaly.\r\n    Meaning, class=0, is the device shutting down, class=1000 is the device turning back on.  If ANOMPROB3-10, Viper will check for \r\n    patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.\r\n\r\n- **viperpreprocessbatch**\r\n  - This function is similar to *viperpreprocessproducetotopicstream* the only difference is you can specify multiple\r\n    tmlids in Topicid field. This allows you to batch process multiple tmlids at once.  This is very useful if using\r\n\tkubernetes architecture.\r\n\r\n- **vipercreatejointopicstreams**\r\n  - Users can join multiple topic streams\r\n  \r\n- **vipercreatetrainingdata**\r\n  - Users can create a training data set from the topic streams for Real-Time Machine Learning (RTML) on the fly.\r\n\r\n- **vipermodifyconsumerdetails**\r\n  - Users can modify consumer details on the topic.  When topics are created an admin must indicate name, email, location and description of the topic.  This helps to better manage the topic and if there are issues, the admin can contact the individual consuming from the topic.\r\n  \r\n- **vipermodifytopicdetails**\r\n  - Users can modify details on the topic.  When topics are created an admin must indicate name, email, location and description of the topic.  This helps to better manage the topic and if there are issues, the admin can contact the developer of the algorithm and resolve issue quickly to ensure disruption to consumers is minimal.\r\n \r\n- **vipergroupdeactivate**\r\n  - Admins can deactive a consumer group, which will stop all insights being delivered to consumers in the group.\r\n  \r\n- **vipergroupactivate**\r\n  - Admins can activate a group to re-start the insights.\r\n \r\n- **viperdeletetopics**\r\n  - Admins can delete topics in VIPER database and Kafka clusters.\r\n\t\t\r\n- **viperanomalytrain**\r\n  - Perform anomaly/peer group analysis on text or numeric data stream using advanced unsupervised learning. VIPER automatically joins \r\n    streams, and determines the peer group of \"usual\" behaviours using proprietary algorithms, which are then used to predict anomalies with \r\n\t*viperanomalypredict* in real-time.  Users can use several parameters to fine tune the peer groups.  \r\n\t\r\n\t*VIPER is one of the very few, if not only, technology to do anomaly/peer group analysis using unsupervised learning on data streams \r\n\twith Apache Kafka.*\r\n\r\n- **viperanomalytrainbatch**\r\n  - Batch allows you to perform anomaly training on multiple IDs at once.\r\n\r\n- **viperanomalypredict**\r\n  - Predicts anomalies for text or numeric data using the peer groups found with *viperanomalytrain*.  
VIPER automatically joins streams\r\n  and compares each value with the peer groups and determines if a value is anomalous in real-time.  Users can use several parameters to fine tune\r\n  the analysis. \r\n  \r\n  *VIPER is one of the very few, if not only, technology to do anomaly detection/predictions using unsupervised learning on data streams\r\n  with Apache Kafka.*\r\n\t\t\r\n- **viperanomalypredictbatch**\r\n  - Batch allows you to perform anomaly prediction on multiple IDs at once.\r\n\t\t\t\t\r\n- **viperstreamcorr**\r\n  - Performs streaming correlations by joining multiple data streams with 2 variables.  Also performs cross-correlations with 4 variables.\r\n    This is a powerful function and can offer important correlation signals between variables.   Will also correlate TEXT using \r\n    natural language processing (NLP).\t\r\n\r\n- **viperpreprocesscustomjson**\r\n  - Immediately start processing ANY RAW JSON data in minutes.  This is useful if you want to start processing data quickly.  \r\n\r\n- **viperstreamcluster**\r\n  - Perform cluster analysis on streaming data.  This uses K-Means clustering with Euclidean or EuclideanSquared algorithms to compute \r\n    distance.  It is a very useful function if you want to determine common behaviours between devices, patients, or other entities.\r\n\tUsers can also setup email alerts if specific clusters are found.\r\n\r\n- **vipersearchanomaly**\r\n  - Perform advanced analysis for user search.  This function is useful if you want to monitor what people are searching for, and determine\r\n    if the searches are anamolous and differ from the peer group of \"normal\" search behaviour.\r\n\r\n- **vipernlp**\r\n  - Perform advanced natural language summary of PDFs.\r\n\r\n- **viperchatgpt**\r\n  - Start a conversation with ChatGPT in real-time and stream responses.\r\n\r\n- **viperexractpdffields**\r\n  - Extracts fields from PDF file\r\n\r\n- **viperexractpdffieldbylabel**\r\n  - Extracts fields from PDF file by label name.\r\n\r\n- **videochatloadresponse**\r\n  - Analyse videos with video chatgpt.  This is a powerful GPT LLM that will understand and reason with videos frame by frame.  \r\n    It will also understand the spatio-temporal frames in the video.  Video gpt runs in a container. \r\n\r\n- **areyoubusy**\r\n  - If deploying thousands of VIPER/HPDE binaries in a Kubernetes cluster - you can broadcast a 'areyoubusy' message to all VIPER and HPDE\r\n    binaries, and they will return back the HOST/PORT if they are NOT busy with other tasks.  This is very convenient for dynamically managing  \r\n\tenormous load among VIPER/HPDE and allows you to dynamically assign HOST/PORT to **non-busy** VIPER/HPDE microservices.\r\n\r\n**First import the Python library.**\r\n\r\n**import maadstml**\r\n\r\n\r\n**1. 
maadstml.viperstats(vipertoken,host,port=-999,brokerhost='',brokerport=-999,microserviceid='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address where Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.\r\n\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port on which Kafka is listenting.\r\n\r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: A JSON formatted object of all the Kafka broker information.\r\n\r\n**2. maadstml.vipersubscribeconsumer(vipertoken,host,port,topic,companyname,contactname,contactemail,\r\n\t\tlocation,description,brokerhost='',brokerport=-999,groupid='',microserviceid='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*topic* : string, required\r\n\r\n- Topic to subscribe to in Kafka broker\r\n\r\n*companyname* : string, required\r\n\r\n- Company name of consumer\r\n\r\n*contactname* : string, required\r\n\r\n- Contact name of consumer\r\n\r\n*contactemail* : string, required\r\n\r\n- Contact email of consumer\r\n\r\n*location* : string, required\r\n\r\n- Location of consumer\r\n\r\n*description* : string, required\r\n\r\n- Description of why consumer wants to subscribe to topic\r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\n*groupid* : string, optional\r\n\r\n- Subscribe consumer to group\r\n\r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: Consumer ID that the user must use to receive insights from topic.\r\n\r\n\r\n**3. maadstml.viperunsubscribeconsumer(vipertoken,host,port,consumerid,brokerhost='',brokerport=-999,\r\n\tmicroserviceid='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*consumerid* : string, required\r\n       \r\n- Consumer id to unsubscribe\r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\nRETURNS: Success/failure \r\n\r\n**4. 
maadstml.viperproducetotopic(vipertoken,host,port,topic,producerid,enabletls=0,delay=100,inputdata='',maadsalgokey='',\r\n\tmaadstoken='',getoptimal=0,externalprediction='',subtopics='',topicid=-999,identifier='',array=0,brokerhost='',\r\n\tbrokerport=-999,microserviceid='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*topic* : string, required\r\n\r\n- Topic or Topics to produce to.  You can separate multiple topics by a comma.  If using multiple topics, you must \r\n  have the same number of producer ids (separated by commas), and same number of externalprediction (separated by\r\n  commas).  Producing to multiple topics at once is convenient for synchronizing the timing of \r\n  streams for machine learning.\r\n\r\n*subtopic* : string, optional\r\n\r\n- Enter sub-topic streams.  This is useful if you want to reduce the number of topics/partitions in Kafka by adding\r\n  sub-topics in the main topic.  \r\n\r\n*topicid* : int, optional\r\n\r\n- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams \r\n  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated to each IoT device.\r\n  This way, you do not create 10,000 streams, but just 1 Main Topic stream, and VIPER will add the 10,000 streams\r\n  in the one topic.  This will also drastically reduce the partition costs.  You can also create custom machine \r\n  learning models, predictions, and optimization for each 1000 IoT devices quickly: **It is very powerful.**\r\n\r\n\"array* : int, optional\r\n\r\n- You can stream multiple variables at once, and use array=1 to specify that the streams are an array.\r\n  This is similar to streaming 1 ROW in a database, and useful if you want to synchonize variables for machine learning.  \r\n  For example, if a device produces 3 streams: stream A, stream B, stream C, and rather than streaming A, B, C separately\r\n  you can add them to subtopic=\"A,B,C\", and externalprediction=\"value_FOR_A,value_FOR_B,value_FOR_C\", then specify\r\n  array=1, then when you do machine learning on this data, the variables A, B, C are date/time synchronized\r\n  and you can choose which variable is the depdendent variable in viperhpdetraining function.\r\n\r\n\r\n*identifier* : string, optional\r\n\r\n- You can add any string identifier for the device.  For examaple, DSN ID, IoT device id etc.. 
\r\n\r\n*producerid* : string, required\r\n       \r\n- Producer ID of topic to produce to in the Kafka broker\r\n\r\n*enabletls* : int, optional\r\n       \r\n- Set to 1 if Kafka broker is enabled with SSL/TLS encryption, otherwise 0 for plaintext.\r\n\r\n*delay*: int, optional\r\n\r\n- Time in milliseconds from VIPER backsout from writing messages\r\n\r\n*inputdata* : string, optional\r\n\r\n- This is the inputdata for the optimal algorithm found by MAADS or HPDE\r\n\r\n*maadsalgokey* : string, optional\r\n\r\n- This should be the optimal algorithm key returned by maadstml.dotraining function.\r\n\r\n*maadstoken* : string, optional\r\n- If the topic is the name of the algorithm from MAADS, then a MAADSTOKEN must be specified to access the algorithm in the MAADS server\r\n\r\n*getoptimal*: int, optional\r\n- If you used the maadstml.OPTIMIZE function to optimize a MAADS algorithm, then if this is 1 it will only retrieve the optimal results in JSON format.\r\n\r\n*externalprediction* : string, optional\r\n- If you are using your own custom algorithms, then the output of your algorithm can be still used and fed into the Kafka topic.\r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: Returns the value produced or results retrieved from the optimization.\r\n\r\n**4.1. maadstml.viperproducetotopicbulk(vipertoken,host,port,topic,producerid,inputdata,partitionsize=100,enabletls=1,delay=100,\r\n        brokerhost='',brokerport=-999,microserviceid='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*topic* : string, required\r\n\r\n- Topic or Topics to produce to.  You can separate multiple topics by a comma.  If using multiple topics, you must \r\n  have the same number of producer ids (separated by commas), and same number of externalprediction (separated by\r\n  commas).  Producing to multiple topics at once is convenient for synchronizing the timing of \r\n  streams for machine learning.\r\n\r\n*producerid* : string, required\r\n       \r\n- Producer ID of topic to produce to in the Kafka broker.  Separate multiple producer ids with comma.\r\n\r\n*inputdata* : string, required\r\n       \r\n- You can write multiple transactions to each topic.  Each group of transactions must be separated by a tilde.  \r\n  Each transaction in the group must be separate by a comma.  The number of groups must match the producerids and \r\n  topics.  For example, if you are writing to two topics: topic1,topic2, then the inputdata should be:\r\n  trans1,transn2,...,transnN~trans1,transn2,...,transnN.  The number of transactions and topics can be any number.\r\n  This function can be very powerful if you need to analyse millions or billions of transactions very quickly.\r\n\r\n*partitionsize* : int, optional\r\n\r\n- This is the number of partitions of the inputdata.  
For example, if your transactions=10000, then VIPER will \r\n  create partitions of size 100 (if partitionsize=100) resulting in 100 threads for concurrency.  The higher\r\n  the partitionsize, the lower the number of threads.  If you want to streams lots of data fast, then a \r\n  partitionzie of 1 is the fastest but will come with overhead because more RAM and CPU will be consumed.\r\n\r\n*enabletls* : int, optional\r\n       \r\n- Set to 1 if Kafka broker is enabled with SSL/TLS encryption, otherwise 0 for plaintext.\r\n\r\n*delay*: int, optional\r\n\r\n- Time in milliseconds from VIPER backsout from writing messages\r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: None\r\n\r\n**5. maadstml.viperconsumefromtopic(vipertoken,host,port,topic,consumerid,companyname,partition=-1,enabletls=0,delay=100,offset=0,\r\n\tbrokerhost='',brokerport=-999,microserviceid='',topicid='-999',rollbackoffsets=0,preprocesstype='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*topic* : string, required\r\n       \r\n- Topic to consume from in the Kafka broker\r\n\r\n*preprocesstype* : string, optional\r\n\r\n- If you only want to search for record that have a particular processtype, you can enter:\r\n  MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB,ANOMPROBX-Y,ENTROPY, \r\n  AUTOCORR, TREND, CONSISTENCY, Unique, Uniquestr, Geodiff (returns distance in Kilometers between two lat/long points)\r\n  IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), \r\n  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Timediff: time should be in this layout:2006-01-02T15:04:05,\r\n  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the \r\n  average time in seconds between consecutive dates.\r\n  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.   \r\n\r\n  Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from\r\n  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.\r\n  You can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype\r\n  will compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference\r\n  between the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype\r\n  for data quality and data assurance programs for any number of data streams.\r\n\r\n  Unique Checks numeric data for duplication.  
Returns 1 if no data duplication (unique), 0 otherwise.\r\n\r\n  Uniquestr Checks string data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.\r\n\r\n  Uniquecount Checks numeric data for duplication.  Returns count of unique numbers.\r\n \r\n  Uniquestrcount Checks string data for duplication.  Returns count of unique strings.\r\n\r\n  CONSISTENCY checks if the data all have consistent data types. Returns 1 for consistent data types, 0 otherwise.\r\n  \r\n  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high \r\n\r\n  RAW for no processing.\r\n  \r\n  ANOMPROB=Anomaly probability,\r\n  it will run several algorithms on the data stream window to determine a probaility of anomalous\r\n  behaviour.  This can be cross-refenced with OUTLIERS.  It can be very powerful way to detection\r\n  issues with devices.\r\n  \r\n  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers, or \"n\".  If \"n\", means examine all anomalies for patterns.\r\n  They allow you to check if the anomalies in the streams are truly anomalies and not some\r\n  pattern.  For example, if a IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact\r\n  it is normal behaviour.  So, to ignore these cases, if ANOMPROB2-5, this tells Viper, check anomalies with patterns of 2-5 peaks.\r\n  If the stream has two classes and these two classes are like 0 and 1000, and show a pattern, then they should not be considered an anomaly.\r\n  Meaning, class=0, is the device shutting down, class=1000 is the device turning back on.  If ANOMPROB3-10, Viper will check for \r\n  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.\r\n\r\n  \r\n*topicid* : string, optional\r\n\r\n- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can consume on a per device by entering\r\n  its topicid  that you gave when you produced the topic stream. Or, you can read from multiple topicids at the same time.  \r\n  For example, if you have 10 ids, then you can specify each one separated by a comma: 1,2,3,4,5,6,7,8,9,10\r\n  VIPER will read topicids in parallel.  This can drastically speed up consumption of messages but will require more \r\n  CPU.\r\n\r\n*rollbackoffsets* : int, optional, enter value between 0 and 100\r\n\r\n- This will rollback the streams by this percentage.  For example, if using topicid, the main stream is rolled back by this\r\n  percentage amount.\r\n\r\n*consumerid* : string, required\r\n\r\n- Consumer id associated with the topic\r\n\r\n*companyname* : string, required\r\n\r\n- Your company name\r\n\r\n*partition* : int, optional\r\n\r\n- set to Kafka partition number or -1 to autodect\r\n\r\n*enabletls*: int, optional\r\n\r\n- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.\r\n\r\n*delay*: int, optional\r\n\r\n- Time in milliseconds before VIPER backsout from reading messages\r\n\r\n*offset*: int, optional\r\n\r\n- Offset to start the reading from..if 0 then reading will start from the beginning of the topic. If -1, VIPER will automatically \r\n  go to the last offset.  Or, you can extract the LastOffet from the returned JSON and use this offset for your next call.  
\r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: Returns a JSON object of the contents read from the topic.\r\n\r\n**5.1 maadstml.viperconsumefromtopicbatch(vipertoken,host,port,topic,consumerid,companyname,partition=-1,enabletls=0,delay=100,offset=0,\r\n\tbrokerhost='',brokerport=-999,microserviceid='',topicid='-999',rollbackoffsets=0,preprocesstype='',timedelay=0,asynctimeout=120)**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*asynctimeout* : int, optional\r\n \r\n  -This is the timeout in seconds for the Python library async function.\r\n\r\n*timedelay* : int, optional\r\n\r\n - Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause Viper to pause \r\n   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated\r\n   every 3600 seconds, it may make sense to set timedelay=3600\r\n \r\n*topic* : string, required\r\n       \r\n- Topic to consume from in the Kafka broker\r\n\r\n*preprocesstype* : string, optional\r\n\r\n- If you only want to search for record that have a particular processtype, you can enter:\r\n  MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB,ANOMPROBX-Y,ENTROPY, AUTOCORR, TREND, \r\n  IQR (InterQuartileRange), Midhinge, CONSISTENCY, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (coefficient of Variation), \r\n  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff: time should be in this layout:2006-01-02T15:04:05,\r\n  Timediff returns the difference in seconds between the first date/time and last datetime. Avgtimediff returns the \r\n  average time in seconds between consecutive dates. \r\n  Spikedetect uses a Zscore method to detect spikes in the data using lag of 5, StD of 3.5 from mean and influence of 0.5.   \r\n  Geodiff (returns distance in Kilometers between two lat/long points)\r\n  Unique Checks numeric data for duplication.  Returns 1 if no data duplication (unique), 0 otherwise.\r\n\r\n  Dataage_[UTC offset]_[timetype], dataage can be used to check the last update time of the data in the data stream from\r\n  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.\r\n  You can specify timetype as millisecond, second, minute, hour, day.  For example, if Dataage_1_minute, then this processtype\r\n  will compare the last timestamp in the data stream, to the local UTC time offset +1 and compute the time difference\r\n  between the data stream timestamp and current local time and return the difference in minutes.  This is a very powerful processtype\r\n  for data quality and data assurance programs for any number of data streams.\r\n\r\n  Uniquestr Checks string data for duplication.  
Returns 1 if no data duplication (unique), 0 otherwise.\r\n\r\n  Uniquecount Checks numeric data for duplication.  Returns count of unique numbers.\r\n \r\n  Uniquestrcount Checks string data for duplication.  Returns count of unique strings.\r\n  \r\n  CONSISTENCY checks if the data all have consistent data types. Returns 1 for consistent data types, 0 otherwise.\r\n\r\n  Meanci95 or Meanci99 - returns a 95% or 99% confidence interval: mean, low, high \r\n\r\n  RAW for no processing.\r\n\r\n  ANOMPROB=Anomaly probability,\r\n  it will run several algorithms on the data stream window to determine a probaility of anomalous\r\n  behaviour.  This can be cross-refenced with OUTLIERS.  It can be very powerful way to detection\r\n  issues with devices.\r\n  \r\n  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers, or \"n\".  If \"n\", means examine all anomalies for patterns.\r\n  They allow you to check if the anomalies in the streams are truly anomalies and not some\r\n  pattern.  For example, if a IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact\r\n  it is normal behaviour.  So, to ignore these cases, if ANOMPROB2-5, this tells Viper, check anomalies with patterns of 2-5 peaks.\r\n  If the stream has two classes and these two classes are like 0 and 1000, and show a pattern, then they should not be considered an anomaly.\r\n  Meaning, class=0, is the device shutting down, class=1000 is the device turning back on.  If ANOMPROB3-10, Viper will check for \r\n  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.\r\n\r\n  \r\n*topicid* : string, required\r\n\r\n- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can consume on a per device by entering\r\n  its topicid  that you gave when you produced the topic stream. Or, you can read from multiple topicids at the same time.  \r\n  For example, if you have 10 ids, then you can specify each one separated by a comma: 1,2,3,4,5,6,7,8,9,10\r\n  VIPER will read topicids in parallel.  This can drastically speed up consumption of messages but will require more \r\n  CPU.  VIPER will consume continously from topic ids.\r\n\r\n*rollbackoffsets* : int, optional, enter value between 0 and 100\r\n\r\n- This will rollback the streams by this percentage.  For example, if using topicid, the main stream is rolled back by this\r\n  percentage amount.\r\n\r\n*consumerid* : string, required\r\n\r\n- Consumer id associated with the topic\r\n\r\n*companyname* : string, required\r\n\r\n- Your company name\r\n\r\n*partition* : int, optional\r\n\r\n- set to Kafka partition number or -1 to autodect\r\n\r\n*enabletls*: int, optional\r\n\r\n- Set to 1 if Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.\r\n\r\n*delay*: int, optional\r\n\r\n- Time in milliseconds before VIPER backsout from reading messages\r\n\r\n*offset*: int, optional\r\n\r\n- Offset to start the reading from..if 0 then reading will start from the beginning of the topic. If -1, VIPER will automatically \r\n  go to the last offset.  Or, you can extract the LastOffet from the returned JSON and use this offset for your next call.  
\r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: Returns a JSON object of the contents read from the topic.\r\n\r\n**6. maadstml.viperhpdepredict(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,\r\n\t\thpdehost,inputdata,maxrows=0,algokey='',partition=-1,offset=-1,enabletls=1,delay=1000,hpdeport=-999,brokerhost='',\r\n\t\tbrokerport=-999,timeout=120,usedeploy=0,microserviceid='',topicid=-999, maintopic='', streamstojoin='',\r\n\t\tarray=0,pathtoalgos='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*topicid* : int, optional\r\n\r\n- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams \r\n  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated to each IoT device.\r\n  This way, you can do predictions for each IoT using its own custom ML model.\r\n  \r\n*pathtoalgos* : string, required\r\n\r\n- Enter the full path to the root folder where the algorithms are stored.\r\n  \r\n*maintopic* : string, optional\r\n\r\n-  This is the name of the topic that contains the sub-topic streams.\r\n\r\n*array* : int, optional\r\n\r\n- Set array=1 if you produced data (from viperproducetotopic) as an array.  \r\n\r\n*streamstojoin* : string, optional\r\n\r\n- These are the sub-topics you are streaming into maintopic.  To do predictions, VIPER will automatically join \r\n  these streams to create the input data for predictions for each Topicid.\r\n  \r\n*consumefrom* : string, required\r\n       \r\n- Topic to consume from in the Kafka broker\r\n\r\n*produceto* : string, required\r\n\r\n- Topic to produce results of the prediction to\r\n\r\n*companyname* : string, required\r\n\r\n- Your company name\r\n\r\n*consumerid*: string, required\r\n\r\n- Consumerid associated with the topic to consume from\r\n\r\n*producerid*: string, required\r\n\r\n- Producerid associated with the topic to produce to\r\n\r\n*inputdata*: string, required\r\n\r\n- This is a comma separated list of values that represent the independent variables in your algorithm. \r\n  The order must match the order of the independent variables in your algorithm. OR, you can enter a \r\n  data stream that contains the joined topics from *vipercreatejointopicstreams*.\r\n\r\n*maxrows*: int, optional\r\n\r\n- Use this to rollback the stream by maxrows offsets.  
For example, if you want to make 1000 predictions\r\n  then set maxrows=1000, and make 1000 predictions from the current offset of the independent variables.\r\n\r\n*algokey*: string, optional\r\n\r\n- If you know the algorithm key that was returned by VIPERHPDETRAIING then you can specify it here.\r\n  Specifying the algokey can drastically speed up the predictions.\r\n\r\n*partition* : int, optional\r\n\r\n- If you know the kafka partition used to store data then specify it here.\r\n  Most cases Kafka will dynamically store data in partitions, so you should\r\n  use the default of -1 to let VIPER find it.\r\n \r\n*offset* : int, optional\r\n\r\n- Offset to start consuming data.  Usually you can use -1, and VIPER\r\n  will get the last offset.\r\n  \r\n*hpdehost*: string, required\r\n\r\n- Address of HPDE \r\n\r\n*enabletls*: int, optional\r\n\r\n- Set to 1 if Kafka broker is SSL/TLS enabled for encryted traffic, otherwise 0 for plaintext.\r\n\r\n*delay*: int, optional\r\n\r\n- Time in milliseconds before VIPER backsout from reading messages\r\n\r\n*hpdeport*: int, required\r\n\r\n- Port number HPDE is listening on \r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\n*timeout* : int, optional\r\n\r\n - Number of seconds that VIPER waits when trying to make a connection to HPDE.\r\n\r\n*usedeploy* : int, optional\r\n\r\n - If 0 will use algorithm in test, else if 1 use in production algorithm. \r\n \r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: Returns a JSON object of the prediction.\r\n\r\n**6.1 maadstml.viperhpdepredictbatch(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,\r\n\t\thpdehost,inputdata,maxrows=0,algokey='',partition=-1,offset=-1,enabletls=1,delay=1000,hpdeport=-999,brokerhost='',\r\n\t\tbrokerport=-999,timeout=120,usedeploy=0,microserviceid='',topicid=\"-999\", maintopic='', streamstojoin='',\r\n\t\tarray=0,timedelay=0,asynctimeout=120,pathtoalgos='')**\r\n\r\n**Parameters:**\t\r\n\r\n*VIPERTOKEN* : string, required\r\n\r\n- A token given to you by VIPER administrator.\r\n\r\n*host* : string, required\r\n       \r\n- Indicates the url where the VIPER instance is located and listening.\r\n\r\n*port* : int, required\r\n\r\n- Port on which VIPER is listenting.\r\n\r\n*asynctimeout* : int, optional\r\n \r\n  -This is the timeout in seconds for the Python library async function.\r\n\r\n*timedelay* : int, optional\r\n\r\n - Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause Viper to pause \r\n   *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated\r\n   every 3600 seconds, it may make sense to set timedelay=3600\r\n\r\n*topicid* : string, required\r\n\r\n- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, with 10 subtopic streams \r\n  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated to each IoT device.\r\n  This way, you can do predictions for each IoT using its own custom ML model.  Separate multiple topicids by a \r\n  comma.  
For example, topicid=\"1,2,3,4,5\" and viper will process at once.\r\n    \r\n*pathtoalgos* : string, required\r\n\r\n- Enter the full path to the root folder where the algorithms are stored.\r\n\t\r\n*maintopic* : string, optional\r\n\r\n-  This is the name of the topic that contains the sub-topic streams.\r\n\r\n*array* : int, optional\r\n\r\n- Set array=1 if you produced data (from viperproducetotopic) as an array.  \r\n\r\n*streamstojoin* : string, optional\r\n\r\n- These are the sub-topics you are streaming into maintopic.  To do predictions, VIPER will automatically join \r\n  these streams to create the input data for predictions for each Topicid.\r\n  \r\n*consumefrom* : string, required\r\n       \r\n- Topic to consume from in the Kafka broker\r\n\r\n*produceto* : string, required\r\n\r\n- Topic to produce results of the prediction to\r\n\r\n*companyname* : string, required\r\n\r\n- Your company name\r\n\r\n*consumerid*: string, required\r\n\r\n- Consumerid associated with the topic to consume from\r\n\r\n*producerid*: string, required\r\n\r\n- Producerid associated with the topic to produce to\r\n\r\n*inputdata*: string, required\r\n\r\n- This is a comma separated list of values that represent the independent variables in your algorithm. \r\n  The order must match the order of the independent variables in your algorithm. OR, you can enter a \r\n  data stream that contains the joined topics from *vipercreatejointopicstreams*.\r\n\r\n*maxrows*: int, optional\r\n\r\n- Use this to rollback the stream by maxrows offsets.  For example, if you want to make 1000 predictions\r\n  then set maxrows=1000, and make 1000 predictions from the current offset of the independent variables.\r\n\r\n*algokey*: string, optional\r\n\r\n- If you know the algorithm key that was returned by VIPERHPDETRAIING then you can specify it here.\r\n  Specifying the algokey can drastically speed up the predictions.\r\n\r\n*partition* : int, optional\r\n\r\n- If you know the kafka partition used to store data then specify it here.\r\n  Most cases Kafka will dynamically store data in partitions, so you should\r\n  use the default of -1 to let VIPER find it.\r\n \r\n*offset* : int, optional\r\n\r\n- Offset to start consuming data.  Usually you can use -1, and VIPER\r\n  will get the last offset.\r\n  \r\n*hpdehost*: string, required\r\n\r\n- Address of HPDE \r\n\r\n*enabletls*: int, optional\r\n\r\n- Set to 1 if Kafka broker is SSL/TLS enabled for encryted traffic, otherwise 0 for plaintext.\r\n\r\n*delay*: int, optional\r\n\r\n- Time in milliseconds before VIPER backsout from reading messages\r\n\r\n*hpdeport*: int, required\r\n\r\n- Port number HPDE is listening on \r\n\r\n*brokerhost* : string, optional\r\n\r\n- Address of Kafka broker - if none is specified it will use broker address in VIPER.ENV file\r\n\r\n*brokerport* : int, optional\r\n\r\n- Port Kafka is listening on - if none is specified it will use port in the VIPER.ENV file\r\n\r\n*timeout* : int, optional\r\n\r\n - Number of seconds that VIPER waits when trying to make a connection to HPDE.\r\n\r\n*usedeploy* : int, optional\r\n\r\n - If 0 will use algorithm in test, else if 1 use in production algorithm. \r\n \r\n*microserviceid* : string, optional\r\n\r\n- If you are routing connections to VIPER through a microservice then indicate it here.\r\n\r\nRETURNS: Returns a JSON object of the prediction.\r\n\r\n**6.2. 
**6.2 maadstml.viperhpdepredictprocess(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,hpdehost,inputdata,processtype,maxrows=0,
    algokey='',partition=-1,offset=-1,enabletls=1,delay=1000,hpdeport=-999,brokerhost='',brokerport=9092,
    timeout=120,usedeploy=0,microserviceid='',topicid=-999, maintopic='',
    streamstojoin='',array=0,pathtoalgos='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices with 10 subtopic streams,
  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated to each IoT device.
  This way, you can do predictions for each IoT device using its own custom ML model.

*pathtoalgos* : string, required

- Enter the full path to the root folder where the algorithms are stored.

*maintopic* : string, optional

- This is the name of the topic that contains the sub-topic streams.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.

*streamstojoin* : string, optional

- These are the sub-topics you are streaming into maintopic.  To do predictions, VIPER will automatically join
  these streams to create the input data for predictions for each Topicid.

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*inputdata*: string, required

- This is a comma-separated list of values that represent the independent variables in your algorithm.
  The order must match the order of the independent variables in your algorithm.  OR, you can enter a
  data stream that contains the joined topics from *vipercreatejointopicstreams*.

*processtype*: string, required

- This must be one of: max, min, avg, median, trend, all.  For example, use max to find the maximum, i.e. the best human or machine.
  Trend will compute whether the predictions are trending.  Avg is the average of all predictions, median is the median of the
  predictions, and all will produce all predictions.

*maxrows*: int, optional

- Use this to rollback the stream by maxrows offsets.
  For example, if you want to make 1000 predictions,
  then set maxrows=1000, and make 1000 predictions from the current offset of the independent variables.

*algokey*: string, optional

- If you know the algorithm key that was returned by VIPERHPDETRAINING then you can specify it here.
  Specifying the algokey can drastically speed up the predictions.

*partition* : int, optional

- If you know the Kafka partition used to store data then specify it here.
  In most cases Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.

*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.

*hpdehost*: string, required

- Address of HPDE

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*usedeploy* : int, optional

- If 0, the test algorithm is used; if 1, the production algorithm is used.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the prediction.

**7. maadstml.viperhpdeoptimize(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
    hpdehost,partition=-1,offset=-1,enabletls=0,delay=100,hpdeport=-999,usedeploy=0,ismin=1,constraints='best',
    stretchbounds=20,constrainttype=1,epsilon=10,brokerhost='',brokerport=-999,timeout=120,microserviceid='',topicid=-999)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*topicid* : int, optional

- Topicid represents an id for some entity.
  For example, if you have 1000 IoT devices, you can perform
  mathematical optimization for each of the 1000 IoT devices using their specific algorithm.

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*partition* : int, optional

- If you know the Kafka partition used to store data then specify it here.
  In most cases Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.

*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on

*usedeploy* : int, optional

- If 0, the test algorithm is used; if 1, the production algorithm is used.

*ismin* : int, optional

- If 1, the function is minimized; if 0, the function is maximized

*constraints*: string, optional

- If "best", HPDE will choose the best values of the independent variables to minimize or maximize the dependent variable.
  Users can also specify their own constraints for each variable, which must be in the following format: varname1:min:max,varname2:min:max,...

*stretchbounds*: int, optional

- A number between 0 and 100; this is the percentage to stretch the bounds on the constraints.

*constrainttype*: int, optional

- If 1, HPDE uses the min/max of each variable for the bounds; if 2, HPDE will adjust the min/max by their standard deviation;
  if 3, HPDE uses stretchbounds to adjust the min/max for each variable.

*epsilon*: int, optional

- Once HPDE finds a good local minimum/maximum, it then uses this epsilon value to find the global
  minimum/maximum, to ensure you have the best values of the independent variables that minimize or maximize the dependent variable.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimization details and optimal values.

**7.1 maadstml.viperhpdeoptimizebatch(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
    hpdehost,partition=-1,offset=-1,enabletls=0,delay=100,hpdeport=-999,usedeploy=0,ismin=1,constraints='best',
    stretchbounds=20,constrainttype=1,epsilon=10,brokerhost='',brokerport=-999,timeout=120,microserviceid='',topicid="-999",
    timedelay=0,asynctimeout=120)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

- Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause VIPER to pause
  *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
  every 3600 seconds, it may make sense to set timedelay=3600

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform
  mathematical optimization for each of the 1000 IoT devices using their specific algorithm.  Separate
  multiple topicids by a comma.

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*partition* : int, optional

- If you know the Kafka partition used to store data then specify it here.
  In most cases Kafka will dynamically store data in partitions, so you should
  use the default of -1 to let VIPER find it.

*offset* : int, optional

- Offset to start consuming data.  Usually you can use -1, and VIPER
  will get the last offset.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*hpdeport*: int, required

- Port number HPDE is listening on

*usedeploy* : int, optional

- If 0, the test algorithm is used; if 1, the production algorithm is used.

*ismin* : int, optional

- If 1, the function is minimized; if 0, the function is maximized

*constraints*: string, optional

- If "best", HPDE will choose the best values of the independent variables to minimize or maximize the dependent variable.
  Users can also specify their own constraints for each variable, which must be in the following format: varname1:min:max,varname2:min:max,...

*stretchbounds*: int, optional

- A number between 0 and 100; this is the percentage to stretch the bounds on the constraints.

*constrainttype*: int, optional

- If 1, HPDE uses the min/max of each variable for the bounds; if 2, HPDE will adjust the min/max by their standard deviation;
  if 3, HPDE uses stretchbounds to adjust the min/max for each variable.

*epsilon*: int, optional

- Once HPDE finds a good local minimum/maximum, it then uses this epsilon value to find the global minimum/maximum, to ensure
  you have the best values of the independent variables that minimize or maximize the dependent variable.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimization details and optimal values.
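
As a quick illustration, a minimal sketch of a *viperhpdeoptimizebatch* call follows.  All endpoints, ids and topic names are placeholders, and the constraints string simply follows the varname:min:max format described above.

```python
# Minimal sketch: mathematical optimization for several topicids with viperhpdeoptimizebatch.
# Token, endpoints and topic names are placeholders for your own deployment.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.viperhpdeoptimizebatch(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    consumefrom="iot-joined-streams",                # hypothetical topic with the streams
    produceto="iot-optimal-values",
    companyname="mycompany",
    consumerid="my-consumer-id",
    producerid="my-producer-id",
    hpdehost="http://127.0.0.1", hpdeport=8001,
    ismin=0,                                         # maximize the dependent variable
    constraints="temperature:10:30,humidity:40:60",  # varname:min:max per variable
    stretchbounds=20, constrainttype=1, epsilon=10,
    topicid="1,2,3")                                 # optimize each device separately
print(result)    # JSON of the optimization details and optimal values
```
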
**8. maadstml.viperhpdetraining(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
    hpdehost,viperconfigfile,enabletls=1,partition=-1,deploy=0,modelruns=50,modelsearchtuner=80,hpdeport=-999,
    offset=-1,islogistic=0,brokerhost='', brokerport=-999,timeout=120,microserviceid='',topicid=-999,maintopic='',
    independentvariables='',dependentvariable='',rollbackoffsets=0,fullpathtotrainingdata='',processlogic='',
    identifier='',array=0,transformtype='',sendcoefto='',coeftoprocess='',coefsubtopicnames='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*transformtype* : string, optional

- You can transform the dependent and independent variables using log-log, log-lin, or lin-log (lin=linear, log=natural log).
  This may be useful if you want to compute price or demand elasticities.

*sendcoefto* : string, optional

- This is the name of the Kafka topic that you want to stream the estimated parameters to.

*coeftoprocess* : string, optional

- These are the indexes of the estimated parameters.  For example, if the ML model has a constant and two estimated
  parameters, then coeftoprocess="0,1,2" means stream the constant term (at index 0) and the two estimated parameters at
  indexes 1 and 2.

*coefsubtopicnames* : string, optional

- These are the names for the estimated parameters.  For example, "constant,elasticity,elasticity2" would be streamed
  as Kafka topics for *coeftoprocess*

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can create individual
  Machine Learning models for each IoT device in real-time.  This is a core functionality of TML solutions.

*array* : int, optional

- Set array=1 if the data you are consuming from is an array of multiple streams that you produced from
  viperproducetotopic in an effort to synchronize data for training.

*maintopic* : string, optional

- This is the maintopic that contains the sub-topic streams.

*independentvariables* : string, optional

- These are the independent variables that are the subtopics.

*dependentvariable* : string, optional

- This is the dependent variable in the subtopic streams.

*rollbackoffsets*: int, optional

- This is the rollback percentage to create the training dataset.  VIPER will automatically create a training dataset
  using the independent and dependent variable streams.

*fullpathtotrainingdata*: string, optional

- This is the FULL path where you want to store the training dataset.  VIPER will write the file to disk.  Make sure proper
  permissions are granted to VIPER.  For example, **c:/myfolder/mypath**

*processlogic* : string, optional

- You can dynamically build a classification model by specifying how you want to classify the dependent variable by
  indicating your conditions in the processlogic variable (this will take effect if islogistic=1).  For example,
  **processlogic='classification_name=my_prob:temperature=20.5,30:humidity=50,55'** means the following:

  1. The name of the dependent variable is specified by **classification_name**
  2. Then you can specify the conditions on the streams.  If your streams are Temperature and Humidity:
     if Temperature is between 20.5 and 30, then my_prob=1, otherwise my_prob=0, and
     if Humidity is between 50 and 55, then my_prob=1, otherwise my_prob=0
  3. If you want to specify no upper bound you can use *n*, or *-n* for no lower bound.
     For example, **temperature=20.5,n** means if temperature >= 20.5 then my_prob=1,
     and **humidity=-n,55** means if humidity <= 55 then my_prob=1

- This allows you to classify the dependent variable with any number of variables, all in real-time!

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*identifier*: string, optional

- You can add any name or identifier like a DSN ID

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*viperconfigfile* : string, required

- Full path to the VIPER.ENV configuration file on the server.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by Kafka to store data.  NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*deploy*: int, optional

- If deploy=1, this will deploy the algorithm to the Deploy folder for use in production.  If you are just
  testing the algorithm and do not want to use it in production, set deploy=0 (default).

*modelruns*: int, optional

- Number of iterations for model training

*modelsearchtuner*: int, optional

- An integer between 0-100; this variable will attempt to fine-tune the model search space.  A number close to 0
  means you will
  have lots of models but their quality may be low; a number close to 100 (default=80) means you will have fewer models but their
  quality will be higher

*hpdeport*: int, required

- Port number HPDE is listening on

*offset* : int, optional

- If 0, the training data will be used from the beginning of the topic

*islogistic*: int, optional

- If 1, HPDE will switch to logistic modeling; otherwise the model is continuous.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimal algorithm that best fits your data.
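
To tie these parameters together, here is a minimal sketch of a *viperhpdetraining* call that builds a logistic model per device.  Every host, path, topic and stream name below is a placeholder assumed for this example, not something mandated by the library.

```python
# Minimal sketch: train a per-device classification model in real time with viperhpdetraining.
# All credentials, hosts, file paths and stream names are illustrative placeholders.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.viperhpdetraining(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    consumefrom="iot-main-topic",              # maintopic carrying the sub-topic streams
    produceto="iot-trained-params",
    companyname="mycompany",
    consumerid="my-consumer-id",
    producerid="my-producer-id",
    hpdehost="http://127.0.0.1",
    viperconfigfile="/viper/viper.env",
    hpdeport=8001, deploy=1, modelruns=100,
    topicid=1,                                 # one model for this IoT device
    maintopic="iot-main-topic",
    independentvariables="temperature,humidity",
    dependentvariable="power",
    rollbackoffsets=90,                        # rollback percentage for the training dataset
    fullpathtotrainingdata="/viper/viperlogs/iotdata",
    islogistic=1,
    processlogic="classification_name=my_prob:temperature=20.5,30:humidity=50,55")
print(result)                                  # JSON of the best-fitting algorithm
```
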
**8.1 maadstml.viperhpdetrainingbatch(vipertoken,host,port,consumefrom,produceto,companyname,consumerid,producerid,
    hpdehost,viperconfigfile,enabletls=1,partition=-1,deploy=0,modelruns=50,modelsearchtuner=80,hpdeport=-999,
    offset=-1,islogistic=0,brokerhost='', brokerport=-999,timeout=120,microserviceid='',topicid="-999",maintopic='',
    independentvariables='',dependentvariable='',rollbackoffsets=0,fullpathtotrainingdata='',processlogic='',
    identifier='',array=0,timedelay=0,asynctimeout=120)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

- Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause VIPER to pause
  *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
  every 3600 seconds, it may make sense to set timedelay=3600

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can create individual
  Machine Learning models for each IoT device in real-time.  This is a core functionality of TML solutions.
  Separate multiple topicids by a comma.

*array* : int, optional

- Set array=1 if the data you are consuming from is an array of multiple streams that you produced from
  viperproducetotopic in an effort to synchronize data for training.

*maintopic* : string, optional

- This is the maintopic that contains the sub-topic streams.

*independentvariables* : string, optional

- These are the independent variables that are the subtopics.

*dependentvariable* : string, optional

- This is the dependent variable in the subtopic streams.

*rollbackoffsets*: int, optional

- This is the rollback percentage to create the training dataset.  VIPER will automatically create a training dataset
  using the independent and dependent variable streams.

*fullpathtotrainingdata*: string, optional

- This is the FULL path where you want to store the training dataset.  VIPER will write the file to disk.  Make sure proper
  permissions are granted to VIPER.  For example, **c:/myfolder/mypath**

*processlogic* : string, optional

- You can dynamically build a classification model by specifying how you want to classify the dependent variable by
  indicating your conditions in the processlogic variable (this will take effect if islogistic=1).  For example,
  **processlogic='classification_name=my_prob:temperature=20.5,30:humidity=50,55'** means the following:

  1. The name of the dependent variable is specified by **classification_name**
  2. Then you can specify the conditions on the streams.  If your streams are Temperature and Humidity:
     if Temperature is between 20.5 and 30, then my_prob=1, otherwise my_prob=0, and
     if Humidity is between 50 and 55, then my_prob=1, otherwise my_prob=0
  3. If you want to specify no upper bound you can use *n*, or *-n* for no lower bound.
     For example, **temperature=20.5,n** means if temperature >= 20.5 then my_prob=1,
     and **humidity=-n,55** means if humidity <= 55 then my_prob=1

- This allows you to classify the dependent variable with any number of variables, all in real-time!

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*identifier*: string, optional

- You can add any name or identifier like a DSN ID

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*viperconfigfile* : string, required

- Full path to the VIPER.ENV configuration file on the server.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by Kafka to store data.  NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*deploy*: int, optional

- If deploy=1, this will deploy the algorithm to the Deploy folder for use in production.  If you are just
  testing the algorithm and do not want to use it in production, set deploy=0 (default).

*modelruns*: int, optional

- Number of iterations for model training

*modelsearchtuner*: int, optional

- An integer between 0-100; this variable will attempt to fine-tune the model search space.  A number close to 0
  means you will
  have lots of models but their quality may be low; a number close to 100 (default=80) means you will have fewer models but their
  quality will be higher

*hpdeport*: int, required

- Port number HPDE is listening on

*offset* : int, optional

- If 0, the training data will be used from the beginning of the topic

*islogistic*: int, optional

- If 1, HPDE will switch to logistic modeling; otherwise the model is continuous.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the optimal algorithm that best fits your data.

**9. maadstml.viperproducetotopicstream(vipertoken,host,port,topic,producerid,offset,maxrows=0,enabletls=0,delay=100,
    brokerhost='',brokerport=-999,microserviceid='',topicid=-999,mainstreamtopic='',streamstojoin='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to produce to in the Kafka broker - this is a topic that contains multiple topics; VIPER will consume from each topic and
  write the results to the produceto topic

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can join their streams
  and produce them to one stream.

*mainstreamtopic*: string, optional

- This is the main stream topic that contains the subtopic streams.

*streamstojoin*: string, optional

- These are the streams you want to join and produce to mainstreamtopic.

*producerid* : string, required

- Producerid of the topic being produced to

*offset* : int

- If 0, the stream data will be used from the beginning of the topics; -1 will automatically go to the last offset

*maxrows* : int, optional

- If offset=-1, this number will rollback the streams by maxrows amount, i.e. rollback=lastoffset-maxrows

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the streams produced to the topic.
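
For example, a minimal sketch of joining sub-topic streams with *viperproducetotopicstream*; the topic and stream names are placeholders.

```python
# Minimal sketch: join sub-topic streams into one main stream with viperproducetotopicstream.
# Topic and stream names are hypothetical.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.viperproducetotopicstream(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    topic="iot-main-topic",                  # topic holding the sub-topic streams
    producerid="my-producer-id",
    offset=-1, maxrows=500,                  # roll back 500 offsets from the last offset
    topicid=1,
    mainstreamtopic="iot-joined-stream",
    streamstojoin="temperature,humidity,power")
print(result)
```
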
**10. maadstml.vipercreatetrainingdata(vipertoken,host,port,consumefrom,produceto,dependentvariable,
    independentvariables,consumerid,producerid,companyname,partition=-1,enabletls=0,delay=100,
    brokerhost='',brokerport=-999,microserviceid='',topicid=-999)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required

- Topic to consume from

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices with 10 subtopic streams,
  you can assign a Topicid to each IoT device and each of the 10 subtopics will be associated to each IoT device.
  You can create a training dataset for each device.

*produceto* : string, required

- Topic to produce to

*dependentvariable* : string, required

- Topic name of the dependent variable

*independentvariables* : string, required

- Topic names of the independent variables - VIPER will automatically read the data streams.
  Separate multiple variables by a comma.

*consumerid* : string, required

- Consumerid of the topic to consume from

*producerid* : string, required

- Producerid of the topic being produced to

*partition* : int, optional

- This is the partition in which Kafka stored the stream data.  Specifically, the streams you joined
  with *viperproducetotopicstream* will be stored in a partition by Kafka; if you
  want to create a training dataset from these data, you should use this partition.  This
  ensures you are using the right data to create the training dataset.

*companyname* : string, required

- Your company name

*enabletls*: int, optional

- Set to 1 if the Kafka broker is enabled for SSL/TLS encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the training data set.
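
A minimal sketch of creating a training dataset from the joined streams follows; the names are placeholders and assume the hypothetical joined stream produced above.

```python
# Minimal sketch: build a training dataset from joined streams with vipercreatetrainingdata.
# Stream and topic names are hypothetical.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.vipercreatetrainingdata(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    consumefrom="iot-joined-stream",         # output of viperproducetotopicstream
    produceto="iot-trainingdata",
    dependentvariable="power",
    independentvariables="temperature,humidity",
    consumerid="my-consumer-id",
    producerid="my-producer-id",
    companyname="mycompany",
    partition=-1, topicid=1)
print(result)                                # JSON of the training dataset
```
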
**11. maadstml.vipercreatetopic(vipertoken,host,port,topic,companyname,contactname,contactemail,location,
    description,enabletls=0,brokerhost='',brokerport=-999,numpartitions=1,replication=1,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to create

*companyname* : string, required

- Company name of the consumer

*contactname* : string, required

- Contact name of the consumer

*contactemail* : string, required

- Contact email of the consumer

*location* : string, required

- Location of the consumer

*description* : string, required

- Description of why the consumer wants to subscribe to the topic

*enabletls* : int, optional

- Set to 1 if Kafka is SSL/TLS enabled for encrypted traffic, otherwise 0 for no encryption (plaintext)

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*numpartitions*: int, optional

- Number of partitions to create in the Kafka broker - the more partitions, the faster Kafka will produce results.

*replication*: int, optional

- Specifies the number of brokers to replicate to - this is important for failover

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the producer id for the topic.
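
For illustration, a minimal sketch of creating a topic; the contact details and topic name are placeholders.

```python
# Minimal sketch: create a Kafka topic via VIPER with vipercreatetopic.
# Contact details, broker settings and the topic name are placeholders.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.vipercreatetopic(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    topic="iot-main-topic",
    companyname="mycompany",
    contactname="Jane Doe",
    contactemail="jane@mycompany.com",
    location="Toronto",
    description="Main stream for IoT sub-topics",
    enabletls=1,
    numpartitions=3,        # more partitions, faster results
    replication=2)          # replicate to two brokers for failover
print(result)               # JSON containing the producer id for the topic
```
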
**12. maadstml.viperconsumefromstreamtopic(vipertoken,host,port,topic,consumerid,companyname,partition=-1,
    enabletls=0,delay=100,offset=0,brokerhost='',brokerport=-999,microserviceid='',topicid=-999)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to consume from

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can consume
  for each device.

*consumerid* : string, required

- Consumerid associated with the topic

*companyname* : string, required

- Your company name

*partition*: int, optional

- Set to a Kafka partition number, or -1 to autodetect the partition.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*offset* : int, optional

- Offset to start reading from.  If 0, VIPER will read from the beginning.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the contents of all the topics read.

**13. maadstml.vipercreatejointopicstreams(vipertoken,host,port,topic,topicstojoin,companyname,contactname,contactemail,
    description,location,enabletls=0,brokerhost='',brokerport=-999,replication=1,numpartitions=1,microserviceid='',
    topicid=-999)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to consume from

*topicid* : int, optional

- Topicid represents an id for some entity.  Create a joined topic stream per topicid.

*topicstojoin* : string, required

- Enter two or more topics separated by a comma and VIPER will join them into one topic

*companyname* : string, required

- Company name of the consumer

*contactname* : string, required

- Contact name of the consumer

*contactemail* : string, required

- Contact email of the consumer

*location* : string, required

- Location of the consumer

*description* : string, required

- Description of why the consumer wants to subscribe to the topic

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled, otherwise set to 0 for plaintext.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*numpartitions* : int, optional

- Number of partitions

*replication* : int, optional

- Replication factor

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the producerid of the joined streams.
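
A minimal sketch of joining topics with *vipercreatejointopicstreams*; all names are placeholders.

```python
# Minimal sketch: join several topics into one stream with vipercreatejointopicstreams.
# All names and contact details are placeholders.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.vipercreatejointopicstreams(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    topic="iot-joined-stream",                   # topic that will hold the join
    topicstojoin="temperature,humidity,power",   # two or more topics, comma-separated
    companyname="mycompany",
    contactname="Jane Doe",
    contactemail="jane@mycompany.com",
    description="Joined IoT streams",
    location="Toronto",
    topicid=1)
print(result)    # JSON with the producerid of the joined streams
```
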
**14. maadstml.vipercreateconsumergroup(vipertoken,host,port,topic,groupname,companyname,contactname,contactemail,
    description,location,enabletls=1,brokerhost='',brokerport=-999,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to add to the group; multiple (active) topics can be separated by a comma

*groupname* : string, required

- Enter the name of the group

*companyname* : string, required

- Company name of the consumer

*contactname* : string, required

- Contact name of the consumer

*contactemail* : string, required

- Contact email of the consumer

*location* : string, required

- Location of the consumer

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled, otherwise set to 0 for plaintext.

*description* : string, required

- Description of why the consumer wants to subscribe to the topic

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the groupid of the group.
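
A minimal sketch of creating a consumer group over two hypothetical insight topics:

```python
# Minimal sketch: create a consumer group over one or more topics with vipercreateconsumergroup.
# Group, topic and contact details are placeholders.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.vipercreateconsumergroup(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    topic="iot-predictions,iot-optimal-values",   # multiple active topics, comma-separated
    groupname="iot-insights-group",
    companyname="mycompany",
    contactname="Jane Doe",
    contactemail="jane@mycompany.com",
    description="Group consuming TML insights",
    location="Toronto")
print(result)    # JSON with the groupid of the group
```
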
**15. maadstml.viperconsumergroupconsumefromtopic(vipertoken,host,port,topic,consumerid,groupid,companyname,
    partition=-1,enabletls=0,delay=100,offset=0,rollbackoffset=0,brokerhost='',brokerport=-999,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to add to the group; multiple (active) topics can be separated by a comma

*consumerid* : string, required

- Enter the consumerid associated with the topic

*groupid* : string, required

- Enter the group's id

*companyname* : string, required

- Enter the company name

*partition*: int, optional

- Set to a Kafka partition number, or -1 to autodetect the partition.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled, otherwise set to 0 for plaintext.

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*offset* : int, optional

- Offset to start reading from.  If 0, VIPER will read from the beginning of the topic; -1 will automatically go to the end of the topic.

*rollbackoffset* : int, optional

- The number of offsets to rollback the data stream.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the contents of the group.
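
A minimal sketch of consuming through the group created above; the groupid shown is a placeholder for the id returned by *vipercreateconsumergroup*.

```python
# Minimal sketch: consume topic contents through a consumer group.
# The groupid would come from vipercreateconsumergroup; all values are placeholders.
import maadstml

VIPERTOKEN = "your-viper-token"

result = maadstml.viperconsumergroupconsumefromtopic(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    topic="iot-predictions",
    consumerid="my-consumer-id",
    groupid="groupid-from-vipercreateconsumergroup",
    companyname="mycompany",
    offset=-1,
    rollbackoffset=100)      # read the last 100 offsets
print(result)                # JSON of the contents of the group
```
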
**16. maadstml.vipermodifyconsumerdetails(vipertoken,host,port,topic,companyname,consumerid,contactname='',
    contactemail='',location='',brokerhost='',brokerport=9092,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to modify the consumer details for; multiple (active) topics can be separated by a comma

*consumerid* : string, required

- Enter the consumerid associated with the topic

*companyname* : string, required

- Enter the company name

*contactname* : string, optional

- Enter the contact name

*contactemail* : string, optional

- Enter the contact email

*location* : string, optional

- Enter the location

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure

**17. maadstml.vipermodifytopicdetails(vipertoken,host,port,topic,companyname,partition=0,enabletls=1,
    isgroup=0,contactname='',contactemail='',location='',brokerhost='',brokerport=9092,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to modify; multiple (active) topics can be separated by a comma

*companyname* : string, required

- Enter the company name

*partition* : int, optional

- You can change the partition in the Kafka topic.

*enabletls* : int, optional

- If enabletls=1, then SSL/TLS is enabled in Kafka; if enabletls=0, it is not.

*isgroup* : int, optional

- This tells VIPER whether this is a group topic (isgroup=1) or a normal topic (isgroup=0)

*contactname* : string, optional

- Enter the contact name

*contactemail* : string, optional

- Enter the contact email

*location* : string, optional

- Enter the location

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure

**18. maadstml.viperactivatetopic(vipertoken,host,port,topic,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to activate

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure

**19. maadstml.viperdeactivatetopic(vipertoken,host,port,topic,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to deactivate

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure
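
A minimal sketch of deactivating and reactivating a topic; the topic name is a placeholder.

```python
# Minimal sketch: deactivate a topic while it is not needed, then reactivate it.
# The topic name is a placeholder.
import maadstml

VIPERTOKEN = "your-viper-token"
host, port = "http://127.0.0.1", 8000

# Stop VIPER from working with the topic...
print(maadstml.viperdeactivatetopic(VIPERTOKEN, host, port, "iot-predictions"))

# ...and bring it back when needed.
print(maadstml.viperactivatetopic(VIPERTOKEN, host, port, "iot-predictions"))
```
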
**20. maadstml.vipergroupactivate(vipertoken,host,port,groupname,groupid,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*groupname* : string, required

- Name of the group

*groupid* : string, required

- ID of the group

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure

**21. maadstml.vipergroupdeactivate(vipertoken,host,port,groupname,groupid,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*groupname* : string, required

- Name of the group

*groupid* : string, required

- ID of the group

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns success/failure

**22. maadstml.viperdeletetopics(vipertoken,host,port,topic,enabletls=1,brokerhost='',brokerport=9092,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to delete.  Separate multiple topics by a comma.

*enabletls* : int, optional

- If enabletls=1, then SSL/TLS is enabled on Kafka; if enabletls=0, it is not.

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- Microservice to access VIPER

**23. maadstml.balancebigdata(localcsvfile,numberofbins,maxrows,outputfile,bincutoff,distcutoff,startcolumn=0)**

**Parameters:**

*localcsvfile* : string, required

- Local file, which must be CSV formatted.

*numberofbins* : int, required

- The number of bins for the histogram.  You can set this to any value, but 10 is usually fine.

*maxrows* : int, required

- The number of rows to return, which will be a subset of your original data.

*outputfile* : string, required

- Your new data will be written as CSV to this file.

*bincutoff* : float, required

- This is the threshold percentage for the bins.  Specifically, the data in each variable are allocated to bins, but many
  times they will not fall in ALL of the bins.  By setting this percentage between 0 and 1, MAADS will choose variables that
  exceed this threshold to determine which variables have data that are well distributed across bins.  The variables
  with the most distributed values in the bins will drive the selection of the rows in your dataset that give the best
  distribution - this will be very important for MAADS training.  Usually 0.7 is good.

*distcutoff* : float, required

- This is the threshold percentage for the distribution.  Specifically, MAADS uses a Lilliefors statistic to determine whether
  the data are well distributed.  The lower the number the better.  Usually 0.45 is good.

*startcolumn* : int, optional

- This tells MAADS which column to start from.  If you have DATE in the first column, you can tell MAADS to start from 1 (columns are zero-based).

RETURNS: Returns a detailed JSON object, and the new balanced dataset is written to outputfile.
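
A minimal sketch of balancing a large CSV file; the file paths are placeholders and the thresholds follow the guidance above.

```python
# Minimal sketch: subsample a large CSV into a well-distributed dataset with balancebigdata.
# The file paths are placeholders.
import maadstml

result = maadstml.balancebigdata(
    localcsvfile="c:/myfolder/bigdata.csv",
    numberofbins=10,        # 10 histogram bins is usually fine
    maxrows=5000,           # rows to keep in the balanced subset
    outputfile="c:/myfolder/balanceddata.csv",
    bincutoff=0.7,          # bin-coverage threshold, usually 0.7 is good
    distcutoff=0.45,        # Lilliefors distribution threshold, usually 0.45 is good
    startcolumn=1)          # skip a DATE column at index 0
print(result)               # detailed JSON; the balanced data is written to outputfile
```
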
**24. maadstml.viperanomalytrain(vipertoken,host,port,consumefrom,produceto,producepeergroupto,produceridpeergroup,consumeridproduceto,
    streamstoanalyse,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
    enabletls=1,partition=-1,hpdeport=-999,topicid=-999,maintopic='',rollbackoffsets=0,fullpathtotrainingdata='',
    brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly detection/predictions
  for each device.

*maintopic* : string, optional

- This is the maintopic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to rollback the streams that you are analysing: streamstoanalyse

*fullpathtotrainingdata*: string, optional

- This is the full path to the training dataset to use to find the peer groups.

*producepeergroupto* : string, required

- Topic to produce the peer group for anomaly comparisons

*produceridpeergroup* : string, required

- Producerid for the peer group topic

*consumeridproduceto* : string, required

- Consumer id for the produceto topic

*streamstoanalyse* : string, required

- Comma-separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *topic=[topic name],topictype=[numeric or string],threshnumber=[a number between 0 and 10000, i.e. 200],
  lag=[a number between 1 and 20, i.e. 5],zthresh=[a number between 1 and 5, i.e. 2.5],influence=[a number between 0 and 1, i.e. 0.5]*

  *threshnumber*: decimal number to determine usual behaviour - only for numeric streams.  Numbers are compared to the centroid number;
  a standardized distance is taken and all numbers below threshnumber are deemed as usual.  For example, with threshnumber=200, any value
  below it is close to the centroid - you need to experiment with this number.

  *lag*: number of lags for the moving mean window; works to smooth the function, i.e. lag=5

  *zthresh*: number of standard deviations from the moving mean, i.e. 3.5

  *influence*: strength in identifying outliers for both stationary and non-stationary data; i.e. influence=0 ignores outliers
  when recalculating the new threshold, and influence=1 is least robust.  Influence should be between (0,1), i.e. influence=0.5

  Flags must be provided for each topic.  Separate multiple flags by ~

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*viperconfigfile* : string, required

- Full path to the VIPER.ENV configuration file on the server.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by Kafka to store data.  NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*hpdeport*: int, required

- Port number HPDE is listening on

*brokerhost* : string, optional

- Address of Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*delay* : int, optional

- Delay parameter to wait for Kafka to respond - in milliseconds.

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the peer groups for all the streams.
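
To make the flags format concrete, here is a minimal sketch of a *viperanomalytrain* call for two numeric streams; the endpoints, topics and flag values are illustrative placeholders.

```python
# Minimal sketch: find peer groups for anomaly detection with viperanomalytrain.
# Endpoints, topics and flag values are placeholders chosen for illustration.
import maadstml

VIPERTOKEN = "your-viper-token"

# One flag set per stream, separated by ~ as described above.
flags = ("topic=temperature,topictype=numeric,threshnumber=200,"
         "lag=5,zthresh=2.5,influence=0.5"
         "~topic=humidity,topictype=numeric,threshnumber=200,"
         "lag=5,zthresh=2.5,influence=0.5")

result = maadstml.viperanomalytrain(
    VIPERTOKEN, "http://127.0.0.1", 8000,
    consumefrom="iot-joined-stream",
    produceto="iot-anomaly-results",
    producepeergroupto="iot-peer-group",
    produceridpeergroup="my-producer-id",
    consumeridproduceto="my-consumer-id",
    streamstoanalyse="temperature,humidity",
    companyname="mycompany",
    consumerid="my-consumer-id",
    producerid="my-producer-id",
    flags=flags,
    hpdehost="http://127.0.0.1", hpdeport=8001,
    viperconfigfile="/viper/viper.env",
    topicid=1, maintopic="iot-main-topic",
    rollbackoffsets=90,
    fullpathtotrainingdata="/viper/viperlogs/peerdata")
print(result)    # JSON of the peer groups for all the streams
```
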
**24.1 maadstml.viperanomalytrainbatch(vipertoken,host,port,consumefrom,produceto,producepeergroupto,produceridpeergroup,consumeridproduceto,
    streamstoanalyse,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
    enabletls=1,partition=-1,hpdeport=-999,topicid="-999",maintopic='',rollbackoffsets=0,fullpathtotrainingdata='',
    brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='',timedelay=0,asynctimeout=120)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the URL where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

- Timedelay is in SECONDS. Because batch runs continuously in the background, this will cause VIPER to pause
  *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
  every 3600 seconds, it may make sense to set timedelay=3600

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly detection/predictions
  for each device.  Separate multiple topicids by a comma.

*maintopic* : string, optional

- This is the maintopic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to rollback the streams that you are analysing: streamstoanalyse

*fullpathtotrainingdata*: string, optional

- This is the full path to the training dataset to use to find the peer groups.

*producepeergroupto* : string, required

- Topic to produce the peer group for anomaly comparisons

*produceridpeergroup* : string, required

- Producerid for the peer group topic

*consumeridproduceto* : string, required

- Consumer id for the produceto topic

*streamstoanalyse* : string, required

- Comma-separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *topic=[topic name],topictype=[numeric or string],threshnumber=[a number between 0 and 10000, i.e. 200],
  lag=[a number between 1 and 20, i.e. 5],zthresh=[a number between 1 and 5, i.e. 2.5],influence=[a number between 0 and 1, i.e. 0.5]*

  *threshnumber*: decimal number to determine usual behaviour - only for numeric streams.  Numbers are compared to the centroid number;
  a standardized distance is taken and all numbers below threshnumber are deemed as usual.  For example, with threshnumber=200, any value
  below it is close to the centroid - you need to experiment with this number.

  *lag*: number of lags for the moving mean window; works to smooth the function, i.e. lag=5

  *zthresh*: number of standard deviations from the moving mean, i.e. 3.5

  *influence*: strength in identifying outliers for both stationary and non-stationary data; i.e. influence=0 ignores outliers
  when recalculating the new threshold, and influence=1 is least robust.  Influence should be between (0,1), i.e. influence=0.5

  Flags must be provided for each topic.  Separate multiple flags by ~

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*viperconfigfile* : string, required

- Full path to the VIPER.ENV configuration file on the server.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by Kafka to store data.
**25. maadstml.viperanomalypredict(vipertoken,host,port,consumefrom,produceto,consumeinputstream,produceinputstreamtest,produceridinputstreamtest,
                      streamstoanalyse,consumeridinputstream,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
                      enabletls=1,partition=-1,hpdeport=-999,topicid=-999,maintopic='',rollbackoffsets=0,fullpathtopeergroupdata='',
                      brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*consumeinputstream* : string, required

- Topic of the input stream to test for anomalies

*produceinputstreamtest* : string, required

- Topic to store the input stream data for analysis

*produceridinputstreamtest* : string, required

- Producer id for the produceinputstreamtest topic

*streamstoanalyse* : string, required

- Comma separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *riskscore=[a number between 0 and 1]~complete=[and, or, pvalue, i.e. p50 means streams over 50% that have an anomaly]~type=[and, or - this will
  determine what logic to apply to v and sc],topic=[topic name],topictype=[numeric or string],v=[v>some value, v<some value, or valueany],
  sc=[sc>some number, sc<some number - this is the score for the anomaly test]*

  If using strings, then specify the flags: type=[and,or],topic=[topic name],topictype=string,stringcontains=[0 or 1 - 1 will do a substring test,
  0 will equate the strings],v2=[any text you want to test - use | for OR or ^ for AND],sc=[score value, sc<some value, sc>some value]

  *riskscore*: this is the riskscore threshold.  A decimal number between 0 and 1; use this as a threshold to flag anomalies.

  *complete* : If using multiple streams, this will test each stream's computed riskscore and perform an AND or OR across the risk values,
  taking an average of the risk scores if using AND.  Otherwise, if at least one stream exceeds the riskscore it will return.

  *type*: AND or OR - if using v or sc, this is used to apply the appropriate logic between v and sc.  For example, if type=or, then VIPER
  will check whether the test value is less than or greater than v, OR whether the standardized value is less than or greater than sc.

  *sc*: is a standardized variance between the peer group value and the test value.

  *v*: is a user chosen value which can be used to test for a particular value.  For example, if you want to flag values less than 0,
  then choose v<0 and VIPER will flag them as anomalous.

  *v2*: if analysing string streams, v2 can be strings you want to check for.  For example, if you want to check for two
  strings, Failed and Attempt Failed, then set v2=Failed^Attempt Failed, where ^ tells VIPER to perform an AND operation.
  If you want either to exist, set v2=Failed|Attempt Failed, where | tells VIPER to perform an OR operation.

  *stringcontains* : if using string streams and you want to see if a particular text value exists and flag it - then
  if stringcontains=1, VIPER will test for substrings, otherwise it will equate the strings.

  Flags must be provided for each topic.  Separate multiple flags by ~

*consumeridinputstream* : string, required

- Consumer id of the input stream topic: consumeinputstream

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*viperconfigfile* : string, required

- Full path to the VIPER.ENV configuration file on the server.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by Kafka to store data.  NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*hpdeport*: int, required

- Port number HPDE is listening on

*topicid* : int, optional

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly
  prediction for each device.

*maintopic* : string, optional

- This is the main topic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to roll back the streams that you are analysing: streamstoanalyse

*fullpathtopeergroupdata*: string, optional

- This is the full path to the peer group you found in viperanomalytrain; this will be used for anomaly detection.

*brokerhost* : string, optional

- Address of the Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*delay* : int, optional

- Delay parameter to wait for Kafka to respond - in milliseconds.

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the peer groups for all the streams.
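As a minimal sketch, an anomaly prediction call could look like the following; all values are hypothetical
placeholders for your own setup, and the peer group file is the one produced by viperanomalytrain:

     import maadstml

     # Flag anomalies when riskscore > 0.7: test values below 0 (v<0) OR a
     # standardized variance above 3 (sc>3) on the temperature stream
     flags = ("riskscore=0.7~complete=and~type=or,topic=temperature,"
              "topictype=numeric,v=v<0,sc=sc>3")

     result = maadstml.viperanomalypredict("your-viper-token", "http://127.0.0.1", 8000,
         consumefrom="iot-trained-params", produceto="iot-anomaly-results",
         consumeinputstream="iot-raw-input", produceinputstreamtest="iot-input-test",
         produceridinputstreamtest="inputtest-producer-id",
         streamstoanalyse="temperature", consumeridinputstream="input-consumer-id",
         companyname="mycompany", consumerid="consumer-id", producerid="producer-id",
         flags=flags, hpdehost="http://127.0.0.1", viperconfigfile="/viper/viper.env",
         hpdeport=8001, fullpathtopeergroupdata="/viper/peergroup.json")
     print(result)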
**25.1 maadstml.viperanomalypredictbatch(vipertoken,host,port,consumefrom,produceto,consumeinputstream,produceinputstreamtest,produceridinputstreamtest,
                      streamstoanalyse,consumeridinputstream,companyname,consumerid,producerid,flags,hpdehost,viperconfigfile,
                      enabletls=1,partition=-1,hpdeport=-999,topicid="-999",maintopic='',rollbackoffsets=0,fullpathtopeergroupdata='',
                      brokerhost='',brokerport=9092,delay=1000,timeout=120,microserviceid='',timedelay=0,asynctimeout=120)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

- Timedelay is in SECONDS.  Because batch runs continuously in the background, this will cause VIPER to pause
  *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
  every 3600 seconds, it may make sense to set timedelay=3600

*consumefrom* : string, required

- Topic to consume from in the Kafka broker

*produceto* : string, required

- Topic to produce results of the prediction to

*consumeinputstream* : string, required

- Topic of the input stream to test for anomalies

*produceinputstreamtest* : string, required

- Topic to store the input stream data for analysis

*produceridinputstreamtest* : string, required

- Producer id for the produceinputstreamtest topic

*streamstoanalyse* : string, required

- Comma separated list of streams to analyse for anomalies

*flags* : string, required

- These are flags that will be used to select the peer group for each stream.  The flags must have the following format:
  *riskscore=[a number between 0 and 1]~complete=[and, or, pvalue, i.e. p50 means streams over 50% that have an anomaly]~type=[and, or - this will
  determine what logic to apply to v and sc],topic=[topic name],topictype=[numeric or string],v=[v>some value, v<some value, or valueany],
  sc=[sc>some number, sc<some number - this is the score for the anomaly test]*

  If using strings, then specify the flags: type=[and,or],topic=[topic name],topictype=string,stringcontains=[0 or 1 - 1 will do a substring test,
  0 will equate the strings],v2=[any text you want to test - use | for OR or ^ for AND],sc=[score value, sc<some value, sc>some value]

  *riskscore*: this is the riskscore threshold.  A decimal number between 0 and 1; use this as a threshold to flag anomalies.

  *complete* : If using multiple streams, this will test each stream's computed riskscore and perform an AND or OR across the risk values,
  taking an average of the risk scores if using AND.  Otherwise, if at least one stream exceeds the riskscore it will return.

  *type*: AND or OR - if using v or sc, this is used to apply the appropriate logic between v and sc.  For example, if type=or, then VIPER
  will check whether the test value is less than or greater than v, OR whether the standardized value is less than or greater than sc.

  *sc*: is a standardized variance between the peer group value and the test value.

  *v*: is a user chosen value which can be used to test for a particular value.  For example, if you want to flag values less than 0,
  then choose v<0 and VIPER will flag them as anomalous.

  *v2*: if analysing string streams, v2 can be strings you want to check for.  For example, if you want to check for two
  strings, Failed and Attempt Failed, then set v2=Failed^Attempt Failed, where ^ tells VIPER to perform an AND operation.
  If you want either to exist, set v2=Failed|Attempt Failed, where | tells VIPER to perform an OR operation.

  *stringcontains* : if using string streams and you want to see if a particular text value exists and flag it - then
  if stringcontains=1, VIPER will test for substrings, otherwise it will equate the strings.

  Flags must be provided for each topic.  Separate multiple flags by ~

*consumeridinputstream* : string, required

- Consumer id of the input stream topic: consumeinputstream

*companyname* : string, required

- Your company name

*consumerid*: string, required

- Consumerid associated with the topic to consume from

*producerid*: string, required

- Producerid associated with the topic to produce to

*hpdehost*: string, required

- Address of HPDE

*viperconfigfile* : string, required

- Full path to the VIPER.ENV configuration file on the server.

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise set to 0 for plaintext.

*partition*: int, optional

- Partition used by Kafka to store data.  NOTE: Kafka will dynamically store data in partitions.
  Unless you know for sure the partition, you should use the default of -1 to let VIPER
  determine where your data is.

*hpdeport*: int, required

- Port number HPDE is listening on

*topicid* : string, required

- Topicid represents an id for some entity.  For example, if you have 1000 IoT devices, you can perform anomaly
  prediction for each device.  Separate multiple topic ids by a comma.

*maintopic* : string, optional

- This is the main topic that contains the subtopic streams.

*rollbackoffsets*: int, optional

- This is the percentage to roll back the streams that you are analysing: streamstoanalyse

*fullpathtopeergroupdata*: string, optional

- This is the full path to the peer group you found in viperanomalytrain; this will be used for anomaly detection.

*brokerhost* : string, optional

- Address of the Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*delay* : int, optional

- Delay parameter to wait for Kafka to respond - in milliseconds.

*timeout* : int, optional

- Number of seconds that VIPER waits when trying to make a connection to HPDE.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: Returns a JSON object of the peer groups for all the streams.
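The batch variant mirrors the call in section 25, adding multiple topic ids and the batch timing parameters.
A minimal sketch with hypothetical placeholder values:

     import maadstml

     flags = ("riskscore=0.7~complete=and~type=or,topic=temperature,"
              "topictype=numeric,v=v<0,sc=sc>3")

     result = maadstml.viperanomalypredictbatch("your-viper-token", "http://127.0.0.1", 8000,
         consumefrom="iot-trained-params", produceto="iot-anomaly-results",
         consumeinputstream="iot-raw-input", produceinputstreamtest="iot-input-test",
         produceridinputstreamtest="inputtest-producer-id",
         streamstoanalyse="temperature", consumeridinputstream="input-consumer-id",
         companyname="mycompany", consumerid="consumer-id", producerid="producer-id",
         flags=flags, hpdehost="http://127.0.0.1", viperconfigfile="/viper/viper.env",
         hpdeport=8001, topicid="1,2,3",
         fullpathtopeergroupdata="/viper/peergroup.json",
         timedelay=3600, asynctimeout=120)  # pause 1 hour between batch reads/writes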
**26. maadstml.viperpreprocessproducetotopicstream(VIPERTOKEN,host,port,topic,producerid,offset,maxrows=0,enabletls=0,delay=100,
                brokerhost='',brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',preprocesslogic='',
                preprocessconditions='',identifier='',preprocesstopic='',array=0,saveasarray=0,rawdataoutput=0)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to produce to in the Kafka broker - this is a topic that contains multiple topics; VIPER will consume from each
  topic and write the aggregated results back to this stream.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.

*rawdataoutput* : int, optional

- Set rawdataoutput=1 and the raw data used for preprocessing will be added to the output JSON.

*preprocessconditions* : string, optional

- You can set conditions on the aggregate functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED,
  ANOMPROB, ANOMPROBX-Y, CONSISTENCY,
  ENTROPY, AUTOCORR, TREND, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (Coefficient of Variation),
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff.  For Timediff, time should be in this layout: 2006-01-02T15:04:05.
  Timediff returns the difference in seconds between the first date/time and the last datetime.  Avgtimediff returns the
  average time in seconds between consecutive dates.
  Spikedetect uses a Zscore method to detect spikes in the data, using a lag of 5, a standard deviation of 3.5 from the mean, and an influence of 0.5.
  Geodiff returns the distance in kilometers between two lat/long points.
  Unique checks numeric data for duplication.  Returns 1 if there is no data duplication (unique), 0 otherwise.

  Dataage_[UTC offset]_[timetype]: dataage can be used to check the last update time of the data in the data stream against the
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
  will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
  between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Uniquestr checks string data for duplication.  Returns 1 if there is no data duplication (unique), 0 otherwise.

  Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.

  Uniquestrcount checks string data for duplication.  Returns the count of unique strings.

  CONSISTENCY checks if the data all have consistent data types.  Returns 1 for consistent data types, 0 otherwise.

  Meanci95 or Meanci99 returns a 95% or 99% confidence interval: mean, low, high.

  RAW for no processing.

  ANOMPROB=Anomaly Probability; it will run several algorithms on the data stream window to determine a probability of anomalous
  behaviour.  This can be cross-referenced with OUTLIERS.  It can be a very powerful way to detect
  issues with devices.  VARIED will determine if the values in the window are all the same, or varied: it will return 1 for varied,
  0 if values are all the same.  This is useful if you want to know if something changed in the stream.

  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers or "n"; "n" means examine all anomalies for patterns.
  They allow you to check whether the anomalies in the streams are truly anomalies and not some
  pattern.  For example, if an IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact
  it is normal behaviour.  So, to ignore these cases, ANOMPROB2-5 tells VIPER to check anomalies with patterns of 2-5 peaks.
  If the stream has two classes, and these two classes are like 0 and 1000 and show a pattern, then they should not be considered an anomaly.
  Meaning, class=0 is the device shutting down, class=1000 is the device turning back on.  If ANOMPROB3-10, VIPER will check for
  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.

  For example, preprocessconditions='humidity=55,60:temperature=34,n' and preprocesslogic='max,count' means:
  get the MAX of the values in humidity if humidity is between [55,60], and count the values in
  temperature if temperature >= 34.

*preprocesstopic* : string, optional

- You can specify a topic for the preprocessed message.  VIPER will automatically dump the preprocessed results to this topic.

*identifier* : string, optional

- Add any identifier like a DSN ID.

*producerid* : string, required

- Producerid of the topic being produced to

*offset* : int, optional

- If 0, VIPER will use the stream data from the beginning of the topics; -1 will automatically go to the last offset

*saveasarray* : int, optional

- Set to 1 to save the preprocessed JSONs as a JSON array.  This is very helpful if you want to do machine learning
  or further query the preprocessed JSON, because each processed JSON is time-synchronized.  For example, if you want to compare
  different preprocessed streams, the date/time of the data is synchronized to give you the impacts of one
  stream on another.

*maxrows* : int, optional

- If offset=-1, this number will roll back the streams by the maxrows amount, i.e. rollback=lastoffset-maxrows

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of the Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : int, optional

- This represents the IoT device number or any entity

*streamstojoin* : string, optional

- If you entered topicid, you need to enter the streams you want to pre-process

*preprocesslogic* : string, optional

- Here you need to specify how you want to pre-process the streams.  You can perform the following operations:
  MAX, MIN, AVG, COUNT, COUNTSTR, SUM, DIFF, DIFFMARGIN, VARIANCE, MEDIAN, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB, ANOMPROBX-Y, ENTROPY,
  AUTOCORR, TREND, CONSISTENCY, Unique, Uniquestr, Geodiff (returns the distance in kilometers between two lat/long points),
  IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (Coefficient of Variation),
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Timediff.  For Timediff, time should be in this layout: 2006-01-02T15:04:05.
  Timediff returns the difference in seconds between the first date/time and the last datetime.  Avgtimediff returns the
  average time in seconds between consecutive dates.
  Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.

  Dataage_[UTC offset]_[timetype]: dataage can be used to check the last update time of the data in the data stream against the
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
  will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
  between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Uniquestrcount checks string data for duplication.  Returns the count of unique strings.

  Meanci95 or Meanci99 returns a 95% or 99% confidence interval: mean, low, high.

  RAW for no processing.

  Spikedetect uses a Zscore method to detect spikes in the data, using a lag of 5, a standard deviation of 3.5 from the mean, and an influence of 0.5.

  The order of the operations must match the
  order of the streams.  If you specified topicid, you can perform TML on the new preprocessed stream by appending
  _preprocessed_processlogic.
  For example, if streamstojoin="stream1,stream2,stream3" and preprocesslogic="min,max,diff", the new streams will be:
  stream1_preprocessed_Min, stream2_preprocessed_Max, stream3_preprocessed_Diff.

RETURNS: Returns preprocessed JSON.
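As a minimal sketch with hypothetical placeholder values, this preprocesses two streams for one device,
taking the max of temperature and the average of humidity:

     import maadstml

     result = maadstml.viperpreprocessproducetotopicstream("your-viper-token",
         "http://127.0.0.1", 8000,
         topic="iot-mainstream", producerid="producer-id", offset=-1, maxrows=500,
         enabletls=1, topicid=1, streamstojoin="temperature,humidity",
         preprocesslogic="max,avg",           # order matches the streams
         preprocesstopic="iot-preprocessed",
         identifier="DSN-12345", saveasarray=1)
     print(result)  # preprocessed JSON, with streams temperature_preprocessed_Max, humidity_preprocessed_Avg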
**27. maadstml.areyoubusy(host,port)**

**Parameters:**

*host* : string, required

- You can get the host by determining all the hosts that are listening on your machine.
  You can use this code: https://github.com/smaurice101/transactionalmachinelearning/blob/main/checkopenports

*port* : int, required

- You can get the port by determining all the ports that are listening on your machine.
  You can use this code: https://github.com/smaurice101/transactionalmachinelearning/blob/main/checkopenports

RETURNS: Returns a list of available VIPER and HPDE instances with their HOST and PORT.
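For example, a minimal sketch (host and port are hypothetical placeholders):

     import maadstml

     # Ask which VIPER and HPDE services are available on this host/port
     busy = maadstml.areyoubusy("http://127.0.0.1", 8000)
     print(busy)  # list of available VIPER and HPDE instances with HOST and PORT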
**28. maadstml.viperstreamquery(VIPERTOKEN,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                          brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',preprocessconditions='',
                                          identifier='',preprocesstopic='',description='',array=0)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*topic* : string, required

- Topic to produce to in the Kafka broker - this is a topic that contains multiple topics; VIPER will consume from each
  topic and write the aggregated results back to this stream.

*producerid* : string, required

- Producer id of the topic

*offset* : int, optional

- If 0, VIPER will use the stream data from the beginning of the topics; -1 will automatically go to the last offset

*maxrows* : int, optional

- If offset=-1, this number will roll back the streams by the maxrows amount, i.e. rollback=lastoffset-maxrows

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of the Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : int, optional

- This represents the IoT device number or any entity

*streamstojoin* : string, required

- Identify multiple streams to join, separated by commas.  For example, if you preprocessed Power, Current, Voltage:
 **streamstojoin="Power_preprocessed_Avg,Current_preprocessed_Min,Voltage_preprocessed_Avg,Current_preprocessed_Trend"**

*preprocessconditions* : string, required

- You apply strict conditions to a MAX of 3 streams.  You can use >, <, =, AND, OR.  You can add as many conditions as you like.
  Separate multiple conditions by semi-colon.  You **cannot mix** AND and OR.  For example,
  **preprocessconditions='Power_preprocessed_Avg > 139000:Power_preprocessed_Avg < 1000 or Voltage_preprocessed_Avg > 120000
  or Current_preprocessed_Min=0:Voltage_preprocessed_Avg > 120000 and Current_preprocessed_Trend>0'**

*identifier*: string, optional

- Add an identifier text to the result.  This is a label, and is useful if you want to identify the result for some IoT device.

*preprocesstopic* : string, optional

- The topic to produce the query results to.

*description* : string, optional

- You can give each query condition a description.  Separate multiple descriptions by semi-colon.

*array* : int, optional

- Set to 1 if you are reading a JSON ARRAY, otherwise 0.

RETURNS: 1 if the condition is TRUE (condition met), 0 if false (condition not met)
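A minimal sketch of a stream query, mirroring the documented condition example; all connection values are
hypothetical placeholders:

     import maadstml

     conditions = ("Power_preprocessed_Avg > 139000:"
                   "Power_preprocessed_Avg < 1000 or Voltage_preprocessed_Avg > 120000:"
                   "Voltage_preprocessed_Avg > 120000 and Current_preprocessed_Trend>0")

     result = maadstml.viperstreamquery("your-viper-token", "http://127.0.0.1", 8000,
         topic="iot-preprocessed", producerid="producer-id", offset=-1, maxrows=200,
         topicid=1,
         streamstojoin="Power_preprocessed_Avg,Voltage_preprocessed_Avg,Current_preprocessed_Trend",
         preprocessconditions=conditions, identifier="device-42",
         preprocesstopic="iot-query-results",
         description="high power;low power or high voltage;high voltage with rising current")
     print(result)  # 1 if a condition is met, 0 otherwise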
**28.1 maadstml.viperstreamquerybatch(VIPERTOKEN,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                          brokerport=-999,microserviceid='',topicid="-999",streamstojoin='',preprocessconditions='',
                                          identifier='',preprocesstopic='',description='',array=0,timedelay=0,asynctimeout=120)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*timedelay* : int, optional

- Timedelay is in SECONDS.  Because batch runs continuously in the background, this will cause VIPER to pause
  *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
  every 3600 seconds, it may make sense to set timedelay=3600

*topic* : string, required

- Topic to produce to in the Kafka broker - this is a topic that contains multiple topics; VIPER will consume from each
  topic and write the aggregated results back to this stream.

*producerid* : string, required

- Producer id of the topic

*offset* : int, optional

- If 0, VIPER will use the stream data from the beginning of the topics; -1 will automatically go to the last offset

*maxrows* : int, optional

- If offset=-1, this number will roll back the streams by the maxrows amount, i.e. rollback=lastoffset-maxrows

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of the Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : string, required

- This represents the IoT device number or any entity.  Separate multiple topic ids by a comma.

*streamstojoin* : string, required

- Identify multiple streams to join, separated by commas.  For example, if you preprocessed Power, Current, Voltage:
 **streamstojoin="Power_preprocessed_Avg,Current_preprocessed_Min,Voltage_preprocessed_Avg,Current_preprocessed_Trend"**

*preprocessconditions* : string, required

- You apply strict conditions to a MAX of 3 streams.  You can use >, <, =, AND, OR.  You can add as many conditions as you like.
  Separate multiple conditions by semi-colon.  You **cannot mix** AND and OR.  For example,
  **preprocessconditions='Power_preprocessed_Avg > 139000:Power_preprocessed_Avg < 1000 or Voltage_preprocessed_Avg > 120000
  or Current_preprocessed_Min=0:Voltage_preprocessed_Avg > 120000 and Current_preprocessed_Trend>0'**

*identifier*: string, optional

- Add an identifier text to the result.  This is a label, and is useful if you want to identify the result for some IoT device.

*preprocesstopic* : string, optional

- The topic to produce the query results to.

*description* : string, optional

- You can give each query condition a description.  Separate multiple descriptions by semi-colon.

*array* : int, optional

- Set to 1 if you are reading a JSON ARRAY, otherwise 0.

RETURNS: 1 if the condition is TRUE (condition met), 0 if false (condition not met)
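The batch variant is the same query run continuously for several topic ids.  A minimal sketch with
hypothetical placeholders:

     import maadstml

     result = maadstml.viperstreamquerybatch("your-viper-token", "http://127.0.0.1", 8000,
         topic="iot-preprocessed", producerid="producer-id", offset=-1, maxrows=200,
         topicid="1,2,3",   # query the same conditions for devices 1, 2 and 3
         streamstojoin="Power_preprocessed_Avg,Current_preprocessed_Trend",
         preprocessconditions="Power_preprocessed_Avg > 139000 and Current_preprocessed_Trend>0",
         preprocesstopic="iot-query-results",
         description="high power with rising current",
         timedelay=3600, asynctimeout=120)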
**29. maadstml.viperpreprocessbatch(VIPERTOKEN,host,port,topic,producerid,offset,maxrows=0,enabletls=0,delay=100,
                brokerhost='',brokerport=-999,microserviceid='',topicid="-999",streamstojoin='',preprocesslogic='',
                preprocessconditions='',identifier='',preprocesstopic='',array=0,saveasarray=0,timedelay=0,asynctimeout=120,rawdataoutput=0)**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*asynctimeout* : int, optional

- This is the timeout in seconds for the Python library async function.

*rawdataoutput* : int, optional

- Set rawdataoutput=1 to output the raw preprocessing data to the JSON.

*timedelay* : int, optional

- Timedelay is in SECONDS.  Because batch runs continuously in the background, this will cause VIPER to pause
  *timedelay* seconds when reading and writing to Kafka.  For example, if the raw data is being generated
  every 3600 seconds, it may make sense to set timedelay=3600

*topic* : string, required

- Topic to produce to in the Kafka broker - this is a topic that contains multiple topics; VIPER will consume from each
  topic and write the aggregated results back to this stream.

*array* : int, optional

- Set array=1 if you produced data (from viperproducetotopic) as an array.

*preprocessconditions* : string, optional

- You can set conditions on the aggregate functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB, ANOMPROBX-Y,
  ENTROPY, AUTOCORR, TREND, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (Coefficient of Variation),
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff.  For Timediff, time should be in this layout: 2006-01-02T15:04:05.
  Timediff returns the difference in seconds between the first date/time and the last datetime.  Avgtimediff returns the
  average time in seconds between consecutive dates.  Spikedetect uses a Zscore method to detect spikes in the data, using a lag of 5,
  a standard deviation of 3.5 from the mean, and an influence of 0.5.  Geodiff returns the distance in kilometers between two lat/long points.

  Dataage_[UTC offset]_[timetype]: dataage can be used to check the last update time of the data in the data stream against the
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
  will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
  between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Unique checks numeric data for duplication.  Returns 1 if there is no data duplication (unique), 0 otherwise.

  Uniquestr checks string data for duplication.  Returns 1 if there is no data duplication (unique), 0 otherwise.
  Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.
  Uniquestrcount checks string data for duplication.  Returns the count of unique strings.

  Meanci95 or Meanci99 returns a 95% or 99% confidence interval: mean, low, high.

  ANOMPROB=Anomaly Probability; it will run several algorithms on the data stream window to determine a probability of anomalous
  behaviour.  This can be cross-referenced with OUTLIERS.  It can be a very powerful way to detect
  issues with devices.  VARIED will determine if the values in the window are all the same, or varied: it will return 1 for varied,
  0 if values are all the same.  This is useful if you want to know if something changed in the stream.

  ANOMPROBX-Y (similar to OUTLIERSX-Y), where X and Y are numbers or "n"; "n" means examine all anomalies for patterns.
  They allow you to check whether the anomalies in the streams are truly anomalies and not some
  pattern.  For example, if an IoT device shuts off and turns on again routinely, this may be picked up as an anomaly when in fact
  it is normal behaviour.  So, to ignore these cases, ANOMPROB2-5 tells VIPER to check anomalies with patterns of 2-5 peaks.
  If the stream has two classes, and these two classes are like 0 and 1000 and show a pattern, then they should not be considered an anomaly.
  Meaning, class=0 is the device shutting down, class=1000 is the device turning back on.  If ANOMPROB3-10, VIPER will check for
  patterns of classes 3 to 10 to see if they recur routinely.  This is very helpful to reduce false positives and false negatives.

  For example, preprocessconditions='humidity=55,60:temperature=34,n' and preprocesslogic='max,count' means:
  get the MAX of the values in humidity if humidity is between [55,60], and count the values in
  temperature if temperature >= 34.

*preprocesstopic* : string, optional

- You can specify a topic for the preprocessed message.  VIPER will automatically dump the preprocessed results to this topic.

*identifier* : string, optional

- Add any identifier like a DSN ID.  Note: for multiple identifiers per topicid, you can separate them by pipe "|".

*producerid* : string, required

- Producerid of the topic being produced to

*offset* : int, optional

- If 0, VIPER will use the stream data from the beginning of the topics; -1 will automatically go to the last offset

*saveasarray* : int, optional

- Set to 1 to save the preprocessed JSONs as a JSON array.  This is very helpful if you want to do machine learning
  or further query the preprocessed JSON, because each processed JSON is time-synchronized.  For example, if you want to compare
  different preprocessed streams, the date/time of the data is synchronized to give you the impacts of one
  stream on another.

*maxrows* : int, optional

- If offset=-1, this number will roll back the streams by the maxrows amount, i.e. rollback=lastoffset-maxrows

*enabletls*: int, optional

- Set to 1 if the Kafka broker is SSL/TLS enabled for encrypted traffic, otherwise 0 for plaintext

*delay*: int, optional

- Time in milliseconds before VIPER backs out from reading messages

*brokerhost* : string, optional

- Address of the Kafka broker - if none is specified it will use the broker address in the VIPER.ENV file

*brokerport* : int, optional

- Port Kafka is listening on - if none is specified it will use the port in the VIPER.ENV file

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*topicid* : string, required

- This represents the IoT device number or any entity.  You can specify multiple ids
  separated by a comma: topicid="1,2,4,5".

*streamstojoin* : string, optional

- If you entered topicid, you need to enter the streams you want to pre-process

*preprocesslogic* : string, optional

- Here you need to specify how you want to pre-process the streams.  You can perform the following operations:
  MAX, MIN, AVG, COUNT, COUNTSTR, SUM, DIFF, VARIANCE, MEDIAN, OUTLIERS, OUTLIERSX-Y, VARIED, ANOMPROB, ANOMPROBX-Y, ENTROPY, AUTOCORR, TREND,
  IQR (InterQuartileRange), Midhinge, CONSISTENCY, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (Coefficient of Variation),
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff.  For Timediff, time should be in this layout: 2006-01-02T15:04:05.
  Timediff returns the difference in seconds between the first date/time and the last datetime.  Avgtimediff returns the
  average time in seconds between consecutive dates.
  Geodiff returns the distance in kilometers between two lat/long points.
  Spikedetect uses a Zscore method to detect spikes in the data, using a lag of 5, a standard deviation of 3.5 from the mean, and an influence of 0.5.
  Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.
  Uniquestrcount checks string data for duplication.  Returns the count of unique strings.

  Dataage_[UTC offset]_[timetype]: dataage can be used to check the last update time of the data in the data stream against the
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
  will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
  between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Meanci95 or Meanci99 returns a 95% or 99% confidence interval: mean, low, high.

  The order of the operations must match the
  order of the streams.  If you specified topicid, you can perform TML on the new preprocessed stream by appending
  _preprocessed_processlogic.
  For example, if streamstojoin="stream1,stream2,stream3" and preprocesslogic="min,max,diff", the new streams will be:
  stream1_preprocessed_Min, stream2_preprocessed_Max, stream3_preprocessed_Diff.

RETURNS: None.
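A minimal sketch of a batch preprocessing call; all values are hypothetical placeholders, and since
the function returns None the results are consumed from the preprocesstopic:

     import maadstml

     maadstml.viperpreprocessbatch("your-viper-token", "http://127.0.0.1", 8000,
         topic="iot-mainstream", producerid="producer-id", offset=-1, maxrows=500,
         topicid="1,2,4,5",                    # preprocess several devices at once
         streamstojoin="temperature,humidity",
         preprocesslogic="max,avg",            # order matches the streams
         preprocesstopic="iot-preprocessed",
         saveasarray=1, timedelay=3600, asynctimeout=120)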
**30. maadstml.viperlisttopics(vipertoken,host,port=-999,brokerhost='', brokerport=-999,microserviceid='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where the Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.

*brokerport* : int, optional

- Port on which Kafka is listening.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: A JSON formatted object of all the topics in the Kafka broker.
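For example (token, host and port are hypothetical placeholders):

     import maadstml

     # List every topic in the Kafka broker known to this VIPER instance
     topics = maadstml.viperlisttopics("your-viper-token", "http://127.0.0.1", port=8000)
     print(topics)  # JSON formatted object of all the topics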
**31. maadstml.viperpreprocesscustomjson(VIPERTOKEN,host,port,topic,producerid,offset,jsoncriteria='',rawdataoutput=0,maxrows=0,
                   enabletls=0,delay=100,brokerhost='',brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',preprocesslogic='',
                   preprocessconditions='',identifier='',preprocesstopic='',array=0,saveasarray=0,timedelay=0,asynctimeout=120,
                   usemysql=0,tmlfilepath='',pathtotmlattrs='')**

**Parameters:**

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*producerid* : string, required

- Producer id of the topic.

*offset* : int, required

- Offset to consume from.  Set to -1 if consuming the last offset of the topic.

*jsoncriteria* : string, required

- This is the JSON path to the data you want to consume.  It must have the following format:

            *UID* is the path to the main id.  For example, Patient ID

            *filter* is the path to something that filters the JSONs

            *subtopic* is the path to the subtopics in the json (several paths can be specified)

            *values* is the path to the Values of the subtopics - Subtopic and Value must have a 1-1 match

            *identifiers* is the path to any special identifiers for the subtopics

            *datetime* is the path to the datetime of the message

            *msgid* is the path to any msg id

*For example:*

     jsoncriteria='uid=subject.reference,filter:resourceType=Observation~\
                   subtopics=code.coding.0.code,component.0.code.coding.0.code,component.1.code.coding.0.code~\
                   values=valueQuantity.value,component.0.valueQuantity.value,component.1.valueQuantity.value~\
                   identifiers=code.coding.0.display,component.0.code.coding.0.display,component.1.code.coding.0.display~\
                   datetime=effectiveDateTime~\
                   msgid=id'

*rawdataoutput* : int, optional

- Set to 1 if you want to output the raw data.  Note: this could involve a lot of data and Kafka may refuse to write to the topic.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream

*enabletls* : int, optional

- Set to 1 for TLS encrypted traffic

*delay* : int, optional

- Delay to wait for Kafka to finish writing to the topic

*topicid* : int, optional

- Since you are consuming raw data, this is not needed.  Topicid will be set for you.

*streamstojoin* : string, optional

- This is ignored for raw data.

*preprocesslogic* : string, optional

- Specify your preprocess algorithms.  You can use the aggregate functions: MIN, MAX, AVG, COUNT, COUNTSTR, DIFF,
  DIFFMARGIN, SUM, MEDIAN, VARIANCE, OUTLIERS, OUTLIERSX-Y, VARIED,
  ANOMPROB, ANOMPROBX-Y, CONSISTENCY,
  ENTROPY, AUTOCORR, TREND, IQR (InterQuartileRange), Midhinge, GM (Geometric mean), HM (Harmonic mean), Trimean, CV (Coefficient of Variation),
  Mad (Mean absolute deviation), Skewness, Kurtosis, Spikedetect, Unique, Uniquestr, Timediff.  For Timediff, time should be in this layout: 2006-01-02T15:04:05.
  Timediff returns the difference in seconds between the first date/time and the last datetime.  Avgtimediff returns the
  average time in seconds between consecutive dates.
  Spikedetect uses a Zscore method to detect spikes in the data, using a lag of 5, a standard deviation of 3.5 from the mean, and an influence of 0.5.
  Geodiff returns the distance in kilometers between two lat/long points.
  Unique checks numeric data for duplication.  Returns 1 if there is no data duplication (unique), 0 otherwise.

  Dataage_[UTC offset]_[timetype]: dataage can be used to check the last update time of the data in the data stream against the
  current local time.  You can specify the UTC offset to adjust the current time to match the timezone of the data stream.
  You can specify timetype as millisecond, second, minute, hour, or day.  For example, with Dataage_1_minute, this processtype
  will compare the last timestamp in the data stream to the local UTC time offset +1, compute the time difference
  between the data stream timestamp and the current local time, and return the difference in minutes.  This is a very powerful processtype
  for data quality and data assurance programs for any number of data streams.

  Uniquestr checks string data for duplication.  Returns 1 if there is no data duplication (unique), 0 otherwise.

  Uniquecount checks numeric data for duplication.  Returns the count of unique numbers.

  Uniquestrcount checks string data for duplication.  Returns the count of unique strings.

  CONSISTENCY checks if the data all have consistent data types.  Returns 1 for consistent data types, 0 otherwise.

  Meanci95 or Meanci99 returns a 95% or 99% confidence interval: mean, low, high.

  RAW for no processing.

*preprocessconditions* : string, optional

- Specify any preprocess conditions

*identifier* : string, optional

- Specify any text identifier

*preprocesstopic* : string, optional

- Specify the name of the topic to write the preprocessed results to.

*array* : int, optional

- Ignored for raw data - as jsoncriteria specifies the json path

*saveasarray* : int, optional

- Set to 1 to save as a json array

*timedelay* : int, optional

- Delay to wait for a response from Kafka.

*asynctimeout* : int, optional

- Maximum delay for asyncio in the Python library

*usemysql* : int, optional

- Set to 1 to specify whether MySQL is used to store TMLIDs.  This will be needed to track individual objects.

*tmlfilepath* : string, optional

- Ignored.

*pathtotmlattrs* : string, optional

- Specify any attributes for the TMLID.  Here you can specify OEM, Latitude, Longitude, and Location JSON paths:

     pathtotmlattrs='oem=id,lat=subject.reference,long=component.0.code.coding.0.display,location=component.1.valueQuantity.value'

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where the Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.

*brokerport* : int, optional

- Port on which Kafka is listening.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: null
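A minimal sketch of consuming custom raw JSON, reusing the documented jsoncriteria example with two of its
subtopics; connection details and topic names are hypothetical placeholders:

     import maadstml

     # JSON paths follow the jsoncriteria format documented above
     jsoncriteria = ('uid=subject.reference,filter:resourceType=Observation~'
                     'subtopics=code.coding.0.code,component.0.code.coding.0.code~'
                     'values=valueQuantity.value,component.0.valueQuantity.value~'
                     'identifiers=code.coding.0.display,component.0.code.coding.0.display~'
                     'datetime=effectiveDateTime~'
                     'msgid=id')

     maadstml.viperpreprocesscustomjson("your-viper-token", "http://127.0.0.1", 8000,
         topic="fhir-raw", producerid="producer-id", offset=-1,
         jsoncriteria=jsoncriteria, maxrows=1000,
         preprocesslogic="avg,avg",            # one operation per subtopic
         preprocesstopic="fhir-preprocessed",
         saveasarray=1, usemysql=1)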
**32. maadstml.viperstreamcorr(vipertoken,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                 brokerport=-999,microserviceid='',topicid=-999,streamstojoin='',
                                 identifier='',preprocesstopic='',description='',array=0, wherecondition='',
                                 wheresearchkey='PreprocessIdentifier',rawdataoutput=1,threshhold=0,pvalue=0,
                                 identifierextractpos="",topcorrnum=5,jsoncriteria='',tmlfilepath='',usemysql=0,
                                 pathtotmlattrs='',mincorrvectorlen=5,writecorrstotopic='',outputtopicnames=0,nlp=0,
                                 correlationtype='',docrosscorr=0)**

**Parameters:** Perform stream correlations.

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*producerid* : string, required

- Producer id of the topic.

*wherecondition* : string, optional

- Specify the where condition.  For example, if you want to filter the data on "males", enter males.  You can
  specify an exact match by using [males], a substring match by using (males), or a "not includes" match by using {males}

*correlationtype* : string, optional

- Specify the type of correlation you want to do.  Valid values are: kendall, spearman, pearson, ks.
  You can specify some, or all (leave blank and ALL will be done), separated by commas.  ks=Kolmogorov-Smirnov test.

*docrosscorr* : int, optional

- Set to 1 if you want to do cross-correlations with 4 variables instead of the normal 2 variables.

*wheresearchkey* : string, optional

- Specify the where search key.  This key will be searched for the wherecondition value, e.g. "males".

*description* : string, optional

- Specify a text description for this correlation.

*identifierextractpos* : string, optional

- If doing correlation on data you have already preprocessed, you can extract the identifier from the identifier field
  in the preprocessed json.

*offset* : int, required

- Offset to consume from.  Set to -1 if consuming the last offset of the topic.

*mincorrvectorlen* : int, optional

- Minimum length of the data variables you are correlating.

*topcorrnum* : int, optional

- Top number of sorted correlations to output

*threshhold* : int, optional

- Threshold for the correlation coefficient.  Must range from 0-100.  All correlations will be greater than this number.

*pvalue* : int, optional

- Threshold for the p-values.  Must range from 0-100.  All p-values will be below this number.

*writecorrstotopic* : string, optional

- This is the name of the topic that VIPER will write "individual" correlation results to.

*outputtopicnames* : int, optional

- Set to 1 if you want to write out topic names.

*nlp* : int, optional

- Set to 1 if you want to correlate TEXT data by using natural language processing (NLP).

*jsoncriteria* : string, required

- This is the JSON path to the data you want to consume.  It must have the following format:

            *UID* is the path to the main id.  For example, Patient ID

            *filter* is the path to something that filters the JSONs

            *subtopic* is the path to the subtopics in the json (several paths can be specified)

            *values* is the path to the Values of the subtopics - Subtopic and Value must have a 1-1 match

            *identifiers* is the path to any special identifiers for the subtopics

            *datetime* is the path to the datetime of the message

            *msgid* is the path to any msg id

*For example:*

     jsoncriteria='uid=subject.reference,filter:resourceType=Observation~\
                   subtopics=code.coding.0.code,component.0.code.coding.0.code,component.1.code.coding.0.code~\
                   values=valueQuantity.value,component.0.valueQuantity.value,component.1.valueQuantity.value~\
                   identifiers=code.coding.0.display,component.0.code.coding.0.display,component.1.code.coding.0.display~\
                   datetime=effectiveDateTime~\
                   msgid=id'

*rawdataoutput* : int, optional

- Set to 1 if you want to output the raw data.  Note: this could involve a lot of data and Kafka may refuse to write to the topic.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream

*enabletls* : int, optional

- Set to 1 for TLS encrypted traffic

*delay* : int, optional

- Delay to wait for Kafka to finish writing to the topic

*topicid* : int, optional

- Since you are consuming raw data, this is not needed.  Topicid will be set for you.

*streamstojoin* : string, optional

- This is ignored for raw data.

*preprocesslogic* : string, optional

- Specify your preprocess algorithms.  For example: min, max, variance, trend, anomprob, outliers, etc.

*preprocessconditions* : string, optional

- Specify any preprocess conditions

*identifier* : string, optional

- Specify any text identifier

*preprocesstopic* : string, optional

- Specify the name of the topic to write the preprocessed results to.

*array* : int, optional

- Ignored for raw data - as jsoncriteria specifies the json path

*saveasarray* : int, optional

- Set to 1 to save as a json array

*timedelay* : int, optional

- Delay to wait for a response from Kafka.

*asynctimeout* : int, optional

- Maximum delay for asyncio in the Python library

*usemysql* : int, optional

- Set to 1 to specify whether MySQL is used to store TMLIDs.  This will be needed to track individual objects.

*tmlfilepath* : string, optional

- Ignored.

*pathtotmlattrs* : string, optional

- Specify any attributes for the TMLID.  Here you can specify OEM, Latitude, Longitude, and Location JSON paths:

     pathtotmlattrs='oem=id,lat=subject.reference,long=component.0.code.coding.0.display,location=component.1.valueQuantity.value'

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where the Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.

*brokerport* : int, optional

- Port on which Kafka is listening.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

RETURNS: null
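A minimal sketch of a correlation run over preprocessed streams; all values are hypothetical placeholders.
It keeps the top 5 Pearson and Spearman correlations above 0.70 with p-values below 5%:

     import maadstml

     maadstml.viperstreamcorr("your-viper-token", "http://127.0.0.1", 8000,
         topic="iot-preprocessed", producerid="producer-id", offset=-1, maxrows=500,
         threshhold=70, pvalue=5,              # correlation > 0.70, p-value < 5%
         topcorrnum=5, mincorrvectorlen=5,
         correlationtype="pearson,spearman",
         writecorrstotopic="iot-corr-individual",
         preprocesstopic="iot-corr-results",
         outputtopicnames=1, description="correlate preprocessed IoT streams")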
**33. maadstml.viperstreamcluster(vipertoken,host,port,topic,producerid,offset=-1,maxrows=0,enabletls=1,delay=100,brokerhost='',
                                          brokerport=-999,microserviceid='',topicid=-999,iterations=1000, numclusters=8,
                                          distancealgo=1,description='',rawdataoutput=0,valuekey='',filterkey='',groupkey='',
                                          identifier='',datetimekey='',valueidentifier='',msgid='',valuecondition='',
                                          identifierextractpos='',preprocesstopic='',
                                          alertonclustersize=0,alertonsubjectpercentage=50,sendalertemailsto='',emailfrequencyinseconds=0,
                                          companyname='',analysisdescription='',identifierextractposlatitude=-1,
                                          identifierextractposlongitude=-1,identifierextractposlocation=-1,
                                          identifierextractjoinedidentifiers=-1,pdfformat='',minimumsubjects=2)**

**Parameters:** Perform stream clustering.

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where the Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.

*brokerport* : int, optional

- Port on which Kafka is listening.

*alertonsubjectpercentage* : int, optional

- Set a value between 0-100 that specifies the percentage of subjects that exceed a threshold.

*identifierextractjoinedidentifiers* : int, optional

- Position of additional text in the identifier field.

*pdfformat* : string, optional

- Specify the format text of the PDF to generate and email to users.  You can set title, signature, showpdfemaillist, and charttitle.

     pdfformat="title=This is a Transactional Machine Learning Auto-Generated PDF for Cluster Analysis For OTICS|signature=\
     Created by: OTICS, Toronto|showpdfemaillist=1|charttitle=Chart Shows Clusters of Patients with Similar Symptoms"

*minimumsubjects* : int, optional

- Specify the minimum number of subjects in the cluster analysis.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream

*enabletls* : int, optional

- Set to 1 for TLS encrypted traffic

*delay* : int, optional

- Delay to wait for Kafka to finish writing to the topic

*producerid* : string, required

- Producer id of the topic.

*topicid* : int, optional

- Ignored

*iterations* : int, optional

- Number of iterations to compute clusters

*numclusters* : int, optional

- Number of clusters you want.  Maximum is 20.

*distancealgo* : int, optional

- Set to 1 for Euclidean, or 2 for EuclideanSquared.

*valuekey* : string, required

- JSON path to the value to cluster on

*filterkey* : string, optional

- JSON path to filter on.  Ex. Preprocesstype=Pearson gets the value from the key Preprocesstype and checks for value=Pearson

*groupkey* : string, optional

- JSON path to group on a key.  Ex. Topicid, to group on TMLIDs

*valueidentifier* : string, optional

- JSON path to the text value IDs you correlated.

*msgid* : string, optional

- JSON path for a unique message id

*valuecondition* : string, optional

- A condition to filter numeric values on.  Ex. valuecondition="> .5"; if valuekey is correlations, then all correlations > 0.5 are taken.

*identifierextractpos* : string, optional

- The location of data to extract from the Identifier field.  Ex. identifierextractpos="1,2" will extract data from positions 1 and 2.

*preprocesstopic* : string, required

- Topic to produce the results to

*alertonclustersize* : int, optional

- Size of the cluster to alert on.  Ex. if this is 100, then when any cluster has more than 100 elements an email is sent.

*sendalertemailsto*: string, optional

- List of email addresses to send alerts to

*emailfrequencyinseconds* : int, optional

- Seconds between emails.  Ex. set to 3600, so emails will be sent every 1 hour if the alert condition is met.

*companyname* : string, optional

- Your company name

*analysisdescription* : string, optional

- A detailed description of the analysis.  This will be added to the PDF.

*identifierextractposlatitude* : int, optional

- Position for latitude in the Identifier field

*identifierextractposlongitude* : int, optional

- Position for longitude in the Identifier field

*identifierextractposlocation* : int, optional

- Position for location in the Identifier field

RETURNS: null
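A minimal sketch of a clustering run; every value, including the JSON paths for valuekey and groupkey, is a
hypothetical placeholder for your own data layout:

     import maadstml

     maadstml.viperstreamcluster("your-viper-token", "http://127.0.0.1", 8000,
         topic="iot-corr-results", producerid="producer-id", offset=-1, maxrows=1000,
         iterations=1000, numclusters=8, distancealgo=1,   # 1 = Euclidean
         valuekey="correlation",                # hypothetical JSON path to cluster on
         filterkey="Preprocesstype=Pearson", groupkey="Topicid",
         valuecondition="> .5",                 # keep correlations above 0.5
         preprocesstopic="iot-cluster-results",
         alertonclustersize=100, sendalertemailsto="ops@mycompany.com",
         emailfrequencyinseconds=3600, companyname="mycompany", minimumsubjects=2)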
**34. maadstml.vipersearchanomaly(vipertoken,host,port,topic,producerid,offset,jsoncriteria='',rawdataoutput=0,maxrows=0,enabletls=0,delay=100,
                       brokerhost='',brokerport=-999,microserviceid='',topicid=-999,identifier='',preprocesstopic='',
                       timedelay=0,asynctimeout=120,searchterms='',entitysearch='',tagsearch='',checkanomaly=1,testtopic='',
                       includeexclude=1,anomalythreshold=0,sendanomalyalertemail='',emailfrequency=3600)**

**Parameters:** Perform anomaly detection on user searches

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*topic* : string, required

- Topic containing the raw data to consume.

*port* : int, required

- Port on which VIPER is listening.

*brokerhost* : string, optional

- Address where the Kafka broker is running - if none is specified, the Kafka broker address in the VIPER.ENV file will be used.

*brokerport* : int, optional

- Port on which Kafka is listening.

*jsoncriteria* : string, optional

- Enter the JSON path to the search fields.

*anomalythreshold* : int, optional

- Threshold used to determine whether a search differs from its peer group.  This is a number between 0-100.  The lower the number, the "more" the search differs from the peer group and the more likely it is anomalous.

*includeexclude* : int, optional

- Set to 1 if you want the search terms included in the user searches, 0 otherwise.

*sendanomalyalertemail* : string, optional

- List of email addresses to send alerts to; separate the list by commas.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*maxrows* : int, optional

- Number of offsets or percentage to roll back the data stream.

*enabletls* : int, optional

- Set to 1 for TLS encrypted traffic.

*delay* : int, optional

- Delay to wait for Kafka to finish writing to the topic.

*producerid* : string, required

- Producer id of the topic.

*emailfrequency* : int, optional

- Frequency, in seconds, between alert emails.

*testtopic* : string, optional

- Ignored.

*preprocesstopic* : string, required

- Topic to produce results to.

*tagsearch* : string, optional

- Search for part-of-speech tags in the search.  You can enter: 'superlative,noun,interjection,verb,pronoun'.

*entitysearch* : string, optional

- Search for entities in the search.  You can enter: 'person,gpe', where gpe=Geo-political entity.

*searchterms* : string, optional

- You can specify your own search terms.  Separate the list by commas.

*topicid* : int, optional

- Ignored.

*identifier* : string, optional

- Identifier text.

*checkanomaly* : int, optional

- Set to 1 to check for search anomalies.

*rawdataoutput* : int, optional

- Ignored.

RETURNS: null
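*For example*, a minimal sketch of an anomaly-detection call on a stream of user searches.  The topic names, email address, and offset of -1 are illustrative assumptions:

     import maadstml

     VIPERTOKEN = "your-viper-token"           # placeholder

     maadstml.vipersearchanomaly(VIPERTOKEN, "http://127.0.0.1", 8000,
          "user-searches",                     # placeholder topic with raw search data
          "searchanomalyproducer",             # producer id
          -1,                                  # offset (assumed: read from the latest offset)
          searchterms="refund,outage,cancel",  # your own comma-separated terms
          entitysearch="person,gpe",
          tagsearch="noun,verb",
          checkanomaly=1,
          includeexclude=1,
          anomalythreshold=20,                 # low value = very unlike the peer group
          preprocesstopic="search-anomaly-results",
          sendanomalyalertemail="alerts@mycompany.com",
          emailfrequency=3600)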
**35. maadstml.vipermirrorbrokers(VIPERTOKEN,host,port,brokercloudusernamepassfrom,brokercloudusernamepassto,
         enabletlsfrom,enabletlsto,
         replicationfactorfrom,replicationfactorto,compressionfrom,compressionto,
         saslfrom,saslto,partitions,brokerlistfrom,brokerlistto,
         topiclist,asynctimeout=300,microserviceid="",servicenamefrom="broker",
         servicenameto="broker",partitionchangeperc=0,replicationchange=0,filter="",rollbackoffset=0)**

**Parameters:** Perform Data Stream migration across brokers - fast and simple.

*VIPERTOKEN* : string, required

- A token given to you by the VIPER administrator.

*host* : string, required

- Indicates the url where the VIPER instance is located and listening.

*port* : int, required

- Port on which VIPER is listening.

*brokercloudusernamepassfrom* : string, required

- A comma separated list of source broker username:password.  For multiple brokers separate with commas, for example for 3 brokers:
  username:password,username:password,username:password

*brokercloudusernamepassto* : string, required

- A comma separated list of destination broker username:password.  For multiple brokers separate with commas, for example for 3 brokers:
  username:password,username:password,username:password.  The number of source and destination brokers must match.

*enabletlsfrom* : string, required

- A colon separated list of whether source brokers require TLS: 1=TLS, 0=NoTLS.  For multiple brokers separate with colons,
  for example for 3 brokers: 1:0:1.  Some brokers may be On-Prem and do not need TLS.

*enabletlsto* : string, required

- A colon separated list of whether destination brokers require TLS: 1=TLS, 0=NoTLS.  For multiple brokers separate with colons,
  for example for 3 brokers: 1:0:1.  Some brokers may be On-Prem and do not need TLS.

*replicationfactorfrom* : string, optional

- A colon separated list of the replication factor of source brokers.  For multiple brokers separate with colons,
  for example for 3 brokers: 3:4:3, or leave blank to let VIPER decide.

*replicationfactorto* : string, optional

- A colon separated list of the replication factor of destination brokers.  For multiple brokers separate with colons,
  for example for 3 brokers: 3:4:3, or leave blank to let VIPER decide.

*compressionfrom* : string, required

- A colon separated list of the compression type of source brokers: snappy, gzip, lz4.  For multiple brokers separate with colons,
  for example for 3 brokers: snappy:snappy:gzip.

*compressionto* : string, required

- A colon separated list of the compression type of destination brokers: snappy, gzip, lz4.  For multiple brokers separate with colons,
  for example for 3 brokers: snappy:snappy:gzip.

*saslfrom* : string, required

- A colon separated list of the SASL type of source brokers: None, Plain, SCRAM256, SCRAM512.  For multiple brokers separate with colons,
  for example for 3 brokers: PLAIN:SCRAM256:SCRAM512.

*saslto* : string, required

- A colon separated list of the SASL type of destination brokers: None, Plain, SCRAM256, SCRAM512.  For multiple brokers separate with colons,
  for example for 3 brokers: PLAIN:SCRAM256:SCRAM512.

*partitions* : string, optional

- If you are manually migrating topics you will need to specify the partitions of the topics in *topiclist*.  Otherwise, VIPER
  will automatically find topics and their partitions on the broker for you - this is recommended.

*brokerlistfrom* : string, required

- A list of source brokers: host:port.  For multiple brokers separate with commas, for example for 3 brokers: host:port,host:port,host:port.

*brokerlistto* : string, required

- A list of destination brokers: host:port.  For multiple brokers separate with commas, for example for 3 brokers: host:port,host:port,host:port.

*topiclist* : string, optional

- You can manually specify topics to migrate; separate multiple topics with commas.  Otherwise, VIPER will automatically find topics
  on the broker for you - this is recommended.

*partitionchangeperc* : number, optional

- You can increase or decrease partitions on the destination broker by specifying a percentage between 0-100 (increase) or -100-0 (decrease).
  The minimum partition count will always be 1.

*replicationchange* : int, ignored for now

- You can increase or decrease the replication factor on the destination broker by specifying a positive or negative number.
  The minimum replication factor will always be 2.

*filter* : string, optional

- You can specify a filter to choose only those topics that satisfy the filter.  Filters must have the
  following format: "searchstring1,searchstring2,searchstring3,..:Logic=0 or 1:search position: 0,1,2",
  where Logic 0=AND, 1=OR, and search position 0=BeginsWith, 1=Any, 2=EndsWith.

*asynctimeout* : number, optional

- Specifies the timeout, in seconds, for the Python connection.

*microserviceid* : string, optional

- If you are routing connections to VIPER through a microservice then indicate it here.

*servicenamefrom* : string, optional

- You can specify the name of the source brokers.

*servicenameto* : string, optional

- You can specify the name of the destination brokers.

*rollbackoffset* : ignored
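*For example*, a minimal sketch of migrating every topic from a three-node source cluster to a three-node destination cluster.  All hostnames and credentials are placeholders, and the blank arguments let VIPER discover topics, partitions, and replication factors, as recommended above:

     import maadstml

     VIPERTOKEN = "your-viper-token"                       # placeholder

     maadstml.vipermirrorbrokers(VIPERTOKEN, "http://127.0.0.1", 8000,
          "u1:p1,u2:p2,u3:p3",                             # source username:password list
          "u1:p1,u2:p2,u3:p3",                             # destination username:password list
          "1:1:1", "1:1:1",                                # TLS enabled on all source/destination brokers
          "", "",                                          # replication factors: let VIPER decide
          "snappy:snappy:snappy", "snappy:snappy:snappy",  # compression per broker
          "SCRAM512:SCRAM512:SCRAM512",                    # SASL types, source
          "SCRAM512:SCRAM512:SCRAM512",                    # SASL types, destination
          "",                                              # partitions: let VIPER find them
          "src1:9092,src2:9092,src3:9092",                 # source brokers (placeholders)
          "dst1:9092,dst2:9092,dst3:9092",                 # destination brokers (placeholders)
          "",                                              # topiclist: blank = migrate all topics
          asynctimeout=600)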
**36. maadstml.vipernlp(filename,maxsummarywords,maxkeywords)**

**Parameters:** Perform NLP summarization of PDFs

*filename* : string, required

- Filename of the PDF to summarize.

*maxsummarywords* : int, required

- Maximum number of words in the summary.

*maxkeywords* : int, required

- Maximum number of keywords to extract.

RETURNS: JSON string of the summary.

**37. maadstml.viperchatgpt(openaikey,texttoanalyse,query,temperature,modelname)**

**Parameters:** Start a conversation with ChatGPT

*openaikey* : string, required

- OpenAI API key.

*texttoanalyse* : string, required

- Text you want ChatGPT to analyse.

*query* : string, required

- Prompts for ChatGPT.  For example, "What are key points in this text? What are the concerns or issues?"

*temperature* : float, required

- Temperature for ChatGPT; must be between 0-1, i.e. 0.7.

*modelname* : string, required

- ChatGPT model to use.  For example: text-davinci-002, text-curie-001, text-babbage-001.

RETURNS: ChatGPT response.
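*For example*, a minimal sketch that chains the two calls: summarize a PDF, then ask ChatGPT about the summary.  The filename and OpenAI key are placeholders:

     import maadstml

     # Summarize the PDF to at most 300 words and extract 10 keywords
     summary = maadstml.vipernlp("report.pdf", 300, 10)    # returns a JSON string

     # Ask ChatGPT to analyse the summary
     response = maadstml.viperchatgpt("sk-your-openai-key", summary,
          "What are the key points in this text? What are the concerns or issues?",
          0.7, "text-davinci-002")
     print(response)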
**38. maadstml.viperexractpdffields(pdffilename)**

**Parameters:** Extract data from a PDF

*pdffilename* : string, required

- PDF filename.

RETURNS: JSON of the PDF, and writes JSON and XML files of the PDF to disk.

**39. maadstml.viperexractpdffieldbylabel(pdffilename,labelname,arcotype)**

**Parameters:** Extract data from a PDF by PDF labels

*pdffilename* : string, required

- PDF filename.

*labelname* : string, required

- Label name in the PDF to search for.

*arcotype* : string, required

- Layout tag in the PDF, i.e. LTTextLineHorizontal.

RETURNS: Value of the labelname - if any.

**40. maadstml.pgptingestdocs(docname,doctype,pgptip,pgptport,pgptendpoint)**

**Parameters:** Ingest a document into PrivateGPT

*docname* : string, required

- A full path to a PDF or text file.

*doctype* : string, required

- This can be: binary, or text.

*pgptip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*pgptport* : string, required

- Your container port - this is usually: 8001.  This will depend on the docker run port forwarding command.  See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*pgptendpoint* : string, required

- This must be: /v1/ingest

RETURNS: JSON containing document details, or ERROR.
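*For example*, a minimal sketch of ingesting a local PDF into the PrivateGPT container, using the default IP, port, and endpoint noted above (the file path is a placeholder):

     import maadstml

     result = maadstml.pgptingestdocs("/path/to/mydocument.pdf", "binary",
          "http://127.0.0.1", "8001", "/v1/ingest")
     print(result)    # JSON with document details, or ERROR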
**41. maadstml.pgptgetingestedembeddings(docname,ip,port,endpoint)**

**Parameters:** Retrieve the embeddings of an ingested document from PrivateGPT

*docname* : string, required

- A full path to a PDF or text file.

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*port* : string, required

- Your container port - this is usually: 8001.  This will depend on the docker run port forwarding command.  See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*endpoint* : string, required

- This must be: /v1/ingest/list

RETURNS: Three variables: docids, docstr, docidsstr; these are the embeddings related to docname.  Or, ERROR.

**42. maadstml.pgptchat(prompt,context,docfilter,port,includesources,ip,endpoint)**

**Parameters:** Send a prompt to PrivateGPT

*prompt* : string, required

- A prompt for PrivateGPT.

*context* : bool, required

- This can be True or False.  If True, PrivateGPT will use context; if False, it will not.

*docfilter* : string array, required

- This is docidsstr, and can be retrieved from pgptgetingestedembeddings.  If context=True and docfilter is empty, then ALL documents are used for context.

*port* : string, required

- Your container port - this is usually: 8001.  This will depend on the docker run port forwarding command.  See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*includesources* : bool, required

- This can be True or False.  If True, with context, PrivateGPT will return the sources in the response.

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*endpoint* : string, required

- This must be: /v1/completions

RETURNS: The response from PrivateGPT, or ERROR.

**43. maadstml.pgptdeleteembeddings(docids,ip,port,endpoint)**

**Parameters:** Delete document embeddings from PrivateGPT

*docids* : string array, required

- An array of doc ids.  This can be retrieved from pgptgetingestedembeddings.

*port* : string, required

- Your container port - this is usually: 8001.  This will depend on the docker run port forwarding command.  See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*endpoint* : string, required

- This must be: /v1/ingest/

RETURNS: Null if successful, or ERROR.

**44. maadstml.pgpthealth(ip,port,endpoint)**

**Parameters:** Check that the PrivateGPT server is running

*port* : string, required

- Your container port - this is usually: 8001.  This will depend on the docker run port forwarding command.  See: https://github.com/smaurice101/raspberrypi/tree/main/privategpt

*ip* : string, required

- Your container IP - this is usually: http://127.0.0.1

*endpoint* : string, required

- This must be: /health

RETURNS: A JSON of OK if the PrivateGPT server is running, or ERROR.
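*For example*, a minimal sketch of the full PrivateGPT round trip using functions 41-44: check the server, fetch the embeddings for an ingested file, chat with that document as context, then delete its embeddings.  The document path is a placeholder; IPs, ports, and endpoints are the defaults noted above:

     import maadstml

     # 44. Confirm the PrivateGPT server is up before doing anything else
     print(maadstml.pgpthealth("http://127.0.0.1", "8001", "/health"))

     # 41. Fetch the embeddings for a previously ingested document
     docids, docstr, docidsstr = maadstml.pgptgetingestedembeddings(
          "/path/to/mydocument.pdf", "http://127.0.0.1", "8001", "/v1/ingest/list")

     # 42. Chat with PrivateGPT, restricting the context to that document
     answer = maadstml.pgptchat("Summarize the key points in this document.",
          True,          # use context
          docidsstr,     # filter context to this document's embeddings
          "8001",
          True,          # include sources in the response
          "http://127.0.0.1", "/v1/completions")
     print(answer)

     # 43. Remove the document's embeddings when done
     maadstml.pgptdeleteembeddings(docids, "http://127.0.0.1", "8001", "/v1/ingest/")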
**45. maadstml.videochatloadresponse(url,port,filename,prompt,responsefolder='videogpt_response',temperature=0.2,max_output_tokens=512)**

**Parameters:** Analyse a video with video ChatGPT

*url* : string, required

- IP that video ChatGPT is listening on in the container - this is usually: http://127.0.0.1

*port* : string, required

- Port that video ChatGPT is listening on in the container, i.e. 7800.

*filename* : string, required

- The video filename to analyse, i.e. with an mp4 extension.

*prompt* : string, required

- The prompt for video ChatGPT, i.e. "What is the video about? Is there anything strange in the video?"

*responsefolder* : string, optional

- The folder you want video ChatGPT to write responses to.

*temperature* : float, optional

- Temperature determines how conservative video ChatGPT is, i.e. closer to 0 is very conservative in responses.

*max_output_tokens* : int, optional

- max_output_tokens determines the maximum number of tokens to return.

RETURNS: The file name the response was written to by video ChatGPT.
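*For example*, a minimal sketch of sending a video to the video ChatGPT container; the filename and prompt are illustrative:

     import maadstml

     responsefile = maadstml.videochatloadresponse("http://127.0.0.1", "7800",
          "warehouse-camera.mp4",
          "What is the video about? Is there anything strange in the video?",
          responsefolder="videogpt_response",
          temperature=0.2,            # conservative responses
          max_output_tokens=512)
     print(responsefile)              # file the response was written to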
    "bugtrack_url": null,
    "license": "MIT License",
    "summary": "Multi-Agent Accelerator for Data Science (MAADS): Transactional Machine Learning",
    "version": "3.48",
    "project_urls": {
        "Homepage": "https://github.com/smaurice101/transactionalmachinelearning"
    },
    "split_keywords": [
        "genai",
        " multi-agent",
        " transactional machine learning",
        " artificial intelligence",
        " chatgpt",
        " generative ai",
        " privategpt",
        " data streams",
        " data science",
        " optimization",
        " prescriptive analytics",
        " machine learning",
        " automl",
        " auto-ml",
        " artificial intelligence",
        " predictive analytics",
        " advanced analytics"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "5a70eba6fdce7b767fef744f4f2101a46d8597ce1fdfbbdad62cc8e25e087d24",
                "md5": "2499e82fa05688ebcd9bdb0b53ba0c99",
                "sha256": "dfd018392df46e6a5b8d1376d638e9d57fff1886e1c38506bbe0c4fa355eaad5"
            },
            "downloads": -1,
            "filename": "maadstml-3.48-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "2499e82fa05688ebcd9bdb0b53ba0c99",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 53613,
            "upload_time": "2024-04-17T14:20:19",
            "upload_time_iso_8601": "2024-04-17T14:20:19.680672Z",
            "url": "https://files.pythonhosted.org/packages/5a/70/eba6fdce7b767fef744f4f2101a46d8597ce1fdfbbdad62cc8e25e087d24/maadstml-3.48-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "92930c46f7c4f04885334c4d5a22a6088f79d104f95314ce28099fe9270b0091",
                "md5": "07c59cecbb8d67ca86739d44400db2c9",
                "sha256": "0252fbc6c34325ee531aebbbe1a6027c6af75473ed60debce7379f9f03708fcc"
            },
            "downloads": -1,
            "filename": "maadstml-3.48.tar.gz",
            "has_sig": false,
            "md5_digest": "07c59cecbb8d67ca86739d44400db2c9",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": null,
            "size": 96727,
            "upload_time": "2024-04-17T14:20:24",
            "upload_time_iso_8601": "2024-04-17T14:20:24.019509Z",
            "url": "https://files.pythonhosted.org/packages/92/93/0c46f7c4f04885334c4d5a22a6088f79d104f95314ce28099fe9270b0091/maadstml-3.48.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2024-04-17 14:20:24",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "smaurice101",
    "github_project": "transactionalmachinelearning",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "lcname": "maadstml"
}
        
Elapsed time: 0.24365s