percolation

Name: percolation
Version: 0.2.dev0
Home page: https://github.com/ttm/percolation
Summary: percolation is a python package for harnessing open linked social data
Author: Renato Fabbri
License: MIT
Upload time: 2019-05-01 21:04:37
Keywords: complexity networks human interaction physics data text mining analysis visualization music physics synthesis toolbox semantic web linked data owl ontology social participation anthropological physics
            # percolation
the percolation python package for harnessing open linked social data

## install with:
If you are running Ubuntu, you may want to install pygraphviz from the standard (apt) package manager.

    $ pip install percolation

or

    $ python setup.py install

For greater control of customization (and debugging), clone the repo and install with pip -e:

    $ git clone https://github.com/ttm/percolation.git
    $ pip install -e <path_to_repo>

This install method is especially useful when reloading modified modules in subsequent runs of percolation
(usually with the standard importlib).
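For instance, after editing a module in the cloned repo, an already-running interpreter session can pick the change up with importlib (a minimal sketch; here only the top-level package is refreshed):

```python
import importlib

import percolation as P

# ... edit files in the cloned repo, then refresh the already-imported package:
P = importlib.reload(P)
```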

Percolation may use the social, music and visuals packages to enable anthropological physics experiments and social harnessing:
- https://github.com/ttm/social
- https://github.com/ttm/music
- https://github.com/ttm/visuals


## core features
  - Provide routines for percolation in social systems through experiments and processes.
  - Ease knowledge about the networked self.
  - Analyses of social systems through textual, topological and temporal statistics within a typology framework of agents and networks.
  - Art and games from social structures, such as music and animation.
  - Integration of resources through RDF data and OWL ontologies.
  - Cross-provenance resource recommendation, extending facilities from the Participation package.
  - WWW integration to provide data and media.

## coding conventions
A function name has a verb if the function changes the state of initialized objects; if it only "returns something", its name has no verb.
This rule extends to the cases where, instead of being returned, triples are added to the percolation\_graph.
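For example (hypothetical names, only to illustrate the convention):

```python
def triples(entity):
    """No verb: only returns something, no state change."""
    return [(entity, "rdf:type", "po:Participant")]

def add_triples(graph, triples_):
    """Verb: changes the state of an initialized graph."""
    for triple in triples_:
        graph.add(triple)
```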

Classes, functions and variables are written in CamelCase, headlessCamelCase and lowercase, respectively.
Underscores are used only in variable names where the concatenated words would be unreadable (usually because the resulting name is long).

The code is the documentation. Code should be very readable, to avoid writing unnecessary documentation and duplicating routine representations. This amounts to using docstrings to give context to the objects, or omitting the docstrings altogether.

Tasks may issue c("some status message") calls, which are printed together with the time interval in seconds between P.check calls.
These messages are turned off by setting P.QUIET=True or by calling P.silence(), which just sets P.QUIET=True.
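A minimal sketch of the behavior described above (not the actual implementation of P.utils.check):

```python
import time

QUIET = False
_last_check = [time.time()]

def check(message):
    """Print a status message and the seconds elapsed since the last check,
    unless QUIET is set."""
    now = time.time()
    if not QUIET:
        print("{} ({:.3f}s since last check)".format(message, now - _last_check[0]))
    _last_check[0] = now

c = check
c("loading triples")  # e.g. prints: loading triples (0.000s since last check)
```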

The usual variables in scripts are: P for percolation, NS for P.rdf.NS for namespace, a for NS.rdf.type, c for P.utils.check, S for social, M for music, V for visuals, n for numpy, p for pylab, r for rdflib, x for networkx
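In a script header, these aliases typically look like this (a sketch; which imports apply depends on what is installed locally):

```python
import numpy as n
import pylab as p
import rdflib as r
import networkx as x
import percolation as P

NS = P.rdf.NS      # namespaces
a = NS.rdf.type    # shorthand for rdf:type
c = P.utils.check  # timed status messages

# optionally, if the companion packages are installed:
# import social as S, music as M, visuals as V
```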

The file cureimport.py in newtests avoids cluttering the header of the percolation file while hacking on the framework. When using the Python interpreter, subsequent runs of scripts do not reload (or raise an error) with importlib if the prior error happened on load. Just load it first: import cureimport, percolation as P, etc.

The variable P.percolation\_graph is a ConjunctiveGraph with all execution state metadata, where each variable value is stored as a plain value, a bag (unordered, e.g. word sizes) or a collection (ordered, e.g. principal components).
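A sketch of what this looks like at the rdflib level (the namespace and values are illustrative, and the unordered case is simplified to repeated triples):

```python
from rdflib import ConjunctiveGraph, Namespace, BNode, Literal
from rdflib.collection import Collection

g = ConjunctiveGraph()
EX = Namespace("http://example.org/percolation#")  # illustrative namespace

# unordered values (e.g. word sizes): repeated triples on the same property
for size in (3, 7, 5):
    g.add((EX.session1, EX.wordSize, Literal(size)))

# ordered values (e.g. principal components): an rdf collection (linked list)
components = BNode()
Collection(g, components, [Literal(v) for v in (0.62, 0.21, 0.09)])
g.add((EX.session1, EX.principalComponents, components))
```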

Every feature should be related to at least one legacy/ outline.

Routines should be oriented towards making or navigating percolation graph paths, directly or through numeric computation and the rendering of new triples, in Open Linked Social Data and in external resources such as the DBpedia sparql endpoint: http://dbpedia.org/sparql
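As a sketch of reaching such an external resource, a query against the DBpedia endpoint with SPARQLWrapper (not necessarily the helpers in rdf/sparql.py):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

endpoint = SPARQLWrapper("http://dbpedia.org/sparql")
endpoint.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
        <http://dbpedia.org/resource/Percolation> dbo:abstract ?abstract .
        FILTER (lang(?abstract) = "en")
    }
""")
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["abstract"]["value"])
```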

### package structure
Data and metadata are kept in P.percolation\_graph = rdflib.ConjunctiveGraph(),
which is persistent across runs in the system, is initialized by bootstrap.py,
and is developed by the user with rdf or automatically while other percolation tasks are run.
The statistics module has routines for obtaining statistics from data, which are applied
to data in measures.
The analyses module makes (qualitative) assertions about the measures in social structures.
The utils module eases file navigation and sharing in the local system and on the web, keeps the percolation status register, and holds small features that fit nowhere else.
The help module has some directions on percolation usage, while the legacy module has diverse usage outlines.

#### the modules are: 
bootstrap.py for PercolationServer and the canonical startup

legacy/\* for standard usage outlines, analyses and media rendering from legacy data (Open Linked Social Data)
- harnessing/\*.py for percolative procedures (e.g. experiments), resource recommendation, self-knowledge and information collection and diffusion.
- mediaRendering/ for general output of media (music.py, image.py, animation.py, table.py, game.py).
- rdf 
- analyses/\*.py for canonical percolation analysis of structures, resulting in assertions and data endorsements
- measures/\*.py for measurement routines, data structures and values from initial data
- triples/\*.py for triples with information about files and notes
  - datasets.py for triples on the datasets of open linked social data with local and remote filenames
  - linksets.py for triples that link datasets (e.g. irc:Participant#hybrid po:sameAs fb:Participant#renato.fabbri)
  - enrichments.py for hand notes and other structures for enrichment of the percolation status (to be translated to rdf)
  - notes.py for hand notes and other structures for enrichment of the percolation status (to be translated to rdf)
  - software.py for triples of software and ontologies pertinent to percolation environment

rdf/\* for rdf data management
- ontology.py for triples and organization of the participation ontology (po), an umbrella ontology
- reasoning.py for reasoning on specific rdfs and owl rules, to enhance performance and for benchmarking among approaches
- rdflib.py for facilities to manipulate rdflib graphs
- sparql.py for facilities to query and connect through sparql

statistics/\* for computing statistics appropriate to Open Linked Social Data
- kolmogorov\_smirnov.py for obtaining KS distance and c statistics
- unit\_root\_test.py for e.g. the augmented Dickey–Fuller test
- pca.py for correlation and principal component analysis
- outliers.py for the detection of outliers in data
- circular.py for circular statistics
- localization.py for mean, standard deviation, skewness and kurtosis
- grouping/\*.py for obtaining meaningful groups of entities through Erdos sectorialization, k-means, k-nn, Kohonen, genetic algorithms, etc.
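As a sketch of the grouping step, k-means over a small per-participant feature matrix (the features and the scikit-learn usage are illustrative, not the package's own routines):

```python
import numpy as np
from sklearn.cluster import KMeans

# illustrative per-participant features: [messages sent, mean token size, degree]
features = np.array([
    [120, 4.1, 35],
    [  3, 5.0,  2],
    [ 80, 3.9, 28],
    [  5, 4.8,  3],
])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # group label for each participant, e.g. hubs vs. peripheral
```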

measures/\* for taking measures of social structures. It takes data and produces more informative data which is used in the analyses
- text/\*.py for taking measures from chars, tokens, sentences, paragraphs of a single text
- topology/\*.py for making networks and taking topological measures from a single structure (see the sketch after this list)
- time/\*.py for taking circular measures
- integrated/\*.py for integrated measures:
  - pca.py for application of principal component analysis to grouped entities and appropriate data
  - power\_law.py for measures of the optimal power-law fit
  - kolmogorov\_smirnov.py for KS-distance and c statistics between the grouped entities.
- multi/\* for measures of multiple structures
  - grouping.py for obtaining meaningful groups of entities. Basic grouping of texts is the message, basic grouping for topology and time is the participant. The topmost grouping is the snapshot or collection of snapshots.
  - scale.py for measures in multiple scales (e.g. snapshots, snapshot, sector, user, message)
  - timeline.py for timeline sequences of structures, with unit root tests and pca averages and standard deviations
  - scale\_timeline.py for multiple-scale timelines and best power-law fits
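A sketch of the topological measures mentioned above for a single interaction network, using networkx (the edges are illustrative):

```python
import networkx as nx

# illustrative interaction network: one edge per pair of participants that interacted
g = nx.Graph()
g.add_edges_from([("ana", "bob"), ("bob", "carol"), ("ana", "carol"), ("carol", "dan")])

degrees = dict(g.degree())                  # number of interlocutors per participant
clustering = nx.clustering(g)               # local clustering coefficient
betweenness = nx.betweenness_centrality(g)  # brokerage between the other participants
print(degrees, clustering, betweenness)
```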

analysis/\* for deriving assertions from social structures (e.g. mean(token size) above the mean of the OLSD legacy). Same file tree as measures

utils/\*.py
  - general.py for general purpose utilities that fit nowhere else, e.g. randomNick
  - status.py for the percolation status register
  - file.py for navigating and modifying the file structure
  - web/\* for integration with the WWW

help/\* for helper routines (e.g. wizard or steps to make something)

## usage example
```python
import percolation as P

P.start() # starts percolation server and session with metadata about open linked social data
P.analyse() # take measures and deliver assertions
P.legacy.media_rendering.render() # make tables, music and animation
P.web() # start server to make data and media accessible in the Web
```

## further information
The percolation package is a work in progress.

### notes
In the integrated measures, see if networks that have a peculiar distribution of measures in Erdos sectors also have a smaller KS-distance between histograms of degrees and other topological measures. Generalizing, see if structures with an outlier in a measure (or set of measures) are correlated with characteristics of other measures, such as the correlation histogram.
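A sketch of one such comparison, the KS distance between the degree distributions of two networks (illustrative random graphs, with scipy/networkx instead of the package's own routines):

```python
import networkx as nx
from scipy.stats import ks_2samp

# illustrative structures: two random networks with different connectivity
g1 = nx.erdos_renyi_graph(200, 0.05, seed=0)
g2 = nx.erdos_renyi_graph(200, 0.10, seed=1)

degrees1 = [d for _, d in g1.degree()]
degrees2 = [d for _, d in g2.degree()]

result = ks_2samp(degrees1, degrees2)
print("KS distance:", result.statistic, "p-value:", result.pvalue)
```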

Are there benchmark datasets and results for the statistics used in percolation? If so, integrate them into legacy.statistics.tests.
Otherwise, make benchmarks from synthesized and empirical data.

Percolation makes use of other packages designed for percolation, either directly or through the rendered RDF they deliver.
These are:
- The social package for rendering RDF data from social networking platforms and protocols (e.g. facebook, twitter, irc, instagram, noosfero, diaspora)
- The gmane package for rendering RDF data from public Gmane email lists (e.g. apache, c++ stdlib)
- The participation package for rendering RDF data from social participation platforms (e.g. participabr, cidade democratica, aa)
- The scientific package (ToDo) for rendering RDF data from scientific resources such as FAPESP
- The music package to render music from open linked social data
- The visuals package to render images and movies from open linked social data

See legacy.triples for further notes.

See percolationLegacy issues at: https://github.com/ttm/percolationLegacy/issues

See this deprecated document for some of the intended goals:
https://github.com/ttm/percolationlegacy/blob/master/latex/percolation-article.pdf
            
