tf-geometric

Name: tf-geometric
Version: 0.0.29
Homepage: https://github.com/CrawlScript/tf_geometric
Summary: Efficient and Friendly Graph Neural Network Library for TensorFlow 1.x and 2.x.
Author: Jun Hu
Requires Python: >3.5.0
License: GNU General Public License v3.0 (See LICENSE)
Upload time: 2020-10-03 19:01:22

tf_geometric
============

Efficient and Friendly Graph Neural Network Library for TensorFlow 1.x and 2.x.

Inspired by **rusty1s/pytorch_geometric**\ , we build a GNN library for TensorFlow.

Homepage and Documentation
--------------------------


* Homepage: `https://github.com/CrawlScript/tf_geometric <https://github.com/CrawlScript/tf_geometric>`_
* Documentation: `https://tf-geometric.readthedocs.io <https://tf-geometric.readthedocs.io>`_

Efficient and Friendly
----------------------

We implement graph neural networks with a message passing mechanism, which is more efficient than dense-matrix-based implementations and more friendly than sparse-matrix-based ones.
In addition, we provide easy and elegant APIs for complex GNN operations.
The following example constructs a graph and applies a Multi-head Graph Attention Network (GAT) on it:

.. code-block:: python

   # coding=utf-8
   import numpy as np
   import tf_geometric as tfg
   import tensorflow as tf

   graph = tfg.Graph(
       x=np.random.randn(5, 20),  # 5 nodes, 20 features,
       edge_index=[[0, 0, 1, 3],
                   [1, 2, 2, 1]]  # 4 undirected edges
   )

   print("Graph Desc: \n", graph)

   graph.convert_edge_to_directed()  # pre-process edges
   print("Processed Graph Desc: \n", graph)
   print("Processed Edge Index:\n", graph.edge_index)

   # Multi-head Graph Attention Network (GAT)
   gat_layer = tfg.layers.GAT(units=4, num_heads=4, activation=tf.nn.relu)
   output = gat_layer([graph.x, graph.edge_index])
   print("Output of GAT: \n", output)

Output:

.. code-block::

   Graph Desc:
    Graph Shape: x => (5, 20)  edge_index => (2, 4)    y => None

   Processed Graph Desc:
    Graph Shape: x => (5, 20)  edge_index => (2, 8)    y => None

   Processed Edge Index:
    [[0 0 1 1 1 2 2 3]
    [1 2 0 2 3 0 1 1]]

   Output of GAT:
    tf.Tensor(
   [[0.22443159 0.         0.58263206 0.32468423]
    [0.29810357 0.         0.19403605 0.35630274]
    [0.18071976 0.         0.58263206 0.32468423]
    [0.36123228 0.         0.88897204 0.450244  ]
    [0.         0.         0.8013462  0.        ]], shape=(5, 4), dtype=float32)

DEMO
----

We recommend getting started with the following demos. A minimal end-to-end training sketch is shown after the demo list.

Node Classification
^^^^^^^^^^^^^^^^^^^


* `Graph Convolutional Network (GCN) <demo/demo_gcn.py>`_
* `Multi-head Graph Attention Network (GAT) <demo/demo_gat.py>`_
* `GraphSAGE <demo/demo_graph_sage.py>`_
* `GIN <demo/demo_gin.py>`_
* `ChebyNet <demo/demo_chebynet.py>`_
* `SGC <demo/demo_sgc.py>`_
* `TAGCN <demo/demo_tagcn.py>`_

Graph Classification
^^^^^^^^^^^^^^^^^^^^


* `MeanPooling <demo/demo_mean_pool.py>`_
* `SAGPooling <demo/demo_sag_pool_h.py>`_

Link Prediction
^^^^^^^^^^^^^^^


* `Graph Auto-Encoder (GAE) <demo/demo_gae.py>`_
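
If you want a sense of what these demos do end to end, the following is a minimal node-classification training sketch built only from the APIs shown in this README (``tfg.Graph`` and ``tfg.layers.GAT``). The random features, the label array, and the training hyper-parameters are illustrative assumptions, not part of any demo script.

.. code-block:: python

   # coding=utf-8
   # Hypothetical training sketch: random data and labels are assumptions for illustration.
   import numpy as np
   import tensorflow as tf
   import tf_geometric as tfg

   graph = tfg.Graph(
       x=np.random.randn(5, 20).astype(np.float32),  # 5 nodes, 20 features
       edge_index=[[0, 0, 1, 3],
                   [1, 2, 2, 1]]  # 4 undirected edges
   )
   graph.convert_edge_to_directed()

   labels = np.random.randint(0, 3, size=5)  # 3 hypothetical classes, one label per node

   gat_layer = tfg.layers.GAT(units=3, num_heads=1)  # 3 output units = 3 classes
   optimizer = tf.keras.optimizers.Adam(learning_rate=1e-2)

   for step in range(20):
       with tf.GradientTape() as tape:
           logits = gat_layer([graph.x, graph.edge_index])
           loss = tf.reduce_mean(
               tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
           )
       grads = tape.gradient(loss, gat_layer.trainable_variables)
       optimizer.apply_gradients(zip(grads, gat_layer.trainable_variables))
       print("step {}: loss = {}".format(step, loss.numpy()))

The actual demo scripts follow the same pattern, but load benchmark datasets and evaluate the predictions on held-out nodes or graphs.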

Installation
------------

Requirements:


* Operating System: Windows / Linux / Mac OS
* Python: version >= 3.5
* Python Packages:

  * tensorflow/tensorflow-gpu: >= 1.14.0 or >= 2.0.0b1
  * numpy >= 1.17.4
  * networkx >= 2.1
  * scipy >= 1.1.0

Use one of the following commands:

.. code-block:: bash

   pip install -U tf_geometric # this will not install the tensorflow/tensorflow-gpu package

   pip install -U tf_geometric[tf1-cpu] # this will install TensorFlow 1.x CPU version

   pip install -U tf_geometric[tf1-gpu] # this will install TensorFlow 1.x GPU version

   pip install -U tf_geometric[tf2-cpu] # this will install TensorFlow 2.x CPU version

   pip install -U tf_geometric[tf2-gpu] # this will install TensorFlow 2.x GPU version
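
After installation, a quick sanity check (a minimal sketch, assuming TensorFlow is already importable) is to build a tiny graph and print its description:

.. code-block:: python

   # Minimal post-installation check: construct a small graph and print its shape summary.
   import numpy as np
   import tf_geometric as tfg

   graph = tfg.Graph(x=np.random.randn(3, 4), edge_index=[[0, 1], [1, 2]])
   print(graph)  # should report x => (3, 4) and edge_index => (2, 2)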

OOP and Functional API
----------------------

We provide both an OOP API and a functional API, with which you can build some cool things.

.. code-block:: python

   # coding=utf-8
   import os

   # Enable GPU 0
   os.environ["CUDA_VISIBLE_DEVICES"] = "0"

   import tf_geometric as tfg
   import tensorflow as tf
   import numpy as np
   from tf_geometric.utils.graph_utils import convert_edge_to_directed

   # ==================================== Graph Data Structure ====================================
   # In tf_geometric, graph data can be either individual Tensors or Graph objects
   # A graph usually consists of x (node features), edge_index, and edge_weight (optional)

   # Node Features => (num_nodes, num_features)
   x = np.random.randn(5, 20).astype(np.float32) # 5 nodes, 20 features

   # Edge Index => (2, num_edges)
   # Each column (u, v) of edge_index represents a directed edge from u to v.
   # Note that it does not cover the edge from v to u. You should provide (v, u) to cover it.
   # This is not convenient for users.
   # Thus, we allow users to provide edge_index in undirected form and convert it later.
   # That is, we can provide only (u, v) and convert it to (u, v) and (v, u) with the `convert_edge_to_directed` method.
   edge_index = np.array([
       [0, 0, 1, 3],
       [1, 2, 2, 1]
   ])

   # Edge Weight => (num_edges)
   edge_weight = np.array([0.9, 0.8, 0.1, 0.2]).astype(np.float32)

   # Make the edge_index directed such that we can use it as the input of GCN
   edge_index, [edge_weight] = convert_edge_to_directed(edge_index, [edge_weight])


   # We can convert these numpy arrays to TensorFlow Tensors and pass them to GNN functions
   outputs = tfg.nn.gcn(
       tf.Variable(x),
       tf.constant(edge_index),
       tf.constant(edge_weight),
       tf.Variable(tf.random.truncated_normal([20, 2])) # GCN Weight
   )
   print(outputs)

   # Usually, we use a Graph object to manage this information
   # edge_weight is optional; set it to None if you don't need it
   graph = tfg.Graph(x=x, edge_index=edge_index, edge_weight=edge_weight)

   # You can easily convert these numpy arrays to Tensors with the Graph Object API
   graph.convert_data_to_tensor()

   # Then, we can use them without manual conversions
   outputs = tfg.nn.gcn(
       graph.x,
       graph.edge_index,
       graph.edge_weight,
       tf.Variable(tf.random.truncated_normal([20, 2])),  # GCN Weight
       cache=graph.cache  # GCN uses the cache to avoid re-computing the normalized edge information
   )
   print(outputs)


   # For algorithms that deal with batches of graphs, we can pack a batch of graphs into a BatchGraph object
   # A BatchGraph wraps a batch of graphs into a single graph, where each node has a unique index and a graph index.
   # The node_graph_index is the index of the corresponding graph for each node in the batch.
   # The edge_graph_index is the index of the corresponding graph for each edge in the batch.
   batch_graph = tfg.BatchGraph.from_graphs([graph, graph, graph, graph])

   # We can also split a BatchGraph object back into individual Graph objects
   graphs = batch_graph.to_graphs()

   # Graph Pooling algorithms often rely on such batch data structure
   # Most of them accept a BatchGraph's data as input and output a feature vector for each graph in the batch
   outputs = tfg.nn.mean_pool(batch_graph.x, batch_graph.node_graph_index, num_graphs=batch_graph.num_graphs)
   print(outputs)

   # We provide some advanced graph pooling operations such as topk_pool
   node_score = tfg.nn.gcn(
       batch_graph.x,
       batch_graph.edge_index,
       batch_graph.edge_weight,
       tf.Variable(tf.random.truncated_normal([20, 1])),  # GCN Weight
       cache=batch_graph.cache  # use the BatchGraph's own cache to avoid re-computing the normalized edge information
   )
   node_score = tf.reshape(node_score, [-1])
   topk_node_index = tfg.nn.topk_pool(batch_graph.node_graph_index, node_score, ratio=0.6)
   print(topk_node_index)
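
   # (Illustrative assumption, not part of the original walkthrough:) the selected node indices
   # can be combined with ordinary TensorFlow ops, e.g. gathering the features of the kept nodes.
   topk_x = tf.gather(batch_graph.x, topk_node_index)
   print(topk_x)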




   # ==================================== Built-in Datasets ====================================
   # All graph data are in numpy format
   train_data, valid_data, test_data = tfg.datasets.PPIDataset().load_data()

   # We can convert them to TensorFlow format
   test_data = [graph.convert_data_to_tensor() for graph in test_data]





   # ==================================== Basic OOP API ====================================
   # OOP Style GCN (Graph Convolutional Network)
   gcn_layer = tfg.layers.GCN(units=20, activation=tf.nn.relu)

   for graph in test_data:
       # The cache can speed up GCN by storing the normalized edge information
       outputs = gcn_layer([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)
       print(outputs)


   # OOP Style GAT (Multi-head Graph Attention Network)
   gat_layer = tfg.layers.GAT(units=20, activation=tf.nn.relu, num_heads=4)
   for graph in test_data:
       outputs = gat_layer([graph.x, graph.edge_index])
       print(outputs)



   # ==================================== Basic Functional API ====================================
   # Functional Style GCN
   # Functional API is more flexible for advanced algorithms
   # You can pass both data and parameters to functional APIs

   gcn_w = tf.Variable(tf.random.truncated_normal([test_data[0].num_features, 20]))
   for graph in test_data:
       outputs = tfg.nn.gcn(graph.x, graph.edge_index, graph.edge_weight, gcn_w, activation=tf.nn.relu)
       print(outputs)


   # ==================================== Advanced OOP API ====================================
   # All APIs are implemented with Map-Reduce Style
   # This is a GCN without weight normalization and feature transformation.
   # Create your own GNN Layer by subclassing the MapReduceGNN class
   class NaiveGCN(tfg.layers.MapReduceGNN):

       def map(self, repeated_x, neighbor_x, edge_weight=None):
           return tfg.nn.identity_mapper(repeated_x, neighbor_x, edge_weight)

       def reduce(self, neighbor_msg, node_index, num_nodes=None):
           return tfg.nn.sum_reducer(neighbor_msg, node_index, num_nodes)

       def update(self, x, reduced_neighbor_msg):
           return tfg.nn.sum_updater(x, reduced_neighbor_msg)


   naive_gcn = NaiveGCN()

   for graph in test_data:
       print(naive_gcn([graph.x, graph.edge_index, graph.edge_weight]))


   # ==================================== Advanced Functional API ====================================
   # All APIs are implemented with Map-Reduce Style
   # This is a GCN without weight normalization and feature transformation
   # Just pass the mapper/reducer/updater functions to the Functional API

   for graph in test_data:
       outputs = tfg.nn.aggregate_neighbors(
           x=graph.x,
           edge_index=graph.edge_index,
           edge_weight=graph.edge_weight,
           mapper=tfg.nn.identity_mapper,
           reducer=tfg.nn.sum_reducer,
           updater=tfg.nn.sum_updater
       )
       print(outputs)
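
As a final illustration of the OOP API, the layers above can be composed into an ordinary ``tf.keras.Model``. The two-layer model below is a hedged sketch: the class name, hidden size, and number of output classes are assumptions made for illustration, not an API defined by tf_geometric.

.. code-block:: python

   # coding=utf-8
   # Hypothetical two-layer GCN model composed from tfg.layers.GCN (a sketch, not a library API).
   import tensorflow as tf
   import tf_geometric as tfg


   class TwoLayerGCN(tf.keras.Model):

       def __init__(self, hidden_units=16, num_classes=2):
           super().__init__()
           self.gcn0 = tfg.layers.GCN(units=hidden_units, activation=tf.nn.relu)
           self.gcn1 = tfg.layers.GCN(units=num_classes)

       def call(self, graph, training=None):
           # Reuse the graph's cache so the normalized edge information is computed only once
           h = self.gcn0([graph.x, graph.edge_index, graph.edge_weight], cache=graph.cache)
           h = self.gcn1([h, graph.edge_index, graph.edge_weight], cache=graph.cache)
           return h


   # Usage sketch with the graphs loaded above (output dimensions are illustrative):
   # model = TwoLayerGCN(hidden_units=16, num_classes=2)
   # logits = model(test_data[0])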
            
