# Equate

This is a package with tools for matching things. 

Dirty things like language, files in your file system, socks and whistles.

# Install

```
pip install equate
```

Moving on...

# Little peep

Merging/joining tables is a very common operation, yet it is only a small part of what is
possible, and often needed, when it comes to matching things. Consider the following use cases:

- Find the columns to match (join keys) by comparing how well the values of the
columns match.

- Compare the values of the columns with something more flexible than strict equality;
for example, correlation or similarity.

- Find near-duplicate columns.

- Find rows to align, based on flexible comparison of fuzzily matched cells.

## Simple case

Say you have two sets of strings, and all you want to do is match each element of
the "keys" set to an element of the "values" set (never reusing the same value for a 
different key), and say you know that a matching value string will differ from its 
key only by a few characters. In that case, just do this:

```python
from equate import match_greedily

keys = ['apple', 'banana', 'carrot']
values = ['car', 'app', 'carob', 'cabana']
dict(match_greedily(keys, values))
# == {'apple': 'app', 'banana': 'cabana', 'carrot': 'carob'}
```

Note that by default, `match_greedily` uses the edit distance as its `score_func`, 
and you can specify your own custom `score_func`. But the matching is done "greedily": 
at every step, the highest-scoring value is taken as the match. 
Sometimes this is not ideal (typically, the validity of your matches will decline 
as more and more keys get matched). 
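
To make the greedy behaviour concrete, here is a rough, self-contained sketch of 
what a greedy matcher does. This is only an illustration of the idea (not `equate`'s 
actual implementation), and it uses `difflib.SequenceMatcher`'s ratio as a stand-in 
for an edit-distance-based `score_func`:

```python
from difflib import SequenceMatcher

def similarity(a, b):
    # Stand-in score: higher means "more alike".
    return SequenceMatcher(None, a, b).ratio()

def greedy_match_sketch(keys, values, score_func=similarity):
    remaining = list(values)
    for key in keys:
        # At every step, take the best-scoring remaining value...
        best = max(remaining, key=lambda v: score_func(key, v))
        # ...and never reuse it for another key.
        remaining.remove(best)
        yield key, best

keys = ['apple', 'banana', 'carrot']
values = ['car', 'app', 'carob', 'cabana']
print(dict(greedy_match_sketch(keys, values)))
# e.g. {'apple': 'app', 'banana': 'cabana', 'carrot': 'carob'}
```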

## More involved methods

Here's a more involved entry point:

```python
from equate import match_keys_to_values

keys = ['apple pie', 'apple crumble', 'banana split']
values = ['american pie', 'big apple', 'american girl', 'banana republic']
dict(match_keys_to_values(keys, values))
# == {'apple pie': 'american pie',
# 'apple crumble': 'big apple',
# 'banana split': 'banana republic'}
```

The algorithm that gets you these matches is a bit more involved, and parametrizable.
This is how it works.

First, it computes a similarity matrix, and then applies a "matcher" that will 
search through this similarity matrix for an optimal matching, using any 
optimal search function you want (and of course, we provide a few standard ones).

The similarity matrix, if full, contains scores for every possible combination of 
keys and values. 
You can specify how to compute this similarity matrix, which means that if you 
don't want to compute the similarity for every combination, you don't need to. 
Usually, your similarity matrix will be a "sparse matrix", that is, one that only 
specifies a few non-zero entries.
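
Just to illustrate what "only a few non-zero entries" can look like in practice, here 
is a sparse 3×4 score matrix built with `scipy.sparse`. The exact format `equate` 
expects for a custom similarity matrix isn't spelled out here, so treat this purely as 
an illustration of the shape of the data:

```python
import numpy as np
from scipy.sparse import coo_matrix

# 3 keys x 4 values, but only three (key, value) pairs get a non-zero score.
key_idx = np.array([0, 1, 2])
value_idx = np.array([1, 1, 3])
scores = np.array([0.54, 0.33, 0.41])

sparse_sim = coo_matrix((scores, (key_idx, value_idx)), shape=(3, 4))
print(sparse_sim.toarray())
# [[0.   0.54 0.   0.  ]
#  [0.   0.33 0.   0.  ]
#  [0.   0.   0.   0.41]]
```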

The default similarity matrix function uses an `obj_to_vect` function along with a 
vector similarity function `similarty_func` to compute what the `score_func` 
did in our first `match_greedily` example.
By default here, though, `similarity_matrix` will use methods that are more powerful 
than a plain edit distance: it will learn and use a "TfIdf" vectorization 
(a.k.a. embedding) and a cosine similarity function. 
As a result, you should get finer matchings.

```python
from equate import similarity_matrix

keys = ['apple pie', 'apple crumble', 'banana split']
values = ['american pie', 'big apple', 'american girl', 'banana republic']
m = similarity_matrix(keys, values)
m.round(2).tolist()
# == [[0.54, 0.38, 0.0, 0.0], [0.0, 0.33, 0.0, 0.0], [0.0, 0.0, 0.0, 0.41]]
```
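
For intuition, here is roughly what that default pipeline does, spelled out with 
scikit-learn. This is only an illustration of the TfIdf-plus-cosine idea; `equate`'s 
actual vectorization settings may differ, so the numbers won't necessarily match the 
ones above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

keys = ['apple pie', 'apple crumble', 'banana split']
values = ['american pie', 'big apple', 'american girl', 'banana republic']

# Learn a TfIdf vocabulary over all the strings, then embed keys and values...
vectorizer = TfidfVectorizer().fit(keys + values)
key_vectors = vectorizer.transform(keys)
value_vectors = vectorizer.transform(values)

# ...and score every (key, value) pair with cosine similarity.
sim = cosine_similarity(key_vectors, value_vectors)
print(sim.round(2))  # a 3 x 4 matrix of key-vs-value similarities
```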

The `equate.util` module has a few optimal matching functions you can use to 
extract the matching pairs from the matrix. 
At the time of writing this, we've implemented: 
`greedy_matching`, 
`hungarian_matching`, 
`maximal_matching`,
`stable_marriage_matching`, and 
`kuhn_munkres_matching`.

At the time of writing this, the default `matcher` used by `match_keys_to_values` is 
[hungarian_matching](https://en.wikipedia.org/wiki/Hungarian_algorithm).
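
If you'd rather see that last step spelled out, here is how an optimal assignment can 
be extracted from a similarity matrix with scipy's `linear_sum_assignment` (the 
Hungarian, a.k.a. Kuhn-Munkres, algorithm). This is an equivalent illustration, not 
necessarily how `equate`'s own matchers are implemented:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

keys = ['apple pie', 'apple crumble', 'banana split']
values = ['american pie', 'big apple', 'american girl', 'banana republic']
sim = np.array([[0.54, 0.38, 0.0, 0.0],
                [0.0, 0.33, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.41]])

# linear_sum_assignment minimizes cost, so ask it to maximize similarity instead.
key_idx, value_idx = linear_sum_assignment(sim, maximize=True)
print({keys[i]: values[j] for i, j in zip(key_idx, value_idx)})
# {'apple pie': 'american pie', 'apple crumble': 'big apple', 'banana split': 'banana republic'}
```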


# An example: In search of an import-to-package-name matcher

Here, we'll go through an actual practical example of when you might want to match 
things: "guessing" the pip install name from an import (module) name, 
and other related analyses.

## The problem

Ever gotten an import error and wondered what the pip install package name was?

Say... 
```
ImportError: No module named skimage
```

But it ain't `pip install skimage`, is it? (Well, it used not to be, but you get the point...)
What you actually need to do to install it (with `pip`) is:
```
pip install scikit-image
```

I would have guessed that!

So no, it's annoying. It shouldn't be allowed. And since it is, there should be an index out there to help out, right?

```
pip install --just-find-it-for-me skimage
```

Instead of just complaining, I thought I'd throw some code at it.
(I'll still complain though.)

Here's a solution: Ask the world (of semantic clouds -- otherwise known as "Google") about it...

## A (fun) solution


```python
import re
from collections import Counter

import requests

# Grab whatever token follows "pip install" in the search results.
search_re = re.compile(r'(?<=pip install\W)[-\w]+')

def pkg_name_options(query):
    """Count the 'pip install <name>' suggestions Google returns for a query."""
    r = requests.get(
        'https://www.google.com/search',
        params={'q': f'python "pip install" {query}'},
    )
    if r.status_code == 200:
        hits = search_re.findall(r.content.decode('latin-1'))
        return Counter(x for x in hits if x != query).most_common()

def best_guess(query):
    """Return the most frequent suggestion, if any."""
    t = pkg_name_options(query)
    if t:
        return t[0][0]
```


```python
>>> pkg_name_options('skimage')
[('scikit-image', 5),
 ('-e', 2),
 ('virtualenv', 1),
 ('scikit', 1),
 ('scikit-', 1),
 ('pillow', 1)]
```









```python
>>> best_guess('skimage')
'scikit-image'
```


Yay, it works!
With a sample of one!
Let's try two...


```python
>>> pkg_name_options('sklearn')
[('numpy', 3), ('scikit-learn', 2), ('-U', 2), ('scikit-', 1), ('scipy', 1)]
```




Okay, so it already fails. 

Sure, I could parse more carefully. I could dig into the web pages and get more scope. 

That'd be fun. 

But that's not very nice to Google (and probably against their terms of service, if anyone cares). 

What you'll find next is an attempt to look at the man in the mirror instead: looking locally, where the packages actually are, in the site-packages folders...



## Extract, analyze and compare site-packages info names


```python
import pandas as pd
import numpy as np

from equate.examples.site_names import (
    DFLT_SITE_PKG_DIR,    
    site_packages_info_df,
    print_n_null_elements_in_each_column_containing_at_least_one,
    Lidx,
)
```


```python
>>> DFLT_SITE_PKG_DIR
'~/.virtualenvs/382/lib/python3.8/site-packages'
```



```python
>>> data = site_packages_info_df()
>>> print(f"{data.shape}")
(303, 8)
>>> data
```






|     | dist_info_dirname | info_kind | dist_name | most_frequent_record_dirname | first_line_of_top_level_txt | installer | metadata_name | pypi_url_name |
|-----|-------------------|-----------|-----------|------------------------------|-----------------------------|-----------|---------------|---------------|
| 0   | xlrd-1.2.0.dist-info | dist-info | xlrd | xlrd | xlrd | pip | xlrd | None |
| 1   | boltons-20.2.0.dist-info | dist-info | boltons | boltons | boltons | pip | boltons | None |
| 2   | appdirs-1.4.3.dist-info | dist-info | appdirs | appdirs | appdirs | pip | appdirs | None |
| 3   | yapf-0.29.0.dist-info | dist-info | yapf | yapftests | yapf | pip | yapf | None |
| 4   | cmudict-0.4.4.dist-info | dist-info | cmudict | cmudict | cmudict | pip | cmudict | None |
| ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 298 | simplegeneric-0.8.1.dist-info | dist-info | simplegeneric | simplegeneric | simplegeneric | pip | simplegeneric | None |
| 299 | plotly-4.6.0.dist-info | dist-info | plotly | plotly | _plotly_future_ | pip | plotly | None |
| 300 | rsa-3.4.2.dist-info | dist-info | rsa | rsa | rsa | pip | rsa | None |
| 301 | backcall-0.1.0.dist-info | dist-info | backcall | backcall | backcall | pip | backcall | None |
| 302 | cantools-33.1.1.dist-info | dist-info | cantools | cantools | cantools | pip | cantools | None |

303 rows × 8 columns



```python
>>> print_n_null_elements_in_each_column_containing_at_least_one(data)
most_frequent_record_dirname:	1 null values
first_line_of_top_level_txt:	6 null values
installer:	32 null values
metadata_name:	1 null values
pypi_url_name:	255 null values
```


```python
>>> lidx = Lidx(data)
>>> df = data[lidx.no_nans]
>>> print(f"no nan df: {len(df)=}")
no nan df: len(df)=302
```


```python
>>> lidx = Lidx(df)
>>> lidx.print_diagnosis()
no_nans: 302
equal: 187
dash_underscore_eq: 220
('equal', 'dash_underscore_eq'): 186
```





```python
>>> lidx = Lidx(df, 'first_line_of_top_level_txt')
>>> lidx.print_diagnosis()
no_nans: 297
equal: 182
dash_underscore_eq: 214
('equal', 'dash_underscore_eq'): 181
```



```python
>>> t = Lidx(df, 'most_frequent_record_dirname')
>>> tt = Lidx(df, 'first_line_of_top_level_txt')
>>> sum(t.equal | tt.equal)
199
```


```python
>>> sum(t.dash_underscore_eq | tt.dash_underscore_eq)
233
```




            
