# twscrape

<div align="center">

[<img src="https://badgen.net/pypi/v/twscrape" alt="version" />](https://pypi.org/project/twscrape)
[<img src="https://badgen.net/pypi/python/twscrape" alt="py versions" />](https://pypi.org/project/twscrape)
[<img src="https://badgen.net/pypi/dm/twscrape" alt="downloads" />](https://pypi.org/project/twscrape)
[<img src="https://badgen.net/github/license/vladkens/twscrape" alt="license" />](https://github.com/vladkens/twscrape/blob/main/LICENSE)
[<img src="https://badgen.net/static/-/buy%20me%20a%20coffee/ff813f?icon=buymeacoffee&label" alt="donate" />](https://buymeacoffee.com/vladkens)

</div>

Twitter GraphQL API implementation with [SNScrape](https://github.com/JustAnotherArchivist/snscrape) data models.

<div align="center">
  <img src=".github/example.png" alt="example of cli usage" height="400px">
</div>

## Install

```bash
pip install twscrape
```
Or install the development version:
```bash
pip install git+https://github.com/vladkens/twscrape.git
```

## Features
- Supports both the Search and GraphQL Twitter APIs
- Async/await functions (multiple scrapers can run in parallel)
- Login flow (including receiving the verification code from email)
- Saving/restoring account sessions
- Raw Twitter API responses & SNScrape models
- Automatic account switching to smooth out Twitter API rate limits

## Usage

Since this project works through the authorized API, accounts need to be added first. You can register and add accounts yourself, or google for sites that sell them.

The email password is needed to retrieve the login verification code automatically (via the IMAP protocol).

Data models:
- [User](https://github.com/vladkens/twscrape/blob/main/twscrape/models.py#L87)
- [Tweet](https://github.com/vladkens/twscrape/blob/main/twscrape/models.py#L136)

```python
import asyncio
from twscrape import API, gather
from twscrape.logger import set_log_level

async def main():
    api = API()  # or API("path-to.db") - default is `accounts.db`

    # ADD ACCOUNTS (for CLI usage see BELOW)
    await api.pool.add_account("user1", "pass1", "u1@example.com", "mail_pass1")
    await api.pool.add_account("user2", "pass2", "u2@example.com", "mail_pass2")
    await api.pool.login_all()

    # or add an account with COOKIES (login is not required when cookies are provided)
    cookies = "abc=12; ct0=xyz"  # or '{"abc": "12", "ct0": "xyz"}'
    await api.pool.add_account("user3", "pass3", "u3@mail.com", "mail_pass3", cookies=cookies)

    # API USAGE

    # search (latest tab)
    await gather(api.search("elon musk", limit=20))  # list[Tweet]
    # change search tab (product), can be: Top, Latest (default), Media
    await gather(api.search("elon musk", limit=20, kv={"product": "Top"}))

    # tweet info
    tweet_id = 20
    await api.tweet_details(tweet_id)  # Tweet
    await gather(api.retweeters(tweet_id, limit=20))  # list[User]
    await gather(api.favoriters(tweet_id, limit=20))  # list[User]

    # Note: X paginates this endpoint in small batches (about 5 tweets per request)
    await gather(api.tweet_replies(tweet_id, limit=20))  # list[Tweet]

    # get user by login
    user_login = "xdevelopers"
    await api.user_by_login(user_login)  # User

    # user info
    user_id = 2244994945
    await api.user_by_id(user_id)  # User
    await gather(api.following(user_id, limit=20))  # list[User]
    await gather(api.followers(user_id, limit=20))  # list[User]
    await gather(api.verified_followers(user_id, limit=20))  # list[User]
    await gather(api.subscriptions(user_id, limit=20))  # list[User]
    await gather(api.user_tweets(user_id, limit=20))  # list[Tweet]
    await gather(api.user_tweets_and_replies(user_id, limit=20))  # list[Tweet]
    await gather(api.liked_tweets(user_id, limit=20))  # list[Tweet]

    # list info
    list_id = 123456789
    await gather(api.list_timeline(list_id))

    # NOTE 1: gather is a helper that collects all results into a list; `async for` works as well:
    async for tweet in api.search("elon musk"):
        print(tweet.id, tweet.user.username, tweet.rawContent)  # tweet is `Tweet` object

    # NOTE 2: all methods have a `_raw` version (returns the `httpx.Response` object):
    async for rep in api.search_raw("elon musk"):
        print(rep.status_code, rep.json())  # rep is `httpx.Response` object

    # change the log level (default: INFO)
    set_log_level("DEBUG")

    # Tweet & User models can be converted to a regular dict or a JSON string, e.g.:
    doc = await api.user_by_id(user_id)  # User
    doc.dict()  # -> python dict
    doc.json()  # -> json string

if __name__ == "__main__":
    asyncio.run(main())
```
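As a small worked example, here is a minimal sketch that streams search results straight into a JSON-lines file using the `json()` serialization shown above (the query and file name are illustrative):

```python
import asyncio
from twscrape import API

async def dump_search(query: str, path: str, limit: int = 100):
    api = API()  # uses the default accounts.db
    with open(path, "w", encoding="utf-8") as fh:
        async for tweet in api.search(query, limit=limit):
            fh.write(tweet.json() + "\n")  # one JSON document per line

if __name__ == "__main__":
    asyncio.run(dump_search("elon musk", "tweets.jsonl"))
```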

### Stopping iteration with break

To correctly release an account when using `break` in a loop, special syntax must be used. Otherwise, Python's event loop will only release the lock on the account at some point in the future. See the explanation [here](https://github.com/vladkens/twscrape/issues/27#issuecomment-1623395424).

```python
from contextlib import aclosing

async with aclosing(api.search("elon musk")) as gen:
    async for tweet in gen:
        if tweet.id < 200:
            break
```

## CLI

### Get help on CLI commands

```sh
# show all commands
twscrape

# help on a specific command
twscrape search --help
```

### Add accounts

To add accounts, use the `add_accounts` command. The syntax is:
```sh
twscrape add_accounts <file_path> <line_format>
```

Where `<line_format>` is the format of each line in the accounts file, split by a delimiter. Possible tokens:
- `username` – required
- `password` – required
- `email` – required
- `email_password` – used to receive the email verification code (you can use `--manual` mode to enter the code yourself)
- `cookies` – can be in any parsable format (string, JSON, base64 string, etc.)
- `_` – skip this column when parsing

Tokens are separated by a delimiter, usually `:`.

Example:

Say you have an accounts file named `order-12345.txt` with the format:
```text
username:password:email:email password:user_agent:cookies
```

The command to add the accounts is (the user_agent column is skipped with `_`):
```sh
twscrape add_accounts ./order-12345.txt username:password:email:email_password:_:cookies
```

### Login accounts

_Note:_ If you added accounts with cookies, login is not required.

Run:

```sh
twscrape login_accounts
```

`twscrape` will start the login flow for each new account. If X asks for email verification and you provided `email_password` in `add_account`, `twscrape` will try to retrieve the verification code via the IMAP protocol. After a successful login, account cookies are saved to the db file for future use.

#### Manual email verification

If your email provider does not support the IMAP protocol (ProtonMail, Tutanota, etc.) or IMAP is disabled in its settings, you can enter the email verification code manually. To do this, run the login command with the `--manual` flag.

Example:

```sh
twscrape login_accounts --manual
twscrape relogin user1 user2 --manual
twscrape relogin_failed --manual
```

### Get list of accounts and their statuses

```sh
twscrape accounts

# Output:
# username  logged_in  active  last_used            total_req  error_msg
# user1     True       True    2023-05-20 03:20:40  100        None
# user2     True       True    2023-05-20 03:25:45  120        None
# user3     False      False   None                 120        Login error
```

### Re-login accounts

It is possible to re-login specific accounts:

```sh
twscrape relogin user1 user2
```

Or retry login for all failed logins:

```sh
twscrape relogin_failed
```

### Use different accounts file

Useful when using different sets of accounts for different tasks:

```sh
twscrape --db test-accounts.db <command>
```

### Search commands

```sh
twscrape search "QUERY" --limit=20
twscrape tweet_details TWEET_ID
twscrape tweet_replies TWEET_ID --limit=20
twscrape retweeters TWEET_ID --limit=20
twscrape favoriters TWEET_ID --limit=20
twscrape user_by_id USER_ID
twscrape user_by_login USERNAME
twscrape following USER_ID --limit=20
twscrape followers USER_ID --limit=20
twscrape verified_followers USER_ID --limit=20
twscrape subscriptions USER_ID --limit=20
twscrape user_tweets USER_ID --limit=20
twscrape user_tweets_and_replies USER_ID --limit=20
twscrape liked_tweets USER_ID --limit=20
```

The default output goes to the console (stdout), one document per line, so it can be redirected to a file.

```sh
twscrape search "elon mask lang:es" --limit=20 > data.txt
```

By default, parsed data is returned. The original tweet responses can be retrieved with the `--raw` flag.

```sh
twscrape search "elon mask lang:es" --limit=20 --raw
```

## Proxy

There are a few ways to use proxies.

1. You can add a proxy per account:

```py
proxy = "http://login:pass@example.com:8080"
await api.pool.add_account("user4", "pass4", "u4@mail.com", "mail_pass4", proxy=proxy)
```

2. You can use a global proxy for all accounts:

```py
proxy = "http://login:pass@example.com:8080"
api = API(proxy=proxy)
doc = await api.user_by_login("elonmusk")
```

3. You can set a proxy with the environment variable `TWS_PROXY`:

```sh
TWS_PROXY=socks5://user:pass@127.0.0.1:1080 twscrape user_by_login elonmusk
```

4. You can change the proxy at any time:

```py
api.proxy = "socks5://user:pass@127.0.0.1:1080"
doc = await api.user_by_login("elonmusk")  # new proxy will be used
api.proxy = None
doc = await api.user_by_login("elonmusk")  # no proxy used
```

5. Proxy priorities:

- `api.proxy` has the highest priority
- the `TWS_PROXY` environment variable is used if `api.proxy` is `None`
- `acc.proxy` has the lowest priority

So if you want to use a proxy PER ACCOUNT, do NOT override it with the environment variable or by passing the proxy param to `API`.
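The resolution order can be summarized with a small illustrative sketch (this is not the library's actual code, just the rule above restated in Python):

```py
def resolve_proxy(api_proxy, env_proxy, acc_proxy):
    # api.proxy beats the TWS_PROXY env var, which beats the per-account proxy
    return api_proxy or env_proxy or acc_proxy
```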

_Note:_ If the proxy is not working, an exception will be raised from the `API` class.

## Environment variables

- `TWS_WAIT_EMAIL_CODE` – timeout for the email verification code during login (default: `30`, in seconds)
- `TWS_RAISE_WHEN_NO_ACCOUNT` – raise a `NoAccountError` exception when no account is currently available, instead of waiting for one (default: `false`; possible values: `false` / `0` / `true` / `1`)
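A minimal sketch of using these variables from Python, assuming they are read when a request is made (set them before any API call; the exception name is taken from the list above):

```python
import asyncio
import os

# fail fast instead of waiting for a free account
# (assumption: the variables are read at request time)
os.environ["TWS_RAISE_WHEN_NO_ACCOUNT"] = "1"
os.environ["TWS_WAIT_EMAIL_CODE"] = "60"  # wait up to 60s for the email code

from twscrape import API, gather

async def main():
    api = API()
    try:
        print(len(await gather(api.search("python", limit=20))))
    except Exception as exc:  # NoAccountError when no account is available
        print("no account available:", exc)

asyncio.run(main())
```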

## Limitations

On 1 July 2023 Twitter [introduced new limits](https://twitter.com/elonmusk/status/1675187969420828672) and has continued to update them periodically.

The basic behaviour is as follows:
- the request limit resets every 15 minutes for each endpoint individually
- e.g. each account has 50 search requests / 15 min, 50 profile requests / 15 min, etc.
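As a rough worked example (assuming ~20 tweets per search page, which is an assumption rather than a documented number): one account can fetch up to 50 × 20 = 1000 search tweets per 15-minute window, so a pool of N accounts scales to roughly N × 1000.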

API data limits:
- `user_tweets` & `user_tweets_and_replies` – can return ~3200 tweets maximum

## Articles
- [How to still scrape millions of tweets in 2023](https://medium.com/@vladkens/how-to-still-scrape-millions-of-tweets-in-2023-using-twscrape-97f5d3881434)
- [_(Add Article)_](https://github.com/vladkens/twscrape/edit/main/readme.md)

## See also
- [twitter-advanced-search](https://github.com/igorbrigadir/twitter-advanced-search) – guide on search filters
- [twitter-api-client](https://github.com/trevorhobenshield/twitter-api-client) – Implementation of Twitter's v1, v2, and GraphQL APIs
- [snscrape](https://github.com/JustAnotherArchivist/snscrape) – a scraper for social networking services (SNS)
- [twint](https://github.com/twintproject/twint) – Twitter Intelligence Tool

            
