| Field | Value |
|-------|-------|
| Name | ntscraper |
| Version | 0.3.17 |
| Summary | Unofficial library to scrape Twitter profiles and posts from Nitter instances |
| upload_time | 2024-09-01 08:18:52 |
| home_page | None |
| maintainer | None |
| docs_url | None |
| author | Lorenzo Bocchi |
| requires_python | None |
| license | MIT |
| keywords | twitter, nitter, scraping |
| requirements | No requirements were recorded. |
| Travis-CI | No Travis. |
| coveralls test coverage | No coveralls. |
# Unofficial Nitter scraper
## Note
Twitter has recently made changes that affect every third-party Twitter client, including Nitter. As a result, most Nitter instances have shut down or will shut down shortly. Even local instances are affected, so you may not be able to scrape as many tweets as expected, if at all.
## The scraper
This is a simple library to scrape Nitter instances for tweets. It can:
- search and scrape tweets with a certain term
- search and scrape tweets with a certain hashtag
- scrape tweets from a user profile
- get profile information of a user, such as display name, username, number of tweets, profile picture ...
If no instance is provided to the scraper, it will use a random public instance. If you can, please host your own instance to avoid overloading the public ones and to help keep Nitter alive for everyone. You can read more about that here: https://github.com/zedeus/nitter#installation.
---
## Installation
```
pip install ntscraper
```
## How to use
First, initialize the library:
```python
from ntscraper import Nitter
scraper = Nitter(log_level=1, skip_instance_check=False)
```
The valid logging levels are:
- None = no logs
- 0 = only warning and error logs
- 1 = previous + informational logs (default)
The `skip_instance_check` parameter skips the check of the Nitter instances altogether during the execution of the script. If you use your own instance or trust the instance you are relying on, you can set it to `True`; otherwise, it's better to leave it as `False`.
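For example, a minimal sketch of initializing the scraper for use with a trusted, self-hosted instance (`log_level` and `skip_instance_check` are the parameters described above):

```python
from ntscraper import Nitter

# Skip the instance health check when you trust your own self-hosted instance.
# log_level=0 keeps output to warnings and errors only.
scraper = Nitter(log_level=0, skip_instance_check=True)
```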
Then, choose the appropriate function for your task from the ones below.
### Scrape tweets
```python
github_hash_tweets = scraper.get_tweets("github", mode='hashtag')
bezos_tweets = scraper.get_tweets("JeffBezos", mode='user')
```
Parameters:
- term: search term
- mode: how to scrape the tweets. Default is 'term', which looks for tweets containing the search term. Other modes are 'hashtag' to search for a hashtag and 'user' to scrape tweets from a user profile
- number: number of tweets to scrape. Default is -1 (no limit).
- since: date to start scraping from, formatted as YYYY-MM-DD. Default is None
- until: date to stop scraping at, formatted as YYYY-MM-DD. Default is None
- near: location to search tweets from. Default is None (anywhere)
- language: language of the tweets to search. Default is None (any language). The language must be specified as a 2-letter ISO 639-1 code (e.g. 'en' for English, 'es' for Spanish, 'fr' for French ...)
- to: user to which the tweets are directed. Default is None (any user). For example, if you want to search for tweets directed to @github, you would set this parameter to 'github'
- replies: whether to include replies in the search. If 'filters' or 'exclude' are set, this is overridden. Default is False
- filters: list of filters to apply to the search. Default is None. Valid filters are: 'nativeretweets', 'media', 'videos', 'news', 'verified', 'native_video', 'replies', 'links', 'images', 'safe', 'quote', 'pro_video'
- exclude: list of filters to exclude from the search. Default is None. Valid filters are the same as above
- max_retries: max retries to scrape a page. Default is 5
- instance: Nitter instance to use. Default is None and will be chosen at random
Returns a dictionary with tweets and threads for the term.
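As an illustrative sketch combining several of these parameters (the date range and the local instance URL are hypothetical, and the exact key layout of the returned dictionary may vary between versions):

```python
# Hypothetical search: up to 50 English tweets containing "github",
# posted during 2023, scraped from a self-hosted instance.
results = scraper.get_tweets(
    "github",
    mode='term',
    number=50,
    since='2023-01-01',
    until='2023-12-31',
    language='en',
    instance='http://localhost:8080',  # hypothetical local instance
)

# The result is a dictionary with tweets and threads for the term.
print(results.keys())
```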
#### Multiprocessing
You can also scrape multiple terms at once using multiprocessing:
```python
terms = ["github", "bezos", "musk"]
results = scraper.get_tweets(terms, mode='term')
```
Each term will be scraped in a different process. The result will be a list of dictionaries, one for each term.
The multiprocessing code needs to run inside an `if __name__ == "__main__"` block to avoid errors. With multiprocessing, only full logging is supported. Also, the number of processes is limited to the number of available cores on your machine.
NOTE: using multiprocessing on public instances is highly discouraged since it puts too much load on the servers and could potentially also get you rate limited. Please only use it on your local instance.
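A minimal sketch of the guarded entry point described above, assuming a self-hosted instance at a hypothetical local URL:

```python
from ntscraper import Nitter

def main():
    scraper = Nitter(log_level=1)
    terms = ["github", "bezos", "musk"]
    # Each term is scraped in its own process; the result is a list of
    # dictionaries, one per term.
    results = scraper.get_tweets(terms, mode='term', instance='http://localhost:8080')
    for term, result in zip(terms, results):  # assuming results preserve input order
        print(term, len(result.get('tweets', [])))  # 'tweets' key layout is an assumption

if __name__ == "__main__":  # required to avoid multiprocessing errors
    main()
```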
### Get single tweet
```python
tweet = scraper.get_tweet_by_id("x", "1826317783430303888")
```
Parameters:
- username: username of the tweet's author
- tweet_id: ID of the tweet
- instance: Nitter instance to use. Default is None
- max_retries: max retries to scrape a page. Default is 5
Returns a dictionary with the tweet's content.
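For instance, a small sketch that fetches a tweet and saves the returned dictionary to disk (whether a failed fetch yields `None` is an assumption):

```python
import json

tweet = scraper.get_tweet_by_id("x", "1826317783430303888", max_retries=3)
if tweet is not None:  # assuming a failed fetch returns None
    with open("tweet.json", "w", encoding="utf-8") as f:
        json.dump(tweet, f, ensure_ascii=False, indent=2)
```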
### Get profile information
```python
bezos_information = scraper.get_profile_info("JeffBezos")
```
Parameters:
- username: username of the page to scrape
- max_retries: max retries to scrape a page. Default is 5
- instance: Nitter instance to use. Default is None
- mode: mode of fetching profile info. 'simple' for basic info, 'detail' for detailed info including following and followers lists. Default is 'simple'
Returns a dictionary of the profile's information.
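A short sketch using the 'detail' mode described above (the exact structure of the returned dictionary is not documented here, so only the top-level result is printed):

```python
# Fetch detailed profile info, including following and followers lists.
github_info = scraper.get_profile_info("github", mode='detail', max_retries=3)
print(github_info)
```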
#### Multiprocessing
As with term scraping, you can also get info from multiple profiles at once using multiprocessing:
```python
usernames = ["x", "github"]
results = scraper.get_profile_info(usernames)
```
Each user will be scraped in a different process. The result will be a list of dictionaries, one for each user.
The multiprocessing code needs to run inside an `if __name__ == "__main__"` block to avoid errors. With multiprocessing, only full logging is supported. Also, the number of processes is limited to the number of available cores on your machine.
NOTE: using multiprocessing on public instances is highly discouraged since it puts too much load on the servers and could potentially also get you rate limited. Please only use it on your local instance.
### Get random Nitter instance
```python
random_instance = scraper.get_random_instance()
```
Returns a random Nitter instance.
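This pairs naturally with the `instance` parameter of the other methods; a brief sketch:

```python
# Pick a random public instance once, then reuse it for subsequent calls.
instance = scraper.get_random_instance()
tweets = scraper.get_tweets("github", mode='hashtag', instance=instance)
```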
## Note
Due to recent changes on Twitter's side, some Nitter instances may not work properly even if they are marked as "working" on Nitter's wiki. If you have trouble scraping with a certain instance, try changing it and check if the problem persists.
## To do list
- [ ] Add scraping of individual posts with comments
Raw data:

```json
{
  "_id": null,
  "home_page": null,
  "name": "ntscraper",
  "maintainer": null,
  "docs_url": null,
  "requires_python": null,
  "maintainer_email": null,
  "keywords": "twitter, nitter, scraping",
  "author": "Lorenzo Bocchi",
  "author_email": "lorenzobocchi99@yahoo.com",
  "download_url": "https://files.pythonhosted.org/packages/67/82/6ff69ebca0cb6829de3ac653226eb3bc1d48d92ba4c526c48e9c3cb3d6f7/ntscraper-0.3.17.tar.gz",
  "platform": null,
"description": "# Unofficial Nitter scraper\r\n\r\n## Note\r\n\r\nTwitter has recently made some changes which affected every third party Twitter client, including Nitter. As a result, most Nitter instances have shut down or will shut down shortly. Even local instances are affected by this, so you may not be able to scrape as many tweets as expected, if at all.\r\n\r\n## The scraper\r\n\r\nThis is a simple library to scrape Nitter instances for tweets. It can:\r\n\r\n- search and scrape tweets with a certain term\r\n\r\n- search and scrape tweets with a certain hashtag\r\n\r\n- scrape tweets from a user profile\r\n\r\n- get profile information of a user, such as display name, username, number of tweets, profile picture ...\r\n\r\nIf the instance to use is not provided to the scraper, it will use a random public instance. If you can, please host your own instance in order to avoid overloading the public ones and letting Nitter stay alive for everyone. You can read more about that here: https://github.com/zedeus/nitter#installation.\r\n\r\n---\r\n\r\n## Installation\r\n\r\n```\r\npip install ntscraper\r\n```\r\n\r\n## How to use\r\n\r\nFirst, initialize the library:\r\n\r\n```python\r\nfrom ntscraper import Nitter\r\n\r\nscraper = Nitter(log_level=1, skip_instance_check=False)\r\n```\r\nThe valid logging levels are:\r\n- None = no logs\r\n- 0 = only warning and error logs\r\n- 1 = previous + informational logs (default)\r\n\r\nThe `skip_instance_check` parameter is used to skip the check of the Nitter instances altogether during the execution of the script. If you use your own instance or trust the instance you are relying on, then you can skip set it to 'True', otherwise it's better to leave it to false.\r\n\r\nThen, choose the proper function for what you want to do from the following.\r\n\r\n### Scrape tweets\r\n\r\n```python\r\ngithub_hash_tweets = scraper.get_tweets(\"github\", mode='hashtag')\r\n\r\nbezos_tweets = scraper.get_tweets(\"JeffBezos\", mode='user')\r\n```\r\n\r\nParameters:\r\n- term: search term\r\n- mode: modality to scrape the tweets. Default is 'term' which will look for tweets containing the search term. Other modes are 'hashtag' to search for a hashtag and 'user' to scrape tweets from a user profile\r\n- number: number of tweets to scrape. Default is -1 (no limit).\r\n- since: date to start scraping from, formatted as YYYY-MM-DD. Default is None\r\n- until: date to stop scraping at, formatted as YYYY-MM-DD. Default is None\r\n- near: location to search tweets from. Default is None (anywhere)\r\n- language: language of the tweets to search. Default is None (any language). The language must be specified as a 2-letter ISO 639-1 code (e.g. 'en' for English, 'es' for Spanish, 'fr' for French ...)\r\n- to: user to which the tweets are directed. Default is None (any user). For example, if you want to search for tweets directed to @github, you would set this parameter to 'github'\r\n- replies: whether to include replies in the search. If 'filters' or 'exclude' are set, this is overridden. Default is False\r\n- filters: list of filters to apply to the search. Default is None. Valid filters are: 'nativeretweets', 'media', 'videos', 'news', 'verified', 'native_video', 'replies', 'links', 'images', 'safe', 'quote', 'pro_video'\r\n- exclude: list of filters to exclude from the search. Default is None. Valid filters are the same as above\r\n- max_retries: max retries to scrape a page. Default is 5\r\n- instance: Nitter instance to use. 
Default is None and will be chosen at random\r\n\r\nReturns a dictionary with tweets and threads for the term.\r\n\r\n#### Multiprocessing\r\n\r\nYou can also scrape multiple terms at once using multiprocessing:\r\n\r\n```python\r\nterms = [\"github\", \"bezos\", \"musk\"]\r\n\r\nresults = scraper.get_tweets(terms, mode='term')\r\n```\r\n\r\nEach term will be scraped in a different process. The result will be a list of dictionaries, one for each term.\r\n\r\nThe multiprocessing code needs to run in a `if __name__ == \"__main__\"` block to avoid errors. With multiprocessing, only full logging is supported. Also, the number of processes is limited to the number of available cores on your machine.\r\n\r\nNOTE: using multiprocessing on public instances is highly discouraged since it puts too much load on the servers and could potentially also get you rate limited. Please only use it on your local instance.\r\n\r\n### Get single tweet\r\n\r\n```python\r\ntweet = scraper.get_tweet_by_id(\"x\", \"1826317783430303888\")\r\n```\r\n\r\nParameters:\r\n- username: username of the tweet's author\r\n- tweet_id: ID of the tweet\r\n- instane: Nitter instance to use. Default is None\r\n- max_retries: max retries to scrape a page. Default is 5\r\n\r\nReturns a dictionary with the tweet's content.\r\n\r\n### Get profile information\r\n\r\n```python\r\nbezos_information = scraper.get_profile_info(\"JeffBezos\")\r\n```\r\n\r\nParameters:\r\n- username: username of the page to scrape\r\n- max_retries: max retries to scrape a page. Default is 5\r\n- instance: Nitter instance to use. Default is None\r\n- mode: mode of fetching profile info. 'simple' for basic info, 'detail' for detailed info including following and followers lists. Default is 'simple'\r\n\r\nReturns a dictionary of the profile's information.\r\n\r\n#### Multiprocessing\r\n\r\nAs for the term scraping, you can also get info from multiple profiles at once using multiprocessing:\r\n\r\n```python\r\nusernames = [\"x\", \"github\"]\r\n\r\nresults = scraper.get_profile_info(usernames)\r\n```\r\n\r\nEach user will be scraped in a different process. The result will be a list of dictionaries, one for each user.\r\n\r\nThe multiprocessing code needs to run in a `if __name__ == \"__main__\"` block to avoid errors. With multiprocessing, only full logging is supported. Also, the number of processes is limited to the number of available cores on your machine.\r\n\r\nNOTE: using multiprocessing on public instances is highly discouraged since it puts too much load on the servers and could potentially also get you rate limited. Please only use it on your local instance.\r\n\r\n### Get random Nitter instance\r\n\r\n```python\r\nrandom_instance = scraper.get_random_instance()\r\n```\r\n\r\nReturns a random Nitter instance.\r\n\r\n## Note\r\n\r\nDue to recent changes on Twitter's side, some Nitter instances may not work properly even if they are marked as \"working\" on Nitter's wiki. If you have trouble scraping with a certain instance, try changing it and check if the problem persists.\r\n\r\n## To do list\r\n\r\n- [ ] Add scraping of individual posts with comments\r\n",
"bugtrack_url": null,
"license": "MIT",
"summary": "Unofficial library to scrape Twitter profiles and posts from Nitter instances",
"version": "0.3.17",
"project_urls": {
"Documentation": "https://github.com/bocchilorenzo/ntscraper",
"Homepage": "https://github.com/bocchilorenzo/ntscraper",
"Source": "https://github.com/bocchilorenzo/ntscraper"
},
"split_keywords": [
"twitter",
" nitter",
" scraping"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "5531c93c687501a2cd243078631c9cdd88cd759a98c320a36dc19366d0e1e708",
"md5": "778ca1d8f9af34e2dc5092a57045c244",
"sha256": "9e4f5bf1d44c73a41aec734e73e51d450ae3e793429706f31202976d5568ad03"
},
"downloads": -1,
"filename": "ntscraper-0.3.17-py3-none-any.whl",
"has_sig": false,
"md5_digest": "778ca1d8f9af34e2dc5092a57045c244",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": null,
"size": 12691,
"upload_time": "2024-09-01T08:18:50",
"upload_time_iso_8601": "2024-09-01T08:18:50.776870Z",
"url": "https://files.pythonhosted.org/packages/55/31/c93c687501a2cd243078631c9cdd88cd759a98c320a36dc19366d0e1e708/ntscraper-0.3.17-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "67826ff69ebca0cb6829de3ac653226eb3bc1d48d92ba4c526c48e9c3cb3d6f7",
"md5": "a2ae454e7d0f834781c1634ddad499c3",
"sha256": "6d957fa2f9ed51e701c3533472b73bdfb4f5816a29032b3cbac70b09d8e62235"
},
"downloads": -1,
"filename": "ntscraper-0.3.17.tar.gz",
"has_sig": false,
"md5_digest": "a2ae454e7d0f834781c1634ddad499c3",
"packagetype": "sdist",
"python_version": "source",
"requires_python": null,
"size": 14543,
"upload_time": "2024-09-01T08:18:52",
"upload_time_iso_8601": "2024-09-01T08:18:52.343668Z",
"url": "https://files.pythonhosted.org/packages/67/82/6ff69ebca0cb6829de3ac653226eb3bc1d48d92ba4c526c48e9c3cb3d6f7/ntscraper-0.3.17.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-09-01 08:18:52",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "bocchilorenzo",
"github_project": "ntscraper",
"travis_ci": false,
"coveralls": false,
"github_actions": false,
"requirements": [],
"lcname": "ntscraper"
}