linkedin-scraper4

Name: linkedin-scraper4
Version: 3.0.0
Home page: https://github.com/joeyism/linkedin_scraper
Summary: Scrapes user data from Linkedin
Upload time: 2023-01-11 05:44:44
Author: Joey Sham
Keywords: linkedin, scraping, scraper
Requirements: selenium, requests, lxml
# Linkedin Scraper

Scrapes Linkedin User Data

## Installation

```bash
pip3 install --user linkedin_scraper
```

Versions **2.0.0** and earlier were called `linkedin_user_scraper` and can be installed via `pip3 install --user linkedin_user_scraper`

## Setup
First, you must set your chromedriver location:

```bash
export CHROMEDRIVER=~/chromedriver
```
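
If you prefer to configure the driver in code instead of through the environment variable, a minimal sketch (assuming Selenium 4's `Service` API; the chromedriver path is a placeholder) looks like this:

```python
import os
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Fall back to ~/chromedriver if CHROMEDRIVER isn't set (placeholder path).
chromedriver_path = os.environ.get("CHROMEDRIVER", os.path.expanduser("~/chromedriver"))
driver = webdriver.Chrome(service=Service(chromedriver_path))
```

This driver can then be passed into `Person(...)` or `Company(...)` as shown in the sections below.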

## Usage
To use it, just instantiate the relevant class.

### Sample Usage
```python
from linkedin_scraper import Person, actions
from selenium import webdriver
driver = webdriver.Chrome()

email = "some-email@email.address"
password = "password123"
actions.login(driver, email, password) # if email and password aren't given, you'll be prompted in the terminal
person = Person("https://www.linkedin.com/in/andre-iguodala-65b48ab5", driver=driver)
```

**NOTE**: The account used to log in should have its language set to English to make sure everything works as expected.
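
After scraping, the fields documented in the API section below are populated on the `Person` object; a minimal sketch of inspecting them (continuing from the sample above):

```python
# `person` was scraped above; these attributes are documented in the API section.
print(person.name)
print(person.job_title)            # most recent job title
print(person.company)              # most recent company or institution
print(len(person.experiences))     # list of linkedin_scraper.scraper.Experience
print(len(person.educations))      # list of linkedin_scraper.scraper.Education
```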

### User Scraping

```python
from linkedin_scraper import Person
person = Person("https://www.linkedin.com/in/andre-iguodala-65b48ab5")
```

### Company Scraping

```python
from linkedin_scraper import Company
company = Company("https://ca.linkedin.com/company/google")
```
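
The scraped `Company` object exposes the fields documented in the API section below; a minimal sketch of reading a few of them:

```python
# `company` was scraped above; these attributes are documented in the API section.
print(company.name)
print(company.website)
print(company.headquarters)
print(company.company_size)
print(len(company.showcase_pages))
```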

### Scraping sites where login is required first
1. Run `ipython` or `python`
2. In `ipython`/`python`, run the following code (you can modify it if you need to specify your driver):
```python
from linkedin_scraper import Person
from selenium import webdriver
driver = webdriver.Chrome()
person = Person("https://www.linkedin.com/in/andre-iguodala-65b48ab5", driver=driver, scrape=False)
```
3. Log in to LinkedIn
4. [OPTIONAL] Log out of LinkedIn
5. In the same `ipython`/`python` session, run:
```python
person.scrape()
```

The reason is that LinkedIn has recently blocked people from viewing certain profiles without having previously signed in. Setting `scrape=False` prevents the profile from being scraped automatically, but Chrome will still open the LinkedIn page. You can log in and log out; the cookie will stay in the browser and it won't affect your profile views. Then, when you run `person.scrape()`, it'll scrape the profile and close the browser. If you want to keep the browser open so you can scrape other profiles, run it as

```python
person.scrape(close_on_complete=False)
```

so it doesn't close.

**NOTE**: For version >= `2.1.0`, scraping can also occur while logged in. Beware that users will be able to see that you viewed their profile.
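
Putting these steps together, a minimal end-to-end sketch of the deferred-scraping flow (using the same placeholder profile URL as above):

```python
from linkedin_scraper import Person
from selenium import webdriver

driver = webdriver.Chrome()

# scrape=False only opens the page; nothing is scraped yet.
person = Person("https://www.linkedin.com/in/andre-iguodala-65b48ab5",
                driver=driver, scrape=False)

# ... log in (and optionally log out) manually in the opened browser ...

# Scrape now, and keep the browser open so the same driver can be reused.
person.scrape(close_on_complete=False)
```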

### Scraping sites and login automatically
From version **2.4.0** on, `actions` is part of the library and allows signing into LinkedIn first. The email and password can be passed as arguments to the function; if not provided, you will be prompted for both in the terminal.

```python
from linkedin_scraper import Person, actions
from selenium import webdriver
driver = webdriver.Chrome()
email = "some-email@email.address"
password = "password123"
actions.login(driver, email, password) # if email and password aren't given, you'll be prompted in the terminal
person = Person("https://www.linkedin.com/in/andre-iguodala-65b48ab5", driver=driver)
```
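
To avoid hard-coding credentials in a script, one option (my own sketch, not part of the library; the environment variable names are hypothetical) is to read them from the environment or prompt with `getpass`:

```python
import os
from getpass import getpass

from linkedin_scraper import Person, actions
from selenium import webdriver

driver = webdriver.Chrome()

# LINKEDIN_EMAIL / LINKEDIN_PASSWORD are hypothetical variable names.
email = os.environ.get("LINKEDIN_EMAIL") or input("Email: ")
password = os.environ.get("LINKEDIN_PASSWORD") or getpass("Password: ")

actions.login(driver, email, password)
person = Person("https://www.linkedin.com/in/andre-iguodala-65b48ab5", driver=driver)
```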


## API

### Person
A Person object can be created with the following inputs:

```python
Person(linkedin_url=None, name=None, about=[], experiences=[], educations=[], interests=[], accomplishments=[], company=None, job_title=None, driver=None, scrape=True)
```
#### `linkedin_url`
This is the LinkedIn URL of their profile

#### `name`
This is the name of the person

#### `about`
This is the small paragraph about the person

#### `experiences`
These are the past experiences they have. A list of `linkedin_scraper.scraper.Experience`

#### `educations`
These are the past educations they have. A list of `linkedin_scraper.scraper.Education`

#### `interests`
These are the interests they have. A list of `linkedin_scraper.scraper.Interest`

#### `accomplishments`
These are the accomplishments they have. A list of `linkedin_scraper.scraper.Accomplishment`

#### `company`
This is the most recent company or institution they have worked at.

#### `job_title`
This is the most recent job title they have.

#### `driver`
This is the driver used to scrape the LinkedIn profile. A Chrome driver is created by default; however, if a driver is passed in, that will be used instead.

For example
```python
driver = webdriver.Chrome()
person = Person("https://www.linkedin.com/in/andre-iguodala-65b48ab5", driver=driver)
```

#### `scrape`
When this is **True**, the scraping happens automatically when the object is created. To scrape later instead, call the `scrape()` method on the `Person` object.


### `scrape(close_on_complete=True)`
This is the meat of the code; executing this function scrapes the profile. If *close_on_complete* is True (the default), the browser closes upon completion. If you want to scrape other profiles afterwards, set it to False so you can keep using the same driver.
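
For example, to scrape several profiles with one driver, keep the browser open between calls (the URLs below are placeholders):

```python
from linkedin_scraper import Person, actions
from selenium import webdriver

driver = webdriver.Chrome()
actions.login(driver)  # prompts for email and password in the terminal

# Placeholder profile URLs; the same driver is reused for each one.
urls = [
    "https://www.linkedin.com/in/profile-one",
    "https://www.linkedin.com/in/profile-two",
]

people = []
for i, url in enumerate(urls):
    person = Person(url, driver=driver, scrape=False)
    # Close the browser only after the last profile.
    person.scrape(close_on_complete=(i == len(urls) - 1))
    people.append(person)
```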

 


### Company

```python
Company(linkedin_url=None, name=None, about_us=None, website=None, headquarters=None, founded=None, company_type=None, company_size=None, specialties=None, showcase_pages=[], affiliated_companies=[], driver=None, scrape=True, get_employees=True)
```

#### `linkedin_url`
This is the LinkedIn URL of the company's profile

#### `name`
This is the name of the company

#### `about_us`
The description of the company

#### `website`
The website of the company

#### `headquarters`
The headquarters location of the company

#### `founded`
When the company was founded

#### `company_type`
The type of the company

#### `company_size`
How many people are employed at the company

#### `specialties`
What the company specializes in

#### `showcase_pages`
Pages that the company owns to showcase their products

#### `affiliated_companies`
Other companies that are affiliated with this one

#### `driver`
This is the driver used to scrape the LinkedIn page. A Chrome driver is created by default; however, if a driver is passed in, that will be used instead.

#### `get_employees`
Whether to get all the employees of the company

For example
```python
driver = webdriver.Chrome()
company = Company("https://ca.linkedin.com/company/google", driver=driver)
```


### `scrape(close_on_complete=True)`
This is the meat of the code; executing this function scrapes the company. If *close_on_complete* is True (the default), the browser closes upon completion. If you want to scrape other companies afterwards, set it to False so you can keep using the same driver.
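
Similarly, several companies can be scraped with one driver; setting `get_employees=False` (a constructor option documented above) keeps each pass shorter. The URLs below are placeholders:

```python
from linkedin_scraper import Company
from selenium import webdriver

driver = webdriver.Chrome()

# Placeholder company URLs; the same driver is reused for each one.
urls = [
    "https://www.linkedin.com/company/company-one",
    "https://www.linkedin.com/company/company-two",
]

companies = []
for i, url in enumerate(urls):
    company = Company(url, driver=driver, scrape=False, get_employees=False)
    # Close the browser only after the last company.
    company.scrape(close_on_complete=(i == len(urls) - 1))
    companies.append(company)
```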

## Contribution

<a href="https://www.buymeacoffee.com/joeyism" target="_blank"><img src="https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png" alt="Buy Me A Coffee" style="height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;" ></a>

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/joeyism/linkedin_scraper",
    "name": "linkedin-scraper4",
    "maintainer": "",
    "docs_url": null,
    "requires_python": "",
    "maintainer_email": "",
    "keywords": "linkedin,scraping,scraper",
    "author": "Joey Sham",
    "author_email": "sham.joey@gmail.com",
    "download_url": "https://github.com/joeyism/linkedin_scraper/dist/3.0.0.tar.gz",
    "platform": null,
    "description": "# Linkedin Scraper\n\nScrapes Linkedin User Data\n\n## Installation\n\n```bash\npip3 install --user linkedin_scraper\n```\n\nVersion **2.0.0** and before is called `linkedin_user_scraper` and can be installed via `pip3 install --user linkedin_user_scraper`\n\n## Setup\nFirst, you must set your chromedriver location by\n\n```bash\nexport CHROMEDRIVER=~/chromedriver\n```\n\n## Usage\nTo use it, just create the class.\n\n### Sample Usage\n```python\nfrom linkedin_scraper import Person, actions\nfrom selenium import webdriver\ndriver = webdriver.Chrome()\n\nemail = \"some-email@email.address\"\npassword = \"password123\"\nactions.login(driver, email, password) # if email and password isnt given, it'll prompt in terminal\nperson = Person(\"https://www.linkedin.com/in/andre-iguodala-65b48ab5\", driver=driver)\n```\n\n**NOTE**: The account used to log-in should have it's language set English to make sure everything works as expected.\n\n### User Scraping\n\n```python\nfrom linkedin_scraper import Person\nperson = Person(\"https://www.linkedin.com/in/andre-iguodala-65b48ab5\")\n```\n\n### Company Scraping\n\n```python\nfrom linkedin_scraper import Company\ncompany = Company(\"https://ca.linkedin.com/company/google\")\n```\n\n### Scraping sites where login is required first\n1. Run `ipython` or `python`\n2. In `ipython`/`python`, run the following code (you can modify it if you need to specify your driver)\n3. \n```python\nfrom linkedin_scraper import Person\nfrom selenium import webdriver\ndriver = webdriver.Chrome()\nperson = Person(\"https://www.linkedin.com/in/andre-iguodala-65b48ab5\", driver = driver, scrape=False)\n```\n4. Login to Linkedin\n5. [OPTIONAL] Logout of Linkedin\n6. In the same `ipython`/`python` code, run\n```python\nperson.scrape()\n```\n\nThe reason is that LinkedIn has recently blocked people from viewing certain profiles without having previously signed in. So by setting `scrape=False`, it doesn't automatically scrape the profile, but Chrome will open the linkedin page anyways. You can login and logout, and the cookie will stay in the browser and it won't affect your profile views. Then when you run `person.scrape()`, it'll scrape and close the browser. If you want to keep the browser on so you can scrape others, run it as \n\n**NOTE**: For version >= `2.1.0`, scraping can also occur while logged in. Beware that users will be able to see that you viewed their profile.\n\n```python\nperson.scrape(close_on_complete=False)\n``` \nso it doesn't close.\n\n### Scraping sites and login automatically\nFrom verison **2.4.0** on, `actions` is a part of the library that allows signing into Linkedin first. The email and password can be provided as a variable into the function. 
If not provided, both will be prompted in terminal.\n\n```python\nfrom linkedin_scraper import Person, actions\nfrom selenium import webdriver\ndriver = webdriver.Chrome()\nemail = \"some-email@email.address\"\npassword = \"password123\"\nactions.login(driver, email, password) # if email and password isnt given, it'll prompt in terminal\nperson = Person(\"https://www.linkedin.com/in/andre-iguodala-65b48ab5\", driver=driver)\n```\n\n\n## API\n\n### Person\nA Person object can be created with the following inputs:\n\n```python\nPerson(linkedin_url=None, name=None, about=[], experiences=[], educations=[], interests=[], accomplishments=[], company=None, job_title=None, driver=None, scrape=True)\n```\n#### `linkedin_url`\nThis is the linkedin url of their profile\n\n#### `name`\nThis is the name of the person\n\n#### `about`\nThis is the small paragraph about the person\n\n#### `experiences`\nThis is the past experiences they have. A list of `linkedin_scraper.scraper.Experience`\n\n#### `educations`\nThis is the past educations they have. A list of `linkedin_scraper.scraper.Education`\n\n#### `interests`\nThis is the interests they have. A list of `linkedin_scraper.scraper.Interest`\n\n#### `accomplishment`\nThis is the accomplishments they have. A list of `linkedin_scraper.scraper.Accomplishment`\n\n### `company`\nThis the most recent company or institution they have worked at. \n\n### `job_title`\nThis the most recent job title they have. \n\n#### `driver`\nThis is the driver from which to scraper the Linkedin profile. A driver using Chrome is created by default. However, if a driver is passed in, that will be used instead.\n\nFor example\n```python\ndriver = webdriver.Chrome()\nperson = Person(\"https://www.linkedin.com/in/andre-iguodala-65b48ab5\", driver = driver)\n```\n\n#### `scrape`\nWhen this is **True**, the scraping happens automatically. To scrape afterwards, that can be run by the `scrape()` function from the `Person` object.\n\n\n### `scrape(close_on_complete=True)`\nThis is the meat of the code, where execution of this function scrapes the profile. If *close_on_complete* is True (which it is by default), then the browser will close upon completion. If scraping of other profiles are desired, then you might want to set that to false so you can keep using the same driver.\n\n \n\n\n### Company\n\n```python\nCompany(linkedin_url=None, name=None, about_us=None, website=None, headquarters=None, founded=None, company_type=None, company_size=None, specialties=None, showcase_pages=[], affiliated_companies=[], driver=None, scrape=True, get_employees=True)\n```\n\n#### `linkedin_url`\nThis is the linkedin url of their profile\n\n#### `name`\nThis is the name of the company\n\n#### `about_us`\nThe description of the company\n\n#### `website`\nThe website of the company\n\n#### `headquarters`\nThe headquarters location of the company\n\n#### `founded`\nWhen the company was founded\n\n#### `company_type`\nThe type of the company\n\n#### `company_size`\nHow many people are employeed at the company\n\n#### `specialties`\nWhat the company specializes in\n\n#### `showcase_pages`\nPages that the company owns to showcase their products\n\n#### `affiliated_companies`\nOther companies that are affiliated with this one\n\n#### `driver`\nThis is the driver from which to scraper the Linkedin profile. A driver using Chrome is created by default. 
However, if a driver is passed in, that will be used instead.\n\n#### `get_employees`\nWhether to get all the employees of company\n\nFor example\n```python\ndriver = webdriver.Chrome()\ncompany = Company(\"https://ca.linkedin.com/company/google\", driver=driver)\n```\n\n\n### `scrape(close_on_complete=True)`\nThis is the meat of the code, where execution of this function scrapes the company. If *close_on_complete* is True (which it is by default), then the browser will close upon completion. If scraping of other companies are desired, then you might want to set that to false so you can keep using the same driver.\n\n## Contribution\n\n<a href=\"https://www.buymeacoffee.com/joeyism\" target=\"_blank\"><img src=\"https://www.buymeacoffee.com/assets/img/custom_images/orange_img.png\" alt=\"Buy Me A Coffee\" style=\"height: 41px !important;width: 174px !important;box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;-webkit-box-shadow: 0px 3px 2px 0px rgba(190, 190, 190, 0.5) !important;\" ></a>\n",
    "bugtrack_url": null,
    "license": "",
    "summary": "Scrapes user data from Linkedin",
    "version": "3.0.0",
    "split_keywords": [
        "linkedin",
        "scraping",
        "scraper"
    ],
    "urls": [
        {
            "comment_text": "",
            "digests": {
                "blake2b_256": "87e875f1e3622c5d07a01a74b32c22db1d7ea3f8f58a40ce7b5a44de6e1d1f42",
                "md5": "482aabe77570ea58d4309461783dc97a",
                "sha256": "423a9e693ebe5babe19e7a76ad814c5646fbd069629ca51bdd20d257e75c2a41"
            },
            "downloads": -1,
            "filename": "linkedin_scraper4-3.0.0-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "482aabe77570ea58d4309461783dc97a",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": null,
            "size": 24649,
            "upload_time": "2023-01-11T05:44:44",
            "upload_time_iso_8601": "2023-01-11T05:44:44.017593Z",
            "url": "https://files.pythonhosted.org/packages/87/e8/75f1e3622c5d07a01a74b32c22db1d7ea3f8f58a40ce7b5a44de6e1d1f42/linkedin_scraper4-3.0.0-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2023-01-11 05:44:44",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "github_user": "joeyism",
    "github_project": "linkedin_scraper",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": false,
    "requirements": [
        {
            "name": "selenium",
            "specs": []
        },
        {
            "name": "requests",
            "specs": []
        },
        {
            "name": "lxml",
            "specs": []
        }
    ],
    "lcname": "linkedin-scraper4"
}
        