Comic Crawler
=============
.. image:: https://travis-ci.org/eight04/ComicCrawler.svg?branch=master
:target: https://travis-ci.org/eight04/ComicCrawler
Comic Crawler is a Python script for scraping images from the web. It features a simple download manager, a library, and easy extensibility.
Download and installation (Windows)
-----------------------------------
Comic Crawler is on
`PyPI <https://pypi.python.org/pypi/comiccrawler/>`__. After installing
Python, you can install it automatically with the pip command.
Install Python
~~~~~~~~~~~~~~
You need Python 3.11 or later. The installer can be downloaded from its
`official website <https://www.python.org/>`__.

During installation, remember to check "Add python.exe to PATH", or the pip command won't be available.
Install Deno
~~~~~~~~~~~~
Comic Crawler uses Deno to analyze sites that require running JavaScript:
https://docs.deno.com/runtime/manual/getting_started/installation

On Windows 10 (1709) and later, you can install it by entering the following command in cmd:
::
winget install deno
Install Comic Crawler
~~~~~~~~~~~~~~~~~~~~~
Enter the following command in cmd:
::
pip install comiccrawler
To update:
::
pip install comiccrawler --upgrade --upgrade-strategy eager
Finally, enter the following command in cmd to run Comic Crawler:
::
comiccrawler gui
Supported domains
-----------------
.. DOMAINS
..
163.bilibili.com 8comic.com 99.hhxxee.com ac.qq.com beta.sankakucomplex.com chan.sankakucomplex.com comic.acgn.cc comic.sfacg.com comicbus.com coomer.su copymanga.com danbooru.donmai.us deviantart.com e-hentai.org exhentai.org fanbox.cc fantia.jp gelbooru.com hk.dm5.com ikanman.com imgbox.com jpg4.su kemono.party kemono.su konachan.com linevoom.line.me m.dmzj.com m.manhuabei.com m.wuyouhui.net manga.bilibili.com manhua.dmzj.com manhuagui.com nijie.info pixabay.com raw.senmanga.com seemh.com seiga.nicovideo.jp smp.yoedge.com tel.dm5.com tsundora.com tuchong.com tumblr.com tw.weibo.com twitter.com wix.com www.177pic.info www.1manhua.net www.33am.cn www.36rm.cn www.99comic.com www.aacomic.com www.artstation.com www.buka.cn www.cartoonmad.com www.chuixue.com www.chuixue.net www.cocomanhua.com www.comicabc.com www.comicvip.com www.dm5.com www.dmzj.com www.facebook.com www.flickr.com www.gufengmh.com www.gufengmh8.com www.hhcomic.cc www.hheess.com www.hhmmoo.com www.hhssee.com www.hhxiee.com www.iibq.com www.instagram.com www.mangacopy.com www.manhuadui.com www.manhuaren.com www.mh160.com www.mhgui.com www.ohmanhua.com www.pixiv.net www.sankakucomplex.com www.setnmh.com www.tohomh.com www.tohomh123.com www.xznj120.com x.com yande.re
.. END DOMAINS
Usage
-----
As a CLI tool:
::
    Usage:
      comiccrawler [--profile=<profile>] (
        domains |
        download <url> [--dest=<save_path>] |
        gui
      )
      comiccrawler (--help | --version)

    Commands:
      domains      List supported sites
      download     Download the specified <url>
      gui          Launch the main window

    Options:
      --profile    Specify the profile folder (default: "~/comiccrawler")
      --dest       Set the download directory (default: ".")
      --help       Show this help message
      --version    Show the version
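
For example, to download into a specific folder, or to launch the GUI with a custom profile (the URL below is only illustrative):

::

    comiccrawler download http://example.com/comic --dest ./comics
    comiccrawler --profile=./my-profile gui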
Or you can use it in your Python script:
.. code:: python
from comiccrawler.mission import Mission
from comiccrawler.analyzer import Analyzer
from comiccrawler.crawler import download
# create a mission
m = Mission(url="http://example.com")
Analyzer(m).analyze()
# select the episodes you want
for ep in m.episodes:
if ep.title != "chapter 123":
ep.skip = True
# download to savepath
download(m, "path/to/save")
Graphical interface
-------------------
.. figure:: http://i.imgur.com/ZzF0YFx.png
:alt: Main window
- Paste a URL into the text field and click "加入連結" (add link), or press Enter.
- If the clipboard contains a supported URL and the text field is empty, the URL is pasted automatically.
- Right-click a mission to add it to the library. Missions in the library are checked for updates every time the program starts.
Configuration file
------------------
::
    [DEFAULT]
    ; The command to run after a download completes. {target} is replaced
    ; with the absolute path of the mission folder.
    runafterdownload = 7z a "{target}.zip" "{target}"

    ; Automatically check the library for updates on startup
    libraryautocheck = true

    ; Update check interval (in hours)
    autocheck_interval = 24

    ; Download destination folder. Relative paths are resolved against the
    ; profile folder.
    savepath = download

    ; Enable grabber debug logging
    errorlog = false

    ; Auto-save every 5 minutes
    autosave = 5

    ; Save files under their original filenames instead of page numbers.
    ; Strongly discouraged; see https://github.com/eight04/ComicCrawler/issues/90
    originalfilename = false

    ; Automatically reformat numbers in episode titles, e.g. for zero-padding.
    ; Example: 第1集 -> 第001集
    ; For the format spec, see https://docs.python.org/3/library/string.html#format-specification-mini-language
    ; Note: this affects every number in the filename, including mixed
    ; alphanumeric IDs such as those used by instagram.
    titlenumberformat = {:03d}

    ; Use an http/https proxy for connections
    proxy = 127.0.0.1:1080

    ; Select all episodes by default when adding a new mission
    selectall = true

    ; Don't create a subfolder per episode; put all images directly in the
    ; mission folder
    noepfolder = true

    ; What to do when a duplicate mission is added
    ; update: check for updates
    ; reselect_episodes: re-select episodes
    mission_conflict_action = update

    ; Whether to verify encrypted (SSL) connections; defaults to true
    verify = false

    ; Read cookies from a browser, using yt-dlp's cookies-from-browser
    ; https://github.com/yt-dlp/yt-dlp/blob/e5d4f11104ce7ea1717a90eea82c0f7d230ea5d5/yt_dlp/cookies.py#L109
    browser = firefox

    ; Name of the browser profile
    browser_profile = act3nn7e.default
- The settings file is located at ``~\comiccrawler\setting.ini``. You can change the default location with the ``--profile`` option at launch. (On Windows, ``~`` expands to ``%HOME%`` or ``%USERPROFILE%``.)
- Run ``comiccrawler gui`` once and close it; the settings file is generated automatically. If a Comic Crawler update introduces new settings, they are appended to the file automatically on exit.
- Individual sites have their own settings, usually login-related information.

  - Settings whose names start with curl take the curl command for the corresponding URL. Twitter example: https://github.com/eight04/ComicCrawler/issues/241#issuecomment-904411605
  - Settings whose names start with cookie take the corresponding cookie value.

- Settings take effect after a restart. If Comic Crawler is running, you can click "重載設定檔" (reload settings) to load the new settings.

  .. warning::

     If you edit and save the settings file while Comic Crawler is running and then exit, the changes are lost, because Comic Crawler writes its settings back to the file before exiting.

- Per-site settings don't affect each other. If you set savepath = a under [DEFAULT] and savepath = b under [Pixiv], everything downloaded from pixiv is saved to folder b, while everything else uses the default and is saved to folder a.
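
For instance, a minimal ``setting.ini`` expressing that layout:

::

    [DEFAULT]
    savepath = a

    [Pixiv]
    savepath = b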
Module example
--------------
Starting from version 2016.4.21, you can add your own module to ``~/comiccrawler/mods/module_name.py``.
.. code:: python
#! python3
"""
This is an example to show how to write a comiccrawler module.
"""
import re
from urllib.parse import urljoin
from comiccrawler.episode import Episode
# The headers used in the grabber method. Optional.
header = {}
# The cookies. Optional.
cookie = {}
# Match domains. Sub-domains are supported, which means "example.com" will
# match "*.example.com".
domain = ["www.example.com", "comic.example.com"]
# Module name
name = "Example"
# With noepfolder = True, Comic Crawler won't generate a subfolder for each
# episode. Optional, defaults to False.
noepfolder = False
# If False, set up the Referer header automatically to mimic browser behavior.
# If True, disable this behavior.
# Default: False
no_referer = True
# Wait 5 seconds before downloading another image. Optional, defaults to 0.
rest = 5

# Wait 5 seconds before analyzing the next page in the analyzer. Optional,
# defaults to 0.
rest_analyze = 5
# User settings which could be modified from setting.ini. The keys are
# case-sensitive.
#
# After loading the module, the config dictionary would be converted into
# a ConfigParser section data object so you can e.g. call
# config.getboolean("use_largest_image") directly.
#
# Optional.
config = {
# The config value can only be str
"use_largest_image": "true",
# Special config keys starting with `cookie_` will be automatically
# used when grabbing html or images.
"cookie_user": "user-default-value",
"cookie_hash": "hash-default-value"
}
def load_config():
"""This function will be called each time the config reloads. Optional.
"""
pass
def get_title(html, url):
"""Return mission title.
The title is used in the save path, so be sure to avoid duplicate
titles.
"""
return re.search("<h1 id='title'>(.+?)</h1>", html).group(1)
def get_episodes(html, url):
"""Return episode list.
The episode list should be sorted by date, oldest first.
If it is a multi-page list, return the URL of the next page from
get_next_page. Comic Crawler will grab the next page and call this
function again.
The `Episode` object accepts an `image` property, which can be a list of `Image`.
However, unlike `get_images`, the `Episode` object is JSON-stringified and saved
to disk, so you must only use JSON-compatible types, i.e. no `Image.get_url`.
"""
match_list = re.findall("<a href='(.+?)'>(.+?)</a>", html)
return [Episode(title, urljoin(url, ep_url))
for ep_url, title in match_list]
def get_images(html, url):
"""Get the URL of all images.
The return value could be:
- A list of image.
- A generator yielding image.
- An image, when there is only one image on the current page.
Comic Crawler treats following types as an image:
- str - the URL of the image
- callable - return a URL when called
- comiccrawler.core.Image - use it to provide customized filename.
While receiving the value, it is converted to an Image instance. See ``comiccrawler.core.Image.create()``.
If the episode has multi-pages, uses get_next_page to change page.
Use generator in caution! If the generator raises any error between
two images, next call to the generator will always result in
StopIteration, which means that Comic Crawler will think it had crawled
all images and navigate to next page. If you have to call grabhtml()
for each image (i.e. it may raise HTTPError), use a list of
callback instead!
"""
return re.findall("<img src='(.+?)'>", html)
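
# A sketch of the "list of callbacks" pattern suggested above: each callable
# fetches its page only when the downloader invokes it, so an HTTPError can be
# retried without exhausting a generator. Illustrative only; the HTML patterns
# are made up, and grabhtml would have to be imported from the grabber module.
#
# def get_images(html, url):
#     page_urls = re.findall("<a class='page' href='(.+?)'>", html)
#     return [lambda u=u: re.search("<img src='(.+?)'>", grabhtml(u)).group(1)
#             for u in page_urls]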
def get_next_page(html, url):
"""Return the URL of the next page."""
match = re.search("<a id='nextpage' href='(.+?)'>next</a>", html)
if match:
return match.group(1)
def get_next_image_page(html, url):
"""Return the URL of the next page.
If this method is defined, it will be used by the crawler and ``get_next_page`` would be ignored.
Therefore ``get_next_page`` will only be used by the analyzer.
"""
pass
def redirecthandler(response, crawler):
"""Downloader will call this hook if redirect happens during downloading
an image. Sometimes services redirects users to an unexpected URL. You
can check it here.
"""
if response.url.endswith("404.jpg"):
raise Exception("Something went wrong")
def errorhandler(error, crawler):
"""Downloader will call errorhandler if there is an error happened when
downloading image. Normally you can just ignore this function.
"""
pass
def imagehandler(ext, b):
"""If this function exists, Comic Crawler will call it before writing
the image to disk. This allows the module to modify the image after
the download.
@ext str, file extension, including ".". (e.g. ".jpg")
@b The bytes object of the image.
It should return a (modified_ext, modified_b) tuple.
"""
return (ext, b)
def grabhandler(grab_method, url, **kwargs):
"""Called when the crawler is going to make a web request. Use this hook
to override the default grabber behavior.
@grab_method function, could be ``grabhtml`` or ``grabimg``.
@url str, request URL.
@kwargs other arguments that will be passed to grabber.
Return ``None`` to fall back to the default grabber behavior.
"""
if "/api/" in url:
    kwargs["headers"] = {"some-api-header": "some-value"}
    return grab_method(url, **kwargs)
def after_request(crawler, response):
"""Called after the request is made."""
if response.url.endswith("404.jpg"):
raise Exception("Something went wrong")
def session_key(url):
"""Return a key to identify the session. If the key is the same, the
session would be shared. Otherwise, a new session would be created.
For example, you may want to separate the session between the main site
and the API endpoint.
Return None to pass the URL to the next key function.
"""
r = urlparse(url)
if r.path.startswith("/api/"):
return (r.scheme, r.netloc, "api")
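
After saving the file to ``~/comiccrawler/mods/``, a quick sanity check is to ask the module registry which module handles a given URL. A minimal sketch, assuming ``mods.get_module`` accepts a full URL (the changelog notes it supports sub-domains):

.. code:: python

    from comiccrawler.mods import get_module

    # Should resolve to the example module through its domain list.
    mod = get_module("http://comic.example.com/comic/123")
    print(mod.name if mod else "no module matched")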
Todos
-----
- Need a better error log system.
- Support pool in Sankaku.
- Add module.get_episode_id to make the module decide how to compare episodes.
- Use HEAD to grab final URL before requesting the image?
Changelog
---------
- 2024.11.14
- Fix: add language tag to seemh.
- Fix: eight module.
- Fix: sankaku.
- Fix: handle 416 content range.
- Add: new domain in kemono.
- Add: try smaller images when 404 on twitter.
- 2024.8.14
- Add: extract file extension from URL.
- Fix: 404 error in dm5.
- 2024.8.9
- Add: jpg4 module.
- Fix: domain changed to x.com in twitter.
- Fix: failed locating popular posts in sankaku.
- Fix: handle empty thumbnail in fantia.
- Fix: skip posts without files in fanbox.
- Fix: stop detecting zip as docx.
- Fix: wrong image URL in bilibili manga.
- 2024.4.11
- Fix: redirect error in sankaku.
- 2024.4.10
- Fix: limit retry delay to 10 minutes at most.
- Fix: failed handling http 206 response.
- Fix: username may contain dash in fanbox.
- Add: ``max_errors`` setting.
- Add: ability to run multiple crawlers. One for each host.
- 2024.4.2
- Fix: wrong protocol in seemh.
- Fix: failed downloading webp images.
- 2024.3.25
- Fix: skip episodes without images in kemono.
- Fix: sankaku.
- Fix: some posts are missing in twitter.
- Add: new domain kemono.su.
- Add: .clip to valid file extensions.
- Add: ability to write partial data to disk.
- Add: browser and browser_profile settings which are used to extract cookies.
- Add: after_request, session_key hooks.
- Add: session_manager for better control of api sessions.
- Change: set referer and origin header in analyzer.
- Change: wait 3 seconds after analyze error.
- 2024.1.4
- Fix: vm error in seemh.
- Change: drop imghdr.
- 2023.12.24
- Fix: bili, cartoonmad, seemh module.
- Fix: support python 3.12.
- 2023.12.11
- Fix: seemh, twitter modules.
- Add: fanbox module.
- 2023.10.11
- Fix: instagram, fantia, seemh, sanka modules.
- Add: progress bar.
- Change: switch to deno_vm.
- 2023.10.8
- Fix: unable to download bili free chapters.
- Fix: facebook module.
- Add: copymanga new domain.
- Add: kemono module.
- Add: linevoom module.
- Add: support more audio formats.
- Add: new plugin hook ``get_next_image_page``.
- 2022.11.21
- Fix: switching pages error in 8comic.
- Fix: use url path to guess extension.
- Add: allow to download txt file.
- 2022.11.11
- Fix: now danbooru requires curl.
- Fix: 8comic doesn't use ajax anymore.
- Fix: download error in instagram.
- Change: download thumbnail in fantia.
- Change: require python 3.10.
- 2022.2.6
- Fix: magic import error.
- Add: support replacing argument in runafterdownload.
- 2022.2.3
- Fix: analyze error in seemh.
- Add: support fantia.jp
- Add: support br encoding.
- Add: open episode URL on right-click when selecting episodes.
- Add: display completed episodes as green.
- Add: exponential backoff.
- Change: use curl in sankaku.
- Change: skip 404 ep on twitter.
- Change: use python-magic to detect file type.
- 2021.12.2
- Fix: empty episodes error.
- 2021.11.15
- Fix: copymanga.
- 2021.9.15
- Fix: ep order is wrong in twitter.
- Fix: copymanga.
- 2021.8.31
- Fix: hidden manga in dmzj.
- Fix: skip 404 episode in instagram.
- Fix: support multiple videos in instagram.
- Fix: failed analyzing search page in pixiv.
- Fix: empty episode error in gelbooru.
- Add: copymanga module.
- Add: twitter module.
- Add: ``grabhandler`` hook.
- Change: stop downloading if all available missions fail.
- 2020.10.29
- Fix: cartoonmad error.
- Fix: seemh error.
- Fix: show all content in gelbooru.
- Fix: qq error.
- Fix: paging issue in oh.
- Fix: deviantart error.
- Add: new domain for hhxiee.
- Add: new domain for oh.
- Change: send referer when fetching html.
- 2020.9.2
- Fix: api is changed in sankaku beta.
- Fix: avoid page limit in sankaku.
- Fix: cannot get title in weibo.
- Fix: cannot fetch nico image.
- Fix: duplicated pic in nijie.
- Fix: cannot fetch image in seemh.
- Add: oh module.
- Add: setnmh module.
- Add: manhuabei module.
- Add: module constant ``no_referer``.
- **Breaking: require Python@3.6+**
- 2020.6.3
- Fix: don't navigate to next page in danbooru.
- Fix: analyzation error in eight.
- Fix: instagram now requires login.
- Add: a ``verify`` option to disable security check.
- 2019.12.25
- Add: support search page in pixiv.
- 2019.11.19
- Fix: handle ``LastPageError`` in ``get_episodes``.
- Fix: download error in nijie.
- Fix: refetch size info if the size is unavailable in flickr.
- Fix: skip unavailable episodes in pixiv.
- Fix: handle filename with broken extension (``jpg@YYYY-mm-dd``).
- Add: instagram.
- Add: sankaku_beta.
- Add: ``redirecthandler`` hook.
- Add: contextmenu to delete missions from both managers.
- Change: decrease max retry from 10 to 3 so a broken mission will fail faster.
- 2019.11.12
- Fix: pixiv.
- Add: allow ``get_images`` to raise ``SkipPageError``.
- 2019.10.28
- Fix: download too many images in danbooru.
- 2019.10.19
- Add: manga.bilibili module.
- 2019.9.2
- Add: bilibili module.
- Add: 177pic module.
- 2019.8.19
- Fix: can't change page in danbooru.
- Fix: failed to analyze episodes in pixiv.
- Fix: download error in qq.
- Bump dependencies.
- 2019.7.1
- Add: autocheck_interval option.
- Add: manhuadui module.
- Fix: chuixue module.
- Fix: handle 404 errors in pixiv.
- 2019.5.20
- Fix: ignore empty episodes in youhui.
- 2019.5.3
- Fix: can't analyze profile URL with tags in pixiv.
- Add: pixabay module.
- 2019.3.27
- Fix: getcookie is not defined in eight.
- 2019.3.26
- Add: manhuaren module.
- Fix: failed to switch page in fb.
- 2019.3.18
- Fix: handle 403 error in artstation.
- 2019.3.13
- Add: new domain gufengmh8.com for gufeng module.
- Add: new domain tohomh123.com for toho module
- Add: new cookie igneous for exh module.
- Fix: download images in cartoonmad.
- Change: drop ck101 module.
- 2018.12.25
- Add: new domain hheess.com for hhcomic module.
- Add: new domain 36rm.cn for xznj module.
- Add: toho module.
- Fix: support new layout in dm5.
- 2018.11.18
- Add: mission_conflict_action option.
- Fix: failed to download images in qq.
- Fix: failed to download images in youhui.
- 2018.10.24
- Fix: new domain `hhmmoo.com` for hhxiee.
- Fix: ignore comments when analyzing episodes.
- 2018.9.30
- Change: prefix ep title with group name in seemh.
- 2018.9.24
- Add: support user's tag in pixiv.
- 2018.9.23
- Fix: failed to get episodes in pixiv.
- Fix: ``on_success`` is executed when analyzation failed.
- Fix: make 503 error retryable.
- 2018.9.11
- Fix: failed to get next page in gelbooru.
- Add: gufeng module.
- 2018.9.7
- Fix: domains of eight module.
- Fix: batch analyze error is not shown.
- Fix: connection error would crash the entire application.
- 2018.8.20
- Add: new option "noepfolder".
- 2018.8.11
- Fix: title and image URLs in eight.
- 2018.8.10
- Add: mh160 module.
- Add: youhui module.
- Add: grabber_cooldown module constant.
- Add: domain hk.dm5.com in dm5.
- Add: travis.
- Fix: skip 404 pages in weibo.
- Fix: guess the file extension from the content then from the header.
- Change: use a newer user agent.
- 2018.7.18
- Add: new domain in xznj120.
- Fix: get_episodes returns empty list in deviantart.
- 2018.6.21
- Add: make table sortable.
- Add: last_update attribute.
- Fix: analyze error in senmanga.
- 2018.6.14
- Revert: do not normalize whitespaces.
- Fix: escape more characters in safefilepath.
- 2018.6.8
- Refactor: comiccrawler.core is exploded.
- Fix: new interface in pixiv.
- Add: "Check update" command in the library contextmenu.
- Add: rest_analyze constant in modules.
- Drop: migrate command.
- 2018.5.24
- Fix: fail to get images from xznj.
- Refactor: split out select_episodes.
- 2018.5.13
- Add: selectall option.
- Fix: the column check button operates on a wrong range.
- Fix: the column check button appearance.
- Fix: download error in tumblr.
- 2018.5.5
- Add: range reverse.
- Add: xznj120 module.
- Add: gelbooru module.
- Fix: cannot analyze episode list in dm5.
- 2018.4.16
- Add: support user page. (weibo)
- Change: remove ``raise_429`` arg in ``grabhtml``. Add ``retry``.
- 2018.4.8
- Add: allow users to login. (tumblr)
- Add: support videos. (tumblr)
- 2018.3.18
- Fix: SMH is not defined error. (seemh) (#106)
- 2018.3.15
- Change: use chapter id in the title of the episode. (qq) (#104)
- 2018.3.9
- Fix: seemh start using https. (#103)
- Add: qq module. (#102)
- 2018.3.7
- Fix: get_episodes error in buka. Note that buka currently only shows images to its own reader app.
- Fix: can't download image in seemh (manhuagui).
- Add: SkipPageError for get_episodes.
- Add: artstation module.
- Update pylint to 1.8.2.
- 2018.1.30.2
- Fix: update seemh.
- 2018.1.30.1
- Fix: get Content-Length error.
- 2018.1.30
- Fix: verify Content-Length.
- Fix: dm5 update.
- 2017.12.15
- Fix: incorrect title in pixiv.
- 2017.12.14
- Fix: insecure_http option in tumblr doesn't work properly.
- 2017.12.9
- Add: full_size, insecure_http options to tumblr.
- Add: Support .ugoira file in pixiv.
- 2017.12.4
- Fix: download original image from tumblr. `#82 <https://github.com/eight04/ComicCrawler/issues/82>`_
- Change: add gid/token to the title in exh. `#83 <https://github.com/eight04/ComicCrawler/issues/83>`_
- 2017.11.29
- Fix: download error in cartoonmad. `#81 <https://github.com/eight04/ComicCrawler/issues/81>`_
- Add: ability to get images from ajax (dmzj). Thanks to `动漫之家助手 <https://greasyfork.org/zh-TW/scripts/33087-%E5%8A%A8%E6%BC%AB%E4%B9%8B%E5%AE%B6%E5%8A%A9%E6%89%8B>`_. `#78 <https://github.com/eight04/ComicCrawler/issues/78>`_
- 2017.9.9
- Fix: image match pattern in cartoonmad.
- 2017.9.5
- Fix: url is not unescaped correctly in sankaku.
- 2017.8.31
- Fix: match nview.js in comicbus.
- Fix: ikanman.com -> manhuagui.com.
- Fix: require login in facebook.
- 2017.8.26
- Fix: html changed in pixiv.
- 2017.8.20.1
- Fix: can't download in comicbus.
- 2017.8.20
- Fix: can't match http in deviantart.
- Fix: can't get images in eight.
- Add setting `proxy`.
- 2017.8.16
- Fix: deviantart login issue.
- 2017.8.13
- Fix: sankaku login issue. `#66 <https://github.com/eight04/ComicCrawler/issues/66>`_
- 2017.6.14
- Fix: comicbus analyze issue.
- 2017.5.29
- Fix: 99 module. `#63 <https://github.com/eight04/ComicCrawler/issues/63>`_
- 2017.5.26
- Fix: ikanman analyze issue.
- 2017.5.22
- Fix: comicbus analyze issue. `#62 <https://github.com/eight04/ComicCrawler/issues/62>`_
- 2017.5.19
- Add nijie module. `#58 <https://github.com/eight04/ComicCrawler/issues/58>`_
- Add core.clean_tags.
- Fix: check update button doesn't work after update checking failed. `#59 <https://github.com/eight04/ComicCrawler/issues/59>`_
- Fix: analyzation failed in comicbus. `#61 <https://github.com/eight04/ComicCrawler/issues/61>`_
- 2017.5.5
- Fix: use raw ``<title>`` as title in search result (pixiv).
- Add .wmv, .mov, and .psd into valid file extensions.
- 2017.4.26
- Change: use table view in dm5. `#54 <https://github.com/eight04/ComicCrawler/issues/54>`_
- Fix: runafterdownload is parsed incorrectly on windows.
- 2017.4.24
- Fix: starred expression inside list.
- 2017.4.23
- Fix: compat with python 3.4, starred expression can only occur inside function call.
- Update node_vm2 to 0.3.0.
- 2017.4.22
- Add .bmp to valid file extensions.
- Fix: unable to check update for multi-page sites.
- 2017.4.18
- Add senmanga. `#49 <https://github.com/eight04/ComicCrawler/issues/49>`_
- Add yoedge. `#47 <https://github.com/eight04/ComicCrawler/issues/47>`_
- Fix: header parser issue. See https://www.ptt.cc/bbs/Python/M.1492438624.A.BBC.html
- Fix: escape trailing dots in file path. `#46 <https://github.com/eight04/ComicCrawler/issues/46>`_
- Add: double-click to launch explorer.
- Add: batch analyze panel. `#45 <https://github.com/eight04/ComicCrawler/issues/45>`_
- 2017.4.6
- Fix: run after download doesn't work properly if path contains spaces.
- Fix: VMError with ugoku in pixiv.
- Fix: automatic update check doesn't record update time when failing.
- 2017.4.3
- Fix: analyze error in dA.
- Fix: subdomain changed in exh.
- Fix: vm error in hh.
- Add .url utils, .core.CycleList, .error.HTTPError.
- Add aacomic.
- Update pyxcute to 0.4.1.
- 2017.3.26
- Fix: cleanup the old files.
- Update pythreadworker to 0.8.0.
- 2017.3.25
- **Switch to node_vm2, drop pyexecjs.**
- Add login check in exh.
- Switch to pylint, drop pyflakes.
- Drop module manhuadao.
- Update pyxcute.
- Refactor.
- 2017.3.9
- Add --profile option. `#36 <https://github.com/eight04/ComicCrawler/issues/36>`__
- 2017.3.6
- Update seemh. `#35 <https://github.com/eight04/ComicCrawler/issues/35>`__
- Escape title in pixiv.
- Strip non-printable characters in safefilepath.
- 2017.2.5
- Add www.dmzj.com module. `#33 <https://github.com/eight04/ComicCrawler/issues/33>`__
- Fix: Sometime the title doesn't include chapter number in buka. `#33 <https://github.com/eight04/ComicCrawler/issues/33>`__
- 2017.1.10
- Add: nowebp option in ikanman. `#31 <https://github.com/eight04/ComicCrawler/issues/31>`__
- Add weibo module.
- Add tuchong module.
- Fix: update table safe_tk error.
- Change: existence check will only check original filename when originalfilename option is true.
- 2017.1.6
- Add: Table class in gui.
- Add: titlenumberformat option in setting.ini. `#30 <https://github.com/eight04/ComicCrawler/pull/30>`__ by `@kuanyui <https://github.com/kuanyui>`__.
- Change: use Table to display domain list.
- 2017.1.3.1
- Fix: schema error (konachan).
- Fix: original filename should be extracted from final url instead of request url.
- Add: now the module can specify image filename with ``comiccrawler.core.Image``.
- 2017.1.3
- Fix: original option doesn't work (exh).
- 2016.12.20
- Change how config works. This will affect the sites requiring cookie information.
- Comic Crawler can save cookie back to config now!
- Change how safefilepath works. Use escape table.
- Make io.move support folders.
- Add io.exists.
- Add migrate command.
- Add originalfilename option.
- 2016.12.6
- Fix: imghdr can't recognize .webp in Python 3.4.
- 2016.12.1
- Fix: analyze error in wix.
- Fix: ``mimetypes.guess_extension`` is not reliable with ``application/octet-stream``
- Add ``.webp`` to valid file type.
- 2016.11.27
- Fix hhxiee module. Use new domain www.hhssee.com.
- 2016.11.25
- Support cartoonmad.
- 2016.11.2
- Fix: scaling issue on Windows XP.
- Fix: login-check in deviantart.
- Use desktop3 to open folder. `#16 <https://github.com/eight04/ComicCrawler/issues/16>`__
- Fix: GUI crashed if scaling < 1.
- 2016.10.8
- Fix: math.inf is only available in python 3.5.
- 2016.10.4
- Fix: can not download video in flickr.
- Fix: use cookie in grabimg.
- 2016.9.30
- Add ``params`` option to grabber.
- Add flickr module.
- 2016.9.27
- Fix: image pattern in buka.
- Fix: add hhcomic domain.
- 2016.9.11
- Fix: failed to read file encoded with utf-8-sig.
- Fix: ignore empty posts in tumblr.
- 2016.8.24.1
- Use better method to find next page in tumblr.
- Fix unicode referer bug in grabber.
- Update match pattern to avoid redirect in tumblr. See https://github.com/kennethreitz/requests/issues/3078.
- Fix get_title error in tumblr when the title is empty.
- 2016.8.24
- Fix 429 error still raised by analyze_info.
- Fix next page pattern in tumblr.
- 2016.8.22
- Support hhxiee.
- Fix get_episodes error in ck101.
- Suppress 429 error when analyzing.
- Change title format in yande.re. Support pools.
- 2016.8.19
- Fix title not found error in dm5.
- 2016.8.8
- Use a safer method in write_file.
- Add mission_lock for thread safe.
- Use str as runafterdownload.
- Use float as autosave.
- Add debug log.
- Rewrite analyzer. Episodes shouldn't have same title.
- 2016.7.2
- Fix context menu popup bug on linux.
- Fix update checking stops after finished mission.
- 2016.7.1
- Use cross-platform startfile (incomplete).
- Use `clam` theme for GUI under linux.
- Fix the error message of update checking failure.
- Update checking won't block GUI thread anymore.
- Update `pythreadworker` to 0.6.
- Fix import syntax in `gui.get_scale`.
- 2016.6.30
- Support high dpi displays.
- Don't show error in library thread. Only warn the user when update checking fails.
- 2016.6.25
- API changed. Now the errorhandler will receive ``(error, crawler)`` instead of ``(error, episode)``.
- Add errorhandler in seemh. It will try to use different host if downloading failed.
- Drop mission to the bottom when update checking failed. Update checking process will stop if it had retried 10 times.
- 2016.6.14.1
- Pass pyflakes and fix a bunch of typos.
- 2016.6.14
- Fix: always re-init in crawlpage loop!
- 2016.6.12
- Use GBK instead of GB2312 in grabber.
- Add the ability to get title from non-user page in nico.
- Fix: unable to add mission in chuixue.
- Fix: unable to download image in nico.
- Fix: episode is lost after changing the name of the mission.
- Fix: unable to recheck update after login error.
- 2016.6.10
- Change how to handle HTTP 429 error. Let the mission drop.
- Add login check in sankaku.
- Support .jpe(.jpg), .webm file types.
- 2016.6.4
- Change how saved data works. Comic Crawler will write inactive mission data into ``~/comiccrawler/pool/`` folder to save the memory.
- Fix regex in dA.
- Fix sankaku's hang. Do not suppress 429 error in grabber.
- 2016.6.3
- Minor change to save/load file function to avoid unnecessary copy.
- Comic Crawler will now execute `runafterdownload` command both from the default section and the module section.
- 2016.5.30
- Add module.imagehandler, which can edit the image file before saving to disk.
- Write frame info into ugoku zip in pixiv.
- 2016.5.28
- Change how config works. Now you can specify different settings in each section. (e.g. use a different savepath per module)
- Save frame info about ugoku in pixiv.
- Drop config.update in module.load_config.
- Try to support additional info in get_images.
- 2016.5.24
- Support buka.
- 2016.5.20
- Find server by executing js in seemh.
- 2016.5.15
- Fix dependency scheme.
- 2016.5.2
- Use `Content-Type` header to guess file extension.
- Fix a bug that the thread is not removed when receiving DOWNLOAD_INVALID.
- Pause download when meeting 509 error in exh.
- Add .mp4 to valid file types.
- 2016.5.1.1
- Fix a bug that Comic Crawler doesn't retry when the first connection failed.
- Add `Episode.image`, so the module can supply image list during constructing Episode.
- 2016.5.1
- Support wix.com.
- 2016.4.27
- Domain changed in seemh.
- 2016.4.26.1
- Fix charset encoding bug.
- 2016.4.26
- Fix config bug with upper-case key.
- Check urls of old episodes to avoid unnecessary analyzing.
- Add option to get original image in exh. It will cost 5x of viewing limit.
- 2016.4.22.3
- Fix retry-after hanged bug.
- Fix config override bug. Use ``ComicCrawler`` section to replace ``DEFAULT`` section.
- Support account login in sankaku.
- Support HTTP error log before raising.
- Show next page url while analyzing.
- 2016.4.22.2
- Move to pythreadworker 0.5.0
- 2016.4.22.1
- Support loading module in python3.4.
- 2016.4.22
- Fix setup.py. Use find_packages.
- 2016.4.21
- Big rewrite.
- Move to requests.
- Move to pythreadworker 0.4.0.
- Add the ability to load module from ``~/comiccrawler/mods``
- Drop migrate command.
- 2016.4.20
- Update install_requires.
- 2016.4.13
- Fix facebook bug.
- Move to doit.
- 2016.4.8
- Fix get_next_page error.
- Fix key error in CLI.
- 2016.4.4
- Use new API!
- Analyzer will check the last episode to decide whether to analyze all pages.
- Support multiple images in one page.
- Change how getimgurl and getimgurls work.
- 2016.4.2
- Add tumblr module.
- Enhance: support sub-domain in ``mods.get_module``.
- 2016.3.27
- Fix: handle deleted post (konachan).
- Fix: enhance dialog. try to fix `#8 <https://github.com/eight04/ComicCrawler/issues/8>`__.
- 2016.2.29
- Fix: use latest comicview.js (8comic).
- 2016.2.27
- Fix: lastcheckupdate doesn't work.
- Add: comicbus domain (8comic).
- 2016.2.15.1
- Fix: can not add mission.
- 2016.2.15
- Add `lastcheckupdate` setting. Now the library will only automatically check updates once a day.
- Refactor. Use MissionProxy, Mission doesn't inherit UserWorker anymore.
- 2016.1.26
- Change: checking updates won't affect mission which is downloading.
- Fix: page won't skip if the savepath contains "~".
- Add: a new url pattern in facebook.
- 2016.1.17
- Fix: an url matching issue in Facebook.
- Enhance: downloader will loop through other episodes rather than stop current mission on crawlpage error.
- 2016.1.15
- Fix: ComicCrawler doesn't save session during downloading.
- 2016.1.13
- Handle HTTPError 429.
- 2016.1.12
- Add facebook module.
- Add ``circular`` option in module. Which should be set to ``True`` if downloader doesn't know which is the last page of the album. (e.g. Facebook)
- 2016.1.3
- Fix downloading failed in seemh.
- 2015.12.9
- Fix build-time dependencies.
- 2015.11.8
- Fix next page issue in danbooru.
- 2015.10.25
- Support nico seiga.
- Try to fix MemoryError when writing files.
- 2015.10.9
- Fix unicode range error in gui. See http://is.gd/F6JfjD
- 2015.10.8
- Fix an error where episodes couldn't be skipped in the pixiv module.
- 2015.10.7
- Fix an error where the folder can't be created if the title contains "{}" characters.
- 2015.10.6
- Support search page in pixiv module.
- 2015.9.29
- Support http://www.chuixue.com.
- 2015.8.7
- Fixed sfacg bug.
- 2015.7.31
- Fixed: libraryautocheck option does not work.
- 2015.7.23
- Add module dmzj\_m. Some expunged manga may be accessed from mobile page. ``http://manhua.dmzj.com/name => http://m.dmzj.com/info/name.html``
- 2015.7.22
- Fix bug in module eight.
- 2015.7.17
- Fix episode selecting bug.
- 2015.7.16
- Added:
- Cleanup unused missions after session loads.
- Handle ajax episode list in seemh.
- Show an error if no update to download when clicking "download updates".
- Show an error if failing to load session.
- Changed:
- Always use "UPDATE" state if the mission is not complete after re-analyzing.
- Create backup if failing to load session instead of moving them to "invalid-save" folder.
- Check edit flag in MissionManager.save().
- Fixed:
- Can not download "updated" mission.
- Update checking will stop on error.
- Sankaku module is still using old method to create Episode.
- 2015.7.15
- Add module seemh.
- 2015.7.14
- Refactor: pull out download\_manager, mission\_manager.
- Enhance content\_write: use os.replace.
- Fix mission\_manager save loop interval.
- 2015.7.7
- Fix danbooru bug.
- Fix dmzj bug.
- 2015.7.6
- Fix getepisodes regex in exh.
- 2015.7.5
- Add error handler to dm5.
- Add error handler to acgn.
- 2015.7.4
- Support imgbox.
- 2015.6.22
- Support tsundora.
- 2015.6.18
- Fix url quoting issue.
- 2015.6.14
- Enhance ``safeprint``. Use ``echo`` command.
- Enhance ``content_write``. Add ``append=False`` option.
- Enhance ``Crawler``. Cache imgurl.
- Enhance ``grabber``. Add ``cookie=None`` option. Change errorlog behavior.
- Fix ``grabber`` unicode encoding issue.
- Some module update.
- 2015.6.13
- Fix ``clean_finished``
- Fix ``console_download``
- Enhance ``get_by_state``
Author
------
- eight <eight04@gmail.com>
Raw data
{
"_id": null,
"home_page": "https://github.com/eight04/ComicCrawler",
"name": "comiccrawler",
"maintainer": null,
"docs_url": null,
"requires_python": ">=3.10",
"maintainer_email": null,
"keywords": "image, crawler",
"author": "eight",
"author_email": "eight04@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/e1/11/6b4c791fba4ebcf32761420c4417729ceb57ec8b1a24ff7540abd9410002/comiccrawler-2024.11.14.tar.gz",
"platform": null,
"description": "Comic Crawler\r\n=============\r\n\r\n.. image:: https://travis-ci.org/eight04/ComicCrawler.svg?branch=master\r\n :target: https://travis-ci.org/eight04/ComicCrawler\r\n\r\nComic Crawler \u662f\u7528\u4f86\u6252\u5716\u7684\u4e00\u652f Python Script\u3002\u64c1\u6709\u7c21\u6613\u7684\u4e0b\u8f09\u7ba1\u7406\u54e1\u3001\u5716\u66f8\u9928\u529f\u80fd\u3001 \u8207\u65b9\u4fbf\u7684\u64f4\u5145\u80fd\u529b\u3002\r\n\r\n\u4e0b\u8f09\u548c\u5b89\u88dd\uff08Windows\uff09\r\n---------------------\r\n\r\nComic Crawler is on\r\n`PyPI <https://pypi.python.org/pypi/comiccrawler/>`__. \u5b89\u88dd\u5b8c\r\npython \u5f8c\uff0c\u53ef\u4ee5\u76f4\u63a5\u7528 pip \u6307\u4ee4\u81ea\u52d5\u5b89\u88dd\u3002\r\n\r\nInstall Python\r\n~~~~~~~~~~~~~~\r\n\r\n\u4f60\u9700\u8981 Python 3.11 \u4ee5\u4e0a\u3002\u5b89\u88dd\u6a94\u53ef\u4ee5\u5f9e\u5b83\u7684\r\n`\u5b98\u65b9\u7db2\u7ad9 <https://www.python.org/>`__ \u4e0b\u8f09\u3002\r\n\r\n\u5b89\u88dd\u6642\u8a18\u5f97\u8981\u9078\u300cAdd python.exe to path\u300d\uff0c\u624d\u80fd\u4f7f\u7528 pip \u6307\u4ee4\u3002\r\n\r\nInstall Deno\r\n~~~~~~~~~~~~\r\n\r\nComic Crawler \u4f7f\u7528 Deno \u4f86\u5206\u6790\u9700\u8981\u57f7\u884c JavaScript \u7684\u7db2\u7ad9\ufe30\r\nhttps://docs.deno.com/runtime/manual/getting_started/installation\r\n\r\nWindows 10 (1709) \u4ee5\u4e0a\u7684\u7248\u672c\uff0c\u53ef\u4ee5\u76f4\u63a5\u5728 cmd \u5e95\u4e0b\u8f38\u5165\u4ee5\u4e0b\u6307\u4ee4\u5b89\u88dd\ufe30\r\n\r\n::\r\n\r\n winget install deno\r\n\r\nInstall Comic Crawler\r\n~~~~~~~~~~~~~~~~~~~~~\r\n\r\n\u5728 cmd \u5e95\u4e0b\u8f38\u5165\u4ee5\u4e0b\u6307\u4ee4\ufe30\r\n\r\n::\r\n\r\n pip install comiccrawler\r\n\r\n\u66f4\u65b0\u6642\ufe30\r\n\r\n::\r\n\r\n pip install comiccrawler --upgrade --upgrade-strategy eager\r\n \r\n\u6700\u5f8c\u5728 cmd \u5e95\u4e0b\u8f38\u5165\u4ee5\u4e0b\u6307\u4ee4\u57f7\u884c Comic Crawler\ufe30\r\n\r\n::\r\n\r\n comiccrawler gui\r\n \r\n\r\nSupported domains\r\n-----------------\r\n\r\n.. DOMAINS\r\n..\r\n\r\n 163.bilibili.com 8comic.com 99.hhxxee.com ac.qq.com beta.sankakucomplex.com chan.sankakucomplex.com comic.acgn.cc comic.sfacg.com comicbus.com coomer.su copymanga.com danbooru.donmai.us deviantart.com e-hentai.org exhentai.org fanbox.cc fantia.jp gelbooru.com hk.dm5.com ikanman.com imgbox.com jpg4.su kemono.party kemono.su konachan.com linevoom.line.me m.dmzj.com m.manhuabei.com m.wuyouhui.net manga.bilibili.com manhua.dmzj.com manhuagui.com nijie.info pixabay.com raw.senmanga.com seemh.com seiga.nicovideo.jp smp.yoedge.com tel.dm5.com tsundora.com tuchong.com tumblr.com tw.weibo.com twitter.com wix.com www.177pic.info www.1manhua.net www.33am.cn www.36rm.cn www.99comic.com www.aacomic.com www.artstation.com www.buka.cn www.cartoonmad.com www.chuixue.com www.chuixue.net www.cocomanhua.com www.comicabc.com www.comicvip.com www.dm5.com www.dmzj.com www.facebook.com www.flickr.com www.gufengmh.com www.gufengmh8.com www.hhcomic.cc www.hheess.com www.hhmmoo.com www.hhssee.com www.hhxiee.com www.iibq.com www.instagram.com www.mangacopy.com www.manhuadui.com www.manhuaren.com www.mh160.com www.mhgui.com www.ohmanhua.com www.pixiv.net www.sankakucomplex.com www.setnmh.com www.tohomh.com www.tohomh123.com www.xznj120.com x.com yande.re\r\n\r\n.. 
END DOMAINS\r\n\r\n\u4f7f\u7528\u8aaa\u660e\r\n--------\r\n\r\nAs a CLI tool:\r\n\r\n::\r\n\r\n Usage:\r\n comiccrawler [--profile=<profile>] (\r\n domains |\r\n download <url> [--dest=<save_path>] |\r\n gui\r\n )\r\n comiccrawler (--help | --version)\r\n\r\n Commands:\r\n domains \u5217\u51fa\u652f\u63f4\u7684\u7db2\u5740\r\n download \u4e0b\u8f09\u6307\u5b9a\u7684 url\r\n gui \u555f\u52d5\u4e3b\u8996\u7a97\r\n\r\n Options:\r\n --profile \u6307\u5b9a\u8a2d\u5b9a\u6a94\u5b58\u653e\u7684\u8cc7\u6599\u593e\uff08\u9810\u8a2d\u70ba \"~/comiccrawler\"\uff09\r\n --dest \u8a2d\u5b9a\u4e0b\u8f09\u76ee\u9304\uff08\u9810\u8a2d\u70ba \".\"\uff09\r\n --help \u986f\u793a\u5e6b\u52a9\u8a0a\u606f\r\n --version \u986f\u793a\u7248\u672c \r\n \r\nor you can use it in your python script:\r\n\r\n.. code:: python\r\n\r\n from comiccrawler.mission import Mission\r\n from comiccrawler.analyzer import Analyzer\r\n from comiccrawler.crawler import download\r\n \r\n # create a mission\r\n m = Mission(url=\"http://example.com\")\r\n Analyzer(m).analyze()\r\n \r\n # select the episodes you want\r\n for ep in m.episodes:\r\n if ep.title != \"chapter 123\":\r\n ep.skip = True\r\n \r\n # download to savepath\r\n download(m, \"path/to/save\")\r\n \r\n\u5716\u5f62\u4ecb\u9762\r\n--------\r\n\r\n.. figure:: http://i.imgur.com/ZzF0YFx.png\r\n :alt: \u4e3b\u8996\u7a97\r\n\r\n- \u5728\u6587\u5b57\u6b04\u8cbc\u4e0a\u7db2\u5740\u5f8c\u9ede\u300c\u52a0\u5165\u9023\u7d50\u300d\u6216\u662f\u6309 Enter\r\n- \u82e5\u662f\u526a\u8cbc\u7c3f\u88e1\u6709\u652f\u63f4\u7684\u7db2\u5740\uff0c\u4e14\u6587\u5b57\u6b04\u540c\u6642\u662f\u7a7a\u7684\uff0c\u7a0b\u5f0f\u6703\u81ea\u52d5\u8cbc\u4e0a\r\n- \u5c0d\u8457\u4efb\u52d9\u53f3\u9375\uff0c\u53ef\u4ee5\u9078\u64c7\u628a\u4efb\u52d9\u52a0\u5165\u5716\u66f8\u9928\u3002\u5716\u66f8\u9928\u5167\u7684\u4efb\u52d9\uff0c\u5728\u6bcf\u6b21\u7a0b\u5f0f\u555f\u52d5\u6642\uff0c\u90fd\u6703\u6aa2\u67e5\u662f\u5426\u6709\u66f4\u65b0\u3002\r\n\r\n\u8a2d\u5b9a\u6a94\r\n------\r\n\r\n::\r\n\r\n [DEFAULT]\r\n ; \u8a2d\u5b9a\u4e0b\u8f09\u5b8c\u6210\u5f8c\u8981\u57f7\u884c\u7684\u7a0b\u5f0f\uff0c{target} \u6703\u88ab\u66ff\u63db\u6210\u4efb\u52d9\u8cc7\u6599\u593e\u7684\u7d55\u5c0d\u8def\u5f91\r\n runafterdownload = 7z a \"{target}.zip\" \"{target}\"\r\n\r\n ; \u555f\u52d5\u6642\u81ea\u52d5\u6aa2\u67e5\u5716\u66f8\u9928\u66f4\u65b0\r\n libraryautocheck = true\r\n \r\n ; \u6aa2\u67e5\u66f4\u65b0\u9593\u9694\uff08\u55ae\u4f4d\ufe30\u5c0f\u6642\uff09\r\n autocheck_interval = 24\r\n\r\n ; \u4e0b\u8f09\u76ee\u7684\u8cc7\u6599\u593e\u3002\u76f8\u5c0d\u8def\u5f91\u6703\u6839\u64da\u8a2d\u5b9a\u6a94\u8cc7\u6599\u593e\u7684\u4f4d\u7f6e\u3002\r\n savepath = download\r\n\r\n ; \u958b\u555f grabber \u5075\u932f\r\n errorlog = false\r\n\r\n ; \u6bcf\u9694 5 \u5206\u9418\u81ea\u52d5\u5b58\u6a94\r\n autosave = 5\r\n \r\n ; \u5b58\u6a94\u6642\u4f7f\u7528\u4e0b\u8f09\u6642\u7684\u539f\u59cb\u6a94\u540d\u800c\u4e0d\u7528\u9801\u78bc\r\n ; \u5f37\u5217\u5efa\u8b70\u4e0d\u8981\u4f7f\u7528\u9019\u500b\u9078\u9805\uff0c\u898b https://github.com/eight04/ComicCrawler/issues/90\r\n originalfilename = false\r\n \r\n ; \u81ea\u52d5\u8f49\u63db\u96c6\u6578\u540d\u7a31\u4e2d\u6578\u5b57\u7684\u683c\u5f0f\uff0c\u53ef\u4ee5\u7528\u65bc\u88dc0\r\n ; \u4f8b\ufe30\u7b2c1\u96c6 -> \u7b2c001\u96c6\r\n ; \u8a73\u7d30\u7684\u683c\u5f0f\u6307\u5b9a\u65b9\u5f0f\u8acb\u53c3\u8003 https://docs.python.org/3/library/string.html#format-specification-mini-language\r\n ; 
\u6ce8\u610f\ufe30\u9019\u500b\u8a2d\u5b9a\u6703\u5f71\u97ff\u6a94\u540d\u4e2d\u7684\u6240\u6709\u6578\u5b57\uff0c\u5305\u62ec\u6a94\u540d\u4e2d\u82f1\u6578\u6df7\u5408\u7684ID\u5982instagram\r\n titlenumberformat = {:03d}\r\n \r\n ; \u9023\u7dda\u6642\u4f7f\u7528 http/https proxy\r\n proxy = 127.0.0.1:1080\r\n \r\n ; \u52a0\u5165\u65b0\u4efb\u52d9\u6642\uff0c\u9810\u8a2d\u9078\u64c7\u6240\u6709\u96c6\u6578\r\n selectall = true\r\n \r\n ; \u4e0d\u8981\u6839\u64da\u5404\u96c6\u540d\u7a31\u5efa\u7acb\u5b50\u8cc7\u6599\u593e\uff0c\u5c07\u6240\u6709\u5716\u7247\u653e\u5728\u4efb\u52d9\u8cc7\u6599\u593e\u5167\r\n noepfolder = true\r\n \r\n ; \u9047\u5230\u91cd\u8907\u4efb\u52d9\u6642\u7684\u52d5\u4f5c\r\n ; update: \u6aa2\u67e5\u66f4\u65b0\r\n ; reselect_episodes: \u91cd\u65b0\u9078\u53d6\u96c6\u6578\r\n mission_conflict_action = update\r\n \r\n ; \u662f\u5426\u9a57\u8b49\u52a0\u5bc6\u9023\u7dda\uff08SSL\uff09\uff0c\u9810\u8a2d\u662f true\r\n verify = false\r\n\r\n ; \u5f9e\u700f\u89bd\u5668\u4e2d\u8b80\u53d6 cookies\uff0c\u4f7f\u7528 yt-dlp \u7684 cookies-from-browser\r\n ; https://github.com/yt-dlp/yt-dlp/blob/e5d4f11104ce7ea1717a90eea82c0f7d230ea5d5/yt_dlp/cookies.py#L109\r\n browser = firefox\r\n \r\n ; \u700f\u89bd\u5668 profile \u7684\u540d\u7a31\r\n browser_profile = act3nn7e.default\r\n\r\n- \u8a2d\u5b9a\u6a94\u4f4d\u65bc ``~\\comiccrawler\\setting.ini``\u3002\u53ef\u4ee5\u5728\u57f7\u884c\u6642\u6307\u5b9a ``--profile`` \u9078\u9805\u4ee5\u8b8a\u66f4\u9810\u8a2d\u7684\u4f4d\u7f6e\u3002\uff08\u5728 Windows \u4e2d ``~`` \u6703\u88ab\u5c55\u958b\u70ba ``%HOME%`` \u6216 ``%USERPROFILE%``\uff09\r\n- \u57f7\u884c\u4e00\u6b21 ``comiccrawler gui`` \u5f8c\u95dc\u9589\uff0c\u8a2d\u5b9a\u6a94\u6703\u81ea\u52d5\u7522\u751f\u3002\u82e5 Comic Crawler \u66f4\u65b0\u5f8c\u6709\u65b0\u589e\u7684\u8a2d\u5b9a\uff0c\u5728\u95dc\u9589\u5f8c\u6703\u81ea\u52d5\u5c07\u65b0\u8a2d\u5b9a\u52a0\u5165\u8a2d\u5b9a\u6a94\u3002\r\n- \u5404\u5225\u7684\u7db2\u7ad9\u6703\u6709\u81ea\u5df1\u7684\u8a2d\u5b9a\uff0c\u901a\u5e38\u662f\u8981\u586b\u5165\u4e00\u4e9b\u767b\u5165\u76f8\u95dc\u8cc7\u8a0a\r\n \r\n - \u4ee5 curl \u958b\u982d\u7684\u8a2d\u5b9a\uff0c\u8981\u586b\u5165\u5c0d\u61c9\u7db2\u5740\u7684 curl \u6307\u4ee4\u3002\u4ee5 twitter \u70ba\u4f8b\ufe30https://github.com/eight04/ComicCrawler/issues/241#issuecomment-904411605\r\n - \u4ee5 cookie \u958b\u982d\u7684\u8a2d\u5b9a\uff0c\u8981\u586b\u5165\u5c0d\u61c9\u7684 cookie\u3002\r\n\r\n- \u8a2d\u5b9a\u6a94\u6703\u5728\u91cd\u65b0\u555f\u52d5\u5f8c\u751f\u6548\u3002\u82e5 ComicCrawler \u6b63\u5728\u57f7\u884c\u4e2d\uff0c\u53ef\u4ee5\u9ede\u300c\u91cd\u8f09\u8a2d\u5b9a\u6a94\u300d\u4f86\u8f09\u5165\u65b0\u8a2d\u5b9a\r\n\r\n .. warning::\r\n\r\n \u82e5\u5728\u57f7\u884c\u6642\uff0c\u4fee\u6539\u8a2d\u5b9a\u6a94\u4e26\u5132\u5b58\uff0c\u63a5\u8457\u7d50\u675f ComicCrawler\uff0c\u4fee\u6539\u6703\u907a\u5931\u3002\u56e0\u70ba ComicCrawler \u7d50\u675f\u524d\u6703\u628a\u8a2d\u5b9a\u5beb\u56de\u8a2d\u5b9a\u6a94\u3002\r\n- \u5404\u5225\u7db2\u7ad9\u7684\u8a2d\u5b9a\u4e0d\u6703\u4e92\u76f8\u5f71\u97ff\u3002\u5047\u5982\u5728 [DEFAULT] \u8a2d savepath = a\uff1b\u5728 [Pixiv] \u8a2d savepath = b\uff0c\u90a3\u9ebc\u5f9e pixiv \u4e0b\u8f09\u7684\u90fd\u6703\u5b58\u5230 b \u8cc7\u6599\u593e\uff0c\u5176\u5b83\u7684\u5c31\u7528\u9810\u8a2d\u503c\uff0c\u5b58\u5230 a \u8cc7\u6599\u593e\u3002\r\n\r\nModule example\r\n--------------\r\n\r\nStarting from version 2016.4.21, you can add your own module to ``~/comiccrawler/mods/module_name.py``.\r\n\r\n.. code:: python\r\n\r\n #! 
python3\r\n \"\"\"\r\n This is an example to show how to write a comiccrawler module.\r\n\r\n \"\"\"\r\n\r\n import re\r\n from urllib.parse import urljoin\r\n from comiccrawler.episode import Episode\r\n\r\n # The header used in grabber method. Optional.\r\n header = {}\r\n \r\n # The cookies. Optional.\r\n cookie = {}\r\n\r\n # Match domain. Support sub-domain, which means \"example.com\" will match\r\n # \"*.example.com\"\r\n domain = [\"www.example.com\", \"comic.example.com\"]\r\n\r\n # Module name\r\n name = \"Example\"\r\n\r\n # With noepfolder = True, Comic Crawler won't generate subfolder for each\r\n # episode. Optional, default to False.\r\n noepfolder = False\r\n \r\n # If False then setup the referer header automatically to mimic browser behavior.\r\n # If True then disable this behavior.\r\n # Default: False\r\n no_referer = True\r\n\r\n # Wait 5 seconds before downloading another image. Optional, default to 0.\r\n rest = 5\r\n \r\n # Wait 5 seconds before analyzing the next page in the analyzer. Optional,\r\n # default to 0.\r\n rest_analyze = 5\r\n\r\n # User settings which could be modified from setting.ini. The keys are\r\n # case-sensitive.\r\n # \r\n # After loading the module, the config dictionary would be converted into \r\n # a ConfigParser section data object so you can e.g. call\r\n # config.getboolean(\"use_large_image\") directly.\r\n #\r\n # Optional.\r\n config = {\r\n # The config value can only be str\r\n \"use_largest_image\": \"true\",\r\n \r\n # These special config starting with `cookie__` will be automatically \r\n # used when grabbing html or image.\r\n \"cookie_user\": \"user-default-value\",\r\n \"cookie_hash\": \"hash-default-value\"\r\n }\r\n \r\n def load_config():\r\n \"\"\"This function will be called each time the config reloads. Optional.\r\n \"\"\"\r\n pass\r\n\r\n def get_title(html, url):\r\n \"\"\"Return mission title.\r\n\r\n The title would be used in saving filepath, so be sure to avoid\r\n duplicated title.\r\n \"\"\"\r\n return re.search(\"<h1 id='title'>(.+?)</h1>\", html).group(1)\r\n\r\n def get_episodes(html, url):\r\n \"\"\"Return episode list.\r\n\r\n The episode list should be sorted by date, oldest first.\r\n If is a multi-page list, specify the URL of the next page in\r\n get_next_page. Comic Crawler would grab the next page and call this\r\n function again.\r\n\r\n The `Episode` object accepts an `image` property which can be a list of `Image`.\r\n However, unlike `get_images`, the `Episode` object is JSON-stringified and saved\r\n to the disk, therefore you must only use JSON-compatible types i.e. no `Image.get_url`.\r\n \"\"\"\r\n match_list = re.findall(\"<a href='(.+?)'>(.+?)</a>\", html)\r\n return [Episode(title, urljoin(url, ep_url))\r\n for ep_url, title in match_list]\r\n\r\n def get_images(html, url):\r\n \"\"\"Get the URL of all images.\r\n \r\n The return value could be:\r\n\r\n - A list of image.\r\n - A generator yielding image.\r\n - An image, when there is only one image on the current page.\r\n \r\n Comic Crawler treats following types as an image:\r\n \r\n - str - the URL of the image\r\n - callable - return a URL when called\r\n - comiccrawler.core.Image - use it to provide customized filename.\r\n \r\n While receiving the value, it is converted to an Image instance. See ``comiccrawler.core.Image.create()``.\r\n \r\n If the episode has multi-pages, uses get_next_page to change page.\r\n \r\n Use generator in caution! 
If the generator raises any error between\r\n two images, next call to the generator will always result in\r\n StopIteration, which means that Comic Crawler will think it had crawled\r\n all images and navigate to next page. If you have to call grabhtml()\r\n for each image (i.e. it may raise HTTPError), use a list of\r\n callback instead!\r\n \"\"\"\r\n return re.findall(\"<img src='(.+?)'>\", html)\r\n\r\n def get_next_page(html, url):\r\n \"\"\"Return the URL of the next page.\"\"\"\r\n match = re.search(\"<a id='nextpage' href='(.+?)'>next</a>\", html)\r\n if match:\r\n return match.group(1)\r\n\r\n def get_next_image_page(html, url):\r\n \"\"\"Return the URL of the next page.\r\n\r\n If this method is defined, it will be used by the crawler and ``get_next_page`` would be ignored.\r\n Therefore ``get_next_page`` will only be used by the analyzer.\r\n \"\"\"\r\n pass\r\n \r\n def redirecthandler(response, crawler):\r\n \"\"\"Downloader will call this hook if redirect happens during downloading\r\n an image. Sometimes services redirects users to an unexpected URL. You\r\n can check it here.\r\n \"\"\"\r\n if response.url.endswith(\"404.jpg\"):\r\n raise Exception(\"Something went wrong\")\r\n\r\n def errorhandler(error, crawler):\r\n \"\"\"Downloader will call errorhandler if there is an error happened when\r\n downloading image. Normally you can just ignore this function.\r\n \"\"\"\r\n pass\r\n \r\n def imagehandler(ext, b):\r\n \"\"\"If this function exists, Comic Crawler will call it before writing\r\n the image to disk. This allow the module to modify the image after\r\n the download.\r\n \r\n @ext str, file extension, including \".\". (e.g. \".jpg\")\r\n @b The bytes object of the image.\r\n\r\n It should return a (modified_ext, modified_b) tuple.\r\n \"\"\"\r\n return (ext, b)\r\n \r\n def grabhandler(grab_method, url, **kwargs):\r\n \"\"\"Called when the crawler is going to make a web request. Use this hook\r\n to override the default grabber behavior.\r\n \r\n @grab_method function, could be ``grabhtml`` or ``grabimg``.\r\n @url str, request URL.\r\n @kwargs other arguments that will be passed to grabber.\r\n \r\n By returning ``None``\r\n \"\"\"\r\n if \"/api/\" in URL:\r\n kwargs[\"headers\"] = {\"some-api-header\": \"some-value\"}\r\n return grab_method(url, **kwargs)\r\n\r\n def after_request(crawler, response):\r\n \"\"\"Called after the request is made.\"\"\"\r\n if response.url.endswith(\"404.jpg\"):\r\n raise Exception(\"Something went wrong\")\r\n\r\n def session_key(url):\r\n \"\"\"Return a key to identify the session. If the key is the same, the\r\n session would be shared. 
Otherwise, a new session would be created.\r\n\r\n For example, you may want to separate the session between the main site\r\n and the API endpoint.\r\n\r\n Return None to pass the URL to next key function.\r\n \"\"\"\r\n r = urlparse(url)\r\n if r.path.startswith(\"/api/\"):\r\n return (r.scheme, r.netloc, \"api\")\r\n \r\nTodos\r\n-----\r\n\r\n- Need a better error log system.\r\n- Support pool in Sankaku.\r\n- Add module.get_episode_id to make the module decide how to compare episodes.\r\n- Use HEAD to grab final URL before requesting the image?\r\n\r\nChangelog\r\n---------\r\n\r\n- 2024.11.14\r\n\r\n - Fix: add language tag to seemh.\r\n - Fix: eight module.\r\n - Fix: sankaku.\r\n - Fix: handler 416 content range.\r\n - Add: new domain in kemono.\r\n - Add: try smaller images when 404 on twitter.\r\n\r\n- 2024.8.14\r\n\r\n - Add: extract file extension from URL.\r\n - Fix: 404 error in dm5.\r\n\r\n- 2024.8.9\r\n\r\n - Add: jpg4 module.\r\n - Fix: domain changed to x.com in twitter.\r\n - Fix: failed locating popular posts in sankaku.\r\n - Fix: handle empty thumbnail in fantia.\r\n - Fix: skip posts without files in fanbox.\r\n - Fix: stop detecting zip as docx.\r\n - Fix: wrong image URL in bilibili manga.\r\n\r\n- 2024.4.11\r\n\r\n - Fix: redirect error in sankaku.\r\n\r\n- 2024.4.10\r\n\r\n - Fix: limit retry delay to 10 minutes at most.\r\n - Fix: failed handling http 206 response.\r\n - Fix: username may conatain dash in fanbox.\r\n - Add: ``max_errors`` setting.\r\n - Add: ability to run multiple crawlers. One for each host.\r\n\r\n- 2024.4.2\r\n\r\n - Fix: wrong protocol in seemh.\r\n - Fix: failed downloading webp images.\r\n\r\n- 2024.3.25\r\n\r\n - Fix: skip episodes without images in kemono.\r\n - Fix: sankaku.\r\n - Fix: some posts are missing in twitter.\r\n - Add: new domain kemono.su.\r\n - Add: .clip to valid file extensions.\r\n - Add: ability to write partial data to disk.\r\n - Add: browser and browser_profile settings which are used to extract cookies.\r\n - Add: after_request, session_key hooks.\r\n - Add: session_manager for better control of api sessions.\r\n - Change: set referer and origin header in analyzer.\r\n - Change: wait 3 seconds after analyze error.\r\n\r\n- 2024.1.4\r\n\r\n - Fix: vm error in seemh.\r\n - Change: drop imghdr.\r\n\r\n- 2023.12.24\r\n\r\n - Fix: bili, cartoonmad, seemh module.\r\n - Fix: support python 3.12.\r\n\r\n- 2023.12.11\r\n\r\n - Fix: seemh, twitter modules.\r\n - Add: fanbox module.\r\n\r\n- 2023.10.11\r\n\r\n - Fix: instagram, fantia, seemh, sanka modules.\r\n - Add: progress bar.\r\n - Change: switch to deno_vm.\r\n\r\n- 2023.10.8\r\n\r\n - Fix: unable to download bili free chapters.\r\n - Fix: facebook module.\r\n - Add: copymanga new domain.\r\n - Add: kemono module.\r\n - Add: linevoom module.\r\n - Add: support more audio formats.\r\n - Add: new plugin hook ``get_next_image_page``.\r\n \r\n\r\n- 2022.11.21\r\n\r\n - Fix: switching pages error in 8comic.\r\n - Fix: use url path to guess extension.\r\n - Add: allow to download txt file.\r\n\r\n- 2022.11.11\r\n\r\n - Fix: now danbooru requires curl.\r\n - Fix: 8comic doesn't use ajax anymore.\r\n - Fix: download error in instagram.\r\n - Change: download thumbnail in fantia.\r\n - Change: require python 3.10.\r\n\r\n- 2022.2.6\r\n\r\n - Fix: magic import error.\r\n\r\n - Add: support replacing argument in runafterdownload.\r\n\r\n- 2022.2.3\r\n\r\n - Fix: analyze error in seemh.\r\n - Add: support fantia.jp\r\n\r\n - Add: support br encoding.\r\n\r\n - Add: open episode 
URL on right-click when selecting episodes.\r\n\r\n - Add: display completed episodes as green.\r\n\r\n - Add: exponential backoff.\r\n\r\n - Change: use curl in sankaku.\r\n\r\n - Change: skip 404 ep on twitter.\r\n - Change: use python-magic to detect file type.\r\n\r\n- 2021.12.2\r\n\r\n - Fix: empty episodes error.\r\n\r\n- 2021.11.15\r\n\r\n - Fix: copymanga.\r\n\r\n- 2021.9.15\r\n\r\n - Fix: ep order is wrong in twitter.\r\n - Fix: copymanga.\r\n\r\n- 2021.8.31\r\n\r\n - Fix: hidden manga in dmzj.\r\n - Fix: skip 404 episode in instagram.\r\n - Fix: support multiple videos in instagram.\r\n - Fix: failed analyzing search page in pixiv.\r\n - Fix: empty episode error in gelbooru.\r\n - Add: copymanga module.\r\n - Add: twitter module.\r\n - Add: ``grabhandler`` hook.\r\n - Change: stop downloading if all available missions fail.\r\n\r\n- 2020.10.29\r\n\r\n - Fix: cartoonmad error.\r\n - Fix: seemh error.\r\n - Fix: show all content in gelbooru.\r\n - Fix: qq error.\r\n - Fix: paging issue in oh.\r\n - Fix: deviantart error.\r\n - Add: new domain for hhxiee.\r\n - Add: new domain for oh.\r\n - Change: send referer when fetching html.\r\n\r\n- 2020.9.2\r\n\r\n - Fix: api is changed in sankaku beta.\r\n - Fix: avoid page limit in sankaku.\r\n - Fix: cannot get title in weibo.\r\n - Fix: cannot fetch nico image.\r\n - Fix: duplicated pic in nijie.\r\n - Fix: cannot fetch image in seemh.\r\n - Add: oh module.\r\n - Add: setnmh module.\r\n - Add: manhuabei module.\r\n - Add: module constant ``no_referer``.\r\n - **Breaking: require Python@3.6+**\r\n\r\n- 2020.6.3\r\n\r\n - Fix: don't navigate to next page in danbooru.\r\n - Fix: analyzation error in eight.\r\n - Fix: instagram now requires login.\r\n - Add: a ``verify`` option to disable security check.\r\n\r\n- 2019.12.25\r\n\r\n - Add: support search page in pixiv.\r\n\r\n- 2019.11.19\r\n\r\n - Fix: handle ``LastPageError`` in ``get_episodes``.\r\n - Fix: download error in nijie.\r\n - Fix: refetch size info if the size is unavailable in flickr.\r\n - Fix: skip unavailable episodes in pixiv.\r\n - Fix: handle filename with broken extension (``jpg@YYYY-mm-dd``).\r\n - Add: instagram.\r\n - Add: sankaku_beta.\r\n - Add: ``redirecthandler`` hook.\r\n - Add: contextmenu to delete missions from both managers.\r\n - Change: decrease max retry from 10 to 3 so a broken mission will fail faster.\r\n\r\n- 2019.11.12\r\n\r\n - Fix: pixiv.\r\n - Add: allow ``get_images`` to raise ``SkipPageError``.\r\n\r\n- 2019.10.28\r\n\r\n - Fix: download too many images in danbooru.\r\n\r\n- 2019.10.19\r\n\r\n - Add: manga.bilibili module.\r\n\r\n- 2019.9.2\r\n\r\n - Add: bilibili module.\r\n - Add: 177pic module.\r\n\r\n- 2019.8.19\r\n\r\n - Fix: can't change page in danbooru.\r\n - Fix: failed to analyze episodes in pixiv.\r\n - Fix: download error in qq.\r\n - Bump dependencies.\r\n\r\n- 2019.7.1\r\n\r\n - Add: autocheck_interval option.\r\n - Add: manhuadui module.\r\n - Fix: chuixue module.\r\n - Fix: handle 404 errors in pixiv.\r\n\r\n- 2019.5.20\r\n\r\n - Fix: ignore empty episodes in youhui.\r\n\r\n- 2019.5.3\r\n\r\n - Fix: can't analyze profile URL with tags in pixiv.\r\n - Add: pixabay module.\r\n\r\n- 2019.3.27\r\n\r\n - Fix: getcookie is not defined in eight.\r\n\r\n- 2019.3.26\r\n\r\n - Add: manhuaren module.\r\n - Fix: failed to switch page in fb.\r\n\r\n- 2019.3.18\r\n\r\n - Fix: handle 403 error in artstation.\r\n\r\n- 2019.3.13\r\n\r\n - Add: new domain gufengmh8.com for gufeng module.\r\n - Add: new domain tohomh123.com for toho module\r\n - 

Changelog
---------

- 2024.11.14

  - Fix: add language tag to seemh.
  - Fix: eight module.
  - Fix: sankaku.
  - Fix: handle 416 content range.
  - Add: new domain in kemono.
  - Add: try smaller images when 404 on twitter.

- 2024.8.14

  - Add: extract file extension from URL.
  - Fix: 404 error in dm5.

- 2024.8.9

  - Add: jpg4 module.
  - Fix: domain changed to x.com in twitter.
  - Fix: failed locating popular posts in sankaku.
  - Fix: handle empty thumbnail in fantia.
  - Fix: skip posts without files in fanbox.
  - Fix: stop detecting zip as docx.
  - Fix: wrong image URL in bilibili manga.

- 2024.4.11

  - Fix: redirect error in sankaku.

- 2024.4.10

  - Fix: limit retry delay to 10 minutes at most.
  - Fix: failed handling HTTP 206 response.
  - Fix: username may contain a dash in fanbox.
  - Add: ``max_errors`` setting.
  - Add: ability to run multiple crawlers, one for each host.

- 2024.4.2

  - Fix: wrong protocol in seemh.
  - Fix: failed downloading webp images.

- 2024.3.25

  - Fix: skip episodes without images in kemono.
  - Fix: sankaku.
  - Fix: some posts are missing in twitter.
  - Add: new domain kemono.su.
  - Add: .clip to valid file extensions.
  - Add: ability to write partial data to disk.
  - Add: browser and browser_profile settings, which are used to extract cookies.
  - Add: after_request, session_key hooks.
  - Add: session_manager for better control of API sessions.
  - Change: set referer and origin header in analyzer.
  - Change: wait 3 seconds after an analyze error.

- 2024.1.4

  - Fix: VM error in seemh.
  - Change: drop imghdr.

- 2023.12.24

  - Fix: bili, cartoonmad, seemh modules.
  - Fix: support Python 3.12.

- 2023.12.11

  - Fix: seemh, twitter modules.
  - Add: fanbox module.

- 2023.10.11

  - Fix: instagram, fantia, seemh, sankaku modules.
  - Add: progress bar.
  - Change: switch to deno_vm.

- 2023.10.8

  - Fix: unable to download bili free chapters.
  - Fix: facebook module.
  - Add: copymanga new domain.
  - Add: kemono module.
  - Add: linevoom module.
  - Add: support more audio formats.
  - Add: new plugin hook ``get_next_image_page``.

- 2022.11.21

  - Fix: page switching error in 8comic.
  - Fix: use URL path to guess extension.
  - Add: allow downloading txt files.

- 2022.11.11

  - Fix: now danbooru requires curl.
  - Fix: 8comic doesn't use ajax anymore.
  - Fix: download error in instagram.
  - Change: download thumbnail in fantia.
  - Change: require Python 3.10.

- 2022.2.6

  - Fix: magic import error.
  - Add: support replacing argument in runafterdownload.

- 2022.2.3

  - Fix: analyze error in seemh.
  - Add: support fantia.jp.
  - Add: support br encoding.
  - Add: open episode URL on right-click when selecting episodes.
  - Add: display completed episodes as green.
  - Add: exponential backoff.
  - Change: use curl in sankaku.
  - Change: skip 404 episodes on twitter.
  - Change: use python-magic to detect file type.

- 2021.12.2

  - Fix: empty episodes error.

- 2021.11.15

  - Fix: copymanga.

- 2021.9.15

  - Fix: episode order is wrong in twitter.
  - Fix: copymanga.

- 2021.8.31

  - Fix: hidden manga in dmzj.
  - Fix: skip 404 episodes in instagram.
  - Fix: support multiple videos in instagram.
  - Fix: failed analyzing search page in pixiv.
  - Fix: empty episode error in gelbooru.
  - Add: copymanga module.
  - Add: twitter module.
  - Add: ``grabhandler`` hook.
  - Change: stop downloading if all available missions fail.

- 2020.10.29

  - Fix: cartoonmad error.
  - Fix: seemh error.
  - Fix: show all content in gelbooru.
  - Fix: qq error.
  - Fix: paging issue in oh.
  - Fix: deviantart error.
  - Add: new domain for hhxiee.
  - Add: new domain for oh.
  - Change: send referer when fetching HTML.

- 2020.9.2

  - Fix: API changed in sankaku beta.
  - Fix: avoid page limit in sankaku.
  - Fix: cannot get title in weibo.
  - Fix: cannot fetch nico image.
  - Fix: duplicated pictures in nijie.
  - Fix: cannot fetch image in seemh.
  - Add: oh module.
  - Add: setnmh module.
  - Add: manhuabei module.
  - Add: module constant ``no_referer``.
  - **Breaking: require Python@3.6+**

- 2020.6.3

  - Fix: don't navigate to next page in danbooru.
  - Fix: analysis error in eight.
  - Fix: instagram now requires login.
  - Add: a ``verify`` option to disable the security check.

- 2019.12.25

  - Add: support search page in pixiv.

- 2019.11.19

  - Fix: handle ``LastPageError`` in ``get_episodes``.
  - Fix: download error in nijie.
  - Fix: refetch size info if the size is unavailable in flickr.
  - Fix: skip unavailable episodes in pixiv.
  - Fix: handle filenames with a broken extension (``jpg@YYYY-mm-dd``).
  - Add: instagram.
  - Add: sankaku_beta.
  - Add: ``redirecthandler`` hook.
  - Add: context menu to delete missions from both managers.
  - Change: decrease max retries from 10 to 3 so a broken mission fails faster.

- 2019.11.12

  - Fix: pixiv.
  - Add: allow ``get_images`` to raise ``SkipPageError``.

- 2019.10.28

  - Fix: downloading too many images in danbooru.

- 2019.10.19

  - Add: manga.bilibili module.

- 2019.9.2

  - Add: bilibili module.
  - Add: 177pic module.

- 2019.8.19

  - Fix: can't change page in danbooru.
  - Fix: failed to analyze episodes in pixiv.
  - Fix: download error in qq.
  - Bump dependencies.

- 2019.7.1

  - Add: autocheck_interval option.
  - Add: manhuadui module.
  - Fix: chuixue module.
  - Fix: handle 404 errors in pixiv.

- 2019.5.20

  - Fix: ignore empty episodes in youhui.

- 2019.5.3

  - Fix: can't analyze profile URL with tags in pixiv.
  - Add: pixabay module.

- 2019.3.27

  - Fix: getcookie is not defined in eight.

- 2019.3.26

  - Add: manhuaren module.
  - Fix: failed to switch page in facebook.

- 2019.3.18

  - Fix: handle 403 error in artstation.

- 2019.3.13

  - Add: new domain gufengmh8.com for gufeng module.
  - Add: new domain tohomh123.com for toho module.
  - Add: new cookie igneous for exh module.
  - Fix: download images in cartoonmad.
  - Change: drop ck101 module.

- 2018.12.25

  - Add: new domain hheess.com for hhcomic module.
  - Add: new domain 36rm.cn for xznj module.
  - Add: toho module.
  - Fix: support new layout in dm5.

- 2018.11.18

  - Add: mission_conflict_action option.
  - Fix: failed to download images in qq.
  - Fix: failed to download images in youhui.

- 2018.10.24

  - Fix: new domain hhmmoo.com for hhxiee.
  - Fix: ignore comments when analyzing episodes.

- 2018.9.30

  - Change: prefix episode title with group name in seemh.

- 2018.9.24

  - Add: support user's tags in pixiv.

- 2018.9.23

  - Fix: failed to get episodes in pixiv.
  - Fix: ``on_success`` is executed when analysis failed.
  - Fix: make 503 error retryable.

- 2018.9.11

  - Fix: failed to get the next page in gelbooru.
  - Add: gufeng module.

- 2018.9.7

  - Fix: domains of eight module.
  - Fix: batch analyze error is not shown.
  - Fix: connection error would crash the entire application.

- 2018.8.20

  - Add: new option ``noepfolder``.

- 2018.8.11

  - Fix: title and image URLs in eight.

- 2018.8.10

  - Add: mh160 module.
  - Add: youhui module.
  - Add: grabber_cooldown module constant.
  - Add: domain hk.dm5.com in dm5.
  - Add: Travis CI.
  - Fix: skip 404 pages in weibo.
  - Fix: guess the file extension from the content first, then from the header.
  - Change: use a newer user agent.

- 2018.7.18

  - Add: new domain in xznj120.
  - Fix: get_episodes returns an empty list in deviantart.

- 2018.6.21

  - Add: make the table sortable.
  - Add: last_update attribute.
  - Fix: analyze error in senmanga.

- 2018.6.14

  - Revert: do not normalize whitespace.
  - Fix: escape more characters in safefilepath.

- 2018.6.8

  - Refactor: split comiccrawler.core into submodules.
  - Fix: new interface in pixiv.
  - Add: "Check update" command in the library context menu.
  - Add: rest_analyze constant in modules.
  - Drop: migrate command.

- 2018.5.24

  - Fix: failed to get images from xznj.
  - Refactor: split out select_episodes.

- 2018.5.13

  - Add: selectall option.
  - Fix: the column check button operates on the wrong range.
  - Fix: the column check button appearance.
  - Fix: download error in tumblr.

- 2018.5.5

  - Add: range reverse.
  - Add: xznj120 module.
  - Add: gelbooru module.
  - Fix: cannot analyze episode list in dm5.

- 2018.4.16

  - Add: support user page. (weibo)
  - Change: remove ``raise_429`` arg in ``grabhtml``. Add ``retry``.

- 2018.4.8

  - Add: allow users to log in. (tumblr)
  - Add: support videos. (tumblr)

- 2018.3.18

  - Fix: SMH is not defined error. (seemh) (#106)

- 2018.3.15

  - Change: use chapter id in the title of the episode. (qq) (#104)

- 2018.3.9

  - Fix: seemh started using https. (#103)
  - Add: qq module. (#102)

- 2018.3.7

  - Fix: get_episodes error in buka. Note that buka currently only shows images to its own reader app.
  - Fix: can't download images in seemh (manhuagui).
  - Add: SkipPageError for get_episodes.
  - Add: artstation module.
  - Update pylint to 1.8.2.

- 2018.1.30.2

  - Fix: update seemh.

- 2018.1.30.1

  - Fix: get Content-Length error.

- 2018.1.30

  - Fix: verify Content-Length.
  - Fix: dm5 update.

- 2017.12.15

  - Fix: incorrect title in pixiv.

- 2017.12.14

  - Fix: insecure_http option in tumblr doesn't work properly.

- 2017.12.9

  - Add: full_size, insecure_http options to tumblr.
  - Add: support .ugoira files in pixiv.

- 2017.12.4

  - Fix: download original image from tumblr. `#82 <https://github.com/eight04/ComicCrawler/issues/82>`_
  - Change: add gid/token to the title in exh. `#83 <https://github.com/eight04/ComicCrawler/issues/83>`_

- 2017.11.29

  - Fix: download error in cartoonmad. `#81 <https://github.com/eight04/ComicCrawler/issues/81>`_
  - Add: ability to get images from ajax (dmzj). Thanks to `动漫之家助手 <https://greasyfork.org/zh-TW/scripts/33087-%E5%8A%A8%E6%BC%AB%E4%B9%8B%E5%AE%B6%E5%8A%A9%E6%89%8B>`_. `#78 <https://github.com/eight04/ComicCrawler/issues/78>`_

- 2017.9.9

  - Fix: image match pattern in cartoonmad.

- 2017.9.5

  - Fix: URL is not unescaped correctly in sankaku.

- 2017.8.31

  - Fix: match nview.js in comicbus.
  - Fix: ikanman.com -> manhuagui.com.
  - Fix: require login in facebook.

- 2017.8.26

  - Fix: HTML changed in pixiv.

- 2017.8.20.1

  - Fix: can't download in comicbus.

- 2017.8.20

  - Fix: can't match http in deviantart.
  - Fix: can't get images in eight.
  - Add setting ``proxy``.

- 2017.8.16

  - Fix: deviantart login issue.

- 2017.8.13

  - Fix: sankaku login issue. `#66 <https://github.com/eight04/ComicCrawler/issues/66>`_

- 2017.6.14

  - Fix: comicbus analyze issue.

- 2017.5.29

  - Fix: 99 module. `#63 <https://github.com/eight04/ComicCrawler/issues/63>`_

- 2017.5.26

  - Fix: ikanman analyze issue.

- 2017.5.22

  - Fix: comicbus analyze issue. `#62 <https://github.com/eight04/ComicCrawler/issues/62>`_

- 2017.5.19

  - Add nijie module. `#58 <https://github.com/eight04/ComicCrawler/issues/58>`_
  - Add core.clean_tags.
  - Fix: check update button doesn't work after update checking failed. `#59 <https://github.com/eight04/ComicCrawler/issues/59>`_
  - Fix: analysis failed in comicbus. `#61 <https://github.com/eight04/ComicCrawler/issues/61>`_

- 2017.5.5

  - Fix: use raw ``<title>`` as title in search result (pixiv).
  - Add .wmv, .mov, and .psd to valid file extensions.

- 2017.4.26

  - Change: use table view in dm5. `#54 <https://github.com/eight04/ComicCrawler/issues/54>`_
  - Fix: runafterdownload is parsed incorrectly on Windows.

- 2017.4.24

  - Fix: starred expression inside list.

- 2017.4.23

  - Fix: compat with Python 3.4; starred expressions can only occur inside function calls.
  - Update node_vm2 to 0.3.0.

- 2017.4.22

  - Add .bmp to valid file extensions.
  - Fix: unable to check updates for multi-page sites.

- 2017.4.18

  - Add senmanga. `#49 <https://github.com/eight04/ComicCrawler/issues/49>`_
  - Add yoedge. `#47 <https://github.com/eight04/ComicCrawler/issues/47>`_
  - Fix: header parser issue. See https://www.ptt.cc/bbs/Python/M.1492438624.A.BBC.html
  - Fix: escape trailing dots in file path. `#46 <https://github.com/eight04/ComicCrawler/issues/46>`_
  - Add: double-click to launch explorer.
  - Add: batch analyze panel. `#45 <https://github.com/eight04/ComicCrawler/issues/45>`_

- 2017.4.6

  - Fix: run after download doesn't work properly if the path contains spaces.
  - Fix: VMError with ugoira in pixiv.
  - Fix: automatic update check doesn't record update time when failing.

- 2017.4.3

  - Fix: analyze error in dA.
  - Fix: subdomain changed in exh.
  - Fix: VM error in hh.
  - Add .url utils, .core.CycleList, .error.HTTPError.
  - Add aacomic.
  - Update pyxcute to 0.4.1.

- 2017.3.26

  - Fix: clean up the old files.
  - Update pythreadworker to 0.8.0.

- 2017.3.25

  - **Switch to node_vm2, drop pyexecjs.**
  - Add login check in exh.
  - Switch to pylint, drop pyflakes.
  - Drop module manhuadao.
  - Update pyxcute.
  - Refactor.

- 2017.3.9

  - Add ``--profile`` option. `#36 <https://github.com/eight04/ComicCrawler/issues/36>`__

- 2017.3.6

  - Update seemh. `#35 <https://github.com/eight04/ComicCrawler/issues/35>`__
  - Escape title in pixiv.
  - Strip non-printable characters in safefilepath.

- 2017.2.5

  - Add www.dmzj.com module. `#33 <https://github.com/eight04/ComicCrawler/issues/33>`__
  - Fix: sometimes the title doesn't include the chapter number in buka. `#33 <https://github.com/eight04/ComicCrawler/issues/33>`__

- 2017.1.10

  - Add: nowebp option in ikanman. `#31 <https://github.com/eight04/ComicCrawler/issues/31>`__
  - Add weibo module.
  - Add tuchong module.
  - Fix: update table safe_tk error.
  - Change: existence check will only check the original filename when the originalfilename option is true.

- 2017.1.6

  - Add: Table class in gui.
  - Add: titlenumberformat option in setting.ini. `#30 <https://github.com/eight04/ComicCrawler/pull/30>`__ by `@kuanyui <https://github.com/kuanyui>`__.
  - Change: use Table to display domain list.

- 2017.1.3.1

  - Fix: schema error (konachan).
  - Fix: the original filename should be extracted from the final URL instead of the request URL.
  - Add: now the module can specify the image filename with ``comiccrawler.core.Image``.

- 2017.1.3

  - Fix: original option doesn't work (exh).

- 2016.12.20

  - Change how config works. This will affect the sites requiring cookie information.
  - Comic Crawler can save cookies back to the config now!
  - Change how safefilepath works. Use an escape table.
  - Make io.move support folders.
  - Add io.exists.
  - Add migrate command.
  - Add originalfilename option.

- 2016.12.6

  - Fix: imghdr can't recognize .webp in Python 3.4.

- 2016.12.1

  - Fix: analyze error in wix.
  - Fix: ``mimetypes.guess_extension`` is not reliable with ``application/octet-stream``.
  - Add ``.webp`` to valid file types.

- 2016.11.27

  - Fix hhxiee module. Use new domain www.hhssee.com.

- 2016.11.25

  - Support cartoonmad.

- 2016.11.2

  - Fix: scaling issue on Windows XP.
  - Fix: login check in deviantart.
  - Use desktop3 to open folders. `#16 <https://github.com/eight04/ComicCrawler/issues/16>`__
  - Fix: GUI crashed if scaling < 1.

- 2016.10.8

  - Fix: math.inf is only available in Python 3.5.

- 2016.10.4

  - Fix: cannot download videos in flickr.
  - Fix: use cookie in grabimg.

- 2016.9.30

  - Add ``params`` option to grabber.
  - Add flickr module.

- 2016.9.27

  - Fix: image pattern in buka.
  - Fix: add hhcomic domain.

- 2016.9.11

  - Fix: failed to read files encoded with utf-8-sig.
  - Fix: ignore empty posts in tumblr.

- 2016.8.24.1

  - Use a better method to find the next page in tumblr.
  - Fix unicode referer bug in grabber.
  - Update match pattern to avoid redirect in tumblr. See https://github.com/kennethreitz/requests/issues/3078.
  - Fix get_title error in tumblr when the title is empty.

- 2016.8.24

  - Fix 429 error still raised by analyze_info.
  - Fix next page pattern in tumblr.

- 2016.8.22

  - Support hhxiee.
  - Fix get_episodes error in ck101.
  - Suppress 429 error when analyzing.
  - Change title format in yandere. Support pools.

- 2016.8.19

  - Fix title not found error in dm5.

- 2016.8.8

  - Use a safer method in write_file.
  - Add mission_lock for thread safety.
  - Use str as runafterdownload.
  - Use float as autosave.
  - Add debug log.
  - Rewrite analyzer. Episodes shouldn't have the same title.

- 2016.7.2

  - Fix context menu popup bug on Linux.
  - Fix: update checking stops after a finished mission.

- 2016.7.1

  - Use cross-platform startfile (incomplete).
  - Use ``clam`` theme for GUI under Linux.
  - Fix the error message of update checking failure.
  - Update checking won't block the GUI thread anymore.
  - Update ``pythreadworker`` to 0.6.
  - Fix import syntax in ``gui.get_scale``.

- 2016.6.30

  - Support high-DPI displays.
  - Don't show errors in the library thread. Only warn the user when update checking fails.

- 2016.6.25

  - API changed. Now the errorhandler will receive ``(error, crawler)`` instead of ``(error, episode)``.
  - Add errorhandler in seemh. It will try to use a different host if downloading failed.
  - Drop the mission to the bottom when update checking fails. The update checking process will stop after retrying 10 times.

- 2016.6.14.1

  - Pass pyflakes and fix a bunch of typos.

- 2016.6.14

  - Fix: always re-init in crawlpage loop!

- 2016.6.12

  - Use GBK instead of GB2312 in grabber.
  - Add the ability to get the title from a non-user page in nico.
  - Fix: unable to add mission in chuixue.
  - Fix: unable to download image in nico.
  - Fix: episode is lost after changing the name of the mission.
  - Fix: unable to recheck update after login error.

- 2016.6.10

  - Change how HTTP 429 errors are handled. Let the mission drop.
  - Add login check in sankaku.
  - Support .jpe (.jpg), .webm file types.

- 2016.6.4

  - Change how saved data works. Comic Crawler will write inactive mission data into the ``~/comiccrawler/pool/`` folder to save memory.
  - Fix regex in dA.
  - Fix sankaku's hang. Do not suppress 429 error in grabber.

- 2016.6.3

  - Minor changes to the save/load file functions to avoid unnecessary copies.
  - Comic Crawler will now execute the ``runafterdownload`` command from both the default section and the module section.

- 2016.5.30

  - Add module.imagehandler, which can edit the image file before saving it to disk.
  - Write frame info into the ugoira zip in pixiv.

- 2016.5.28

  - Change how config works. Now you can specify different settings in each section (e.g. use a different savepath for each module).
  - Save frame info about ugoira in pixiv.
  - Drop config.update in module.load_config.
  - Try to support additional info in get_images.

- 2016.5.24

  - Support buka.

- 2016.5.20

  - Find server by executing js in seemh.

- 2016.5.15

  - Fix dependency scheme.

- 2016.5.2

  - Use ``Content-Type`` header to guess file extension.
  - Fix a bug that the thread is not removed when receiving DOWNLOAD_INVALID.
  - Pause download when meeting 509 error in exh.
  - Add .mp4 to valid file types.

- 2016.5.1.1

  - Fix a bug that Comic Crawler doesn't retry when the first connection fails.
  - Add ``Episode.image`` so the module can supply the image list when constructing an Episode.

- 2016.5.1

  - Support wix.com.

- 2016.4.27

  - Domain changed in seemh.

- 2016.4.26.1

  - Fix charset encoding bug.

- 2016.4.26

  - Fix config bug with upper-case keys.
  - Check URLs of old episodes to avoid unnecessary analyzing.
  - Add option to get the original image in exh. It will cost 5x the viewing limit.

- 2016.4.22.3

  - Fix retry-after hang bug.
  - Fix config override bug. Use the ``ComicCrawler`` section to replace the ``DEFAULT`` section.
  - Support account login in sankaku.
  - Log HTTP errors before raising.
  - Show the next page URL while analyzing.

- 2016.4.22.2

  - Move to pythreadworker 0.5.0.

- 2016.4.22.1

  - Support loading modules in Python 3.4.

- 2016.4.22

  - Fix setup.py. Use find_packages.

- 2016.4.21

  - Big rewrite.
  - Move to requests.
  - Move to pythreadworker 0.4.0.
  - Add the ability to load modules from ``~/comiccrawler/mods``.
  - Drop migrate command.

- 2016.4.20

  - Update install_requires.

- 2016.4.13

  - Fix facebook bug.
  - Move to doit.

- 2016.4.8

  - Fix get_next_page error.
  - Fix key error in CLI.

- 2016.4.4

  - Use new API!
  - Analyzer will check the last episode to decide whether to analyze all pages.
  - Support multiple images in one page.
  - Change how getimgurl and getimgurls work.

- 2016.4.2

  - Add tumblr module.
  - Enhance: support sub-domains in ``mods.get_module``.

- 2016.3.27

  - Fix: handle deleted posts (konachan).
  - Fix: enhance dialog. Try to fix `#8 <https://github.com/eight04/ComicCrawler/issues/8>`__.

- 2016.2.29

  - Fix: use latest comicview.js (8comic).

- 2016.2.27

  - Fix: lastcheckupdate doesn't work.
  - Add: comicbus domain (8comic).

- 2016.2.15.1

  - Fix: cannot add mission.

- 2016.2.15

  - Add ``lastcheckupdate`` setting. Now the library will only automatically check updates once a day.
  - Refactor. Use MissionProxy; Mission doesn't inherit UserWorker anymore.

- 2016.1.26

  - Change: checking updates won't affect missions which are downloading.
  - Fix: pages won't skip if the savepath contains "~".
  - Add: a new URL pattern in facebook.

- 2016.1.17

  - Fix: a URL matching issue in Facebook.
  - Enhance: the downloader will loop through other episodes rather than stopping the current mission on a crawlpage error.

- 2016.1.15

  - Fix: Comic Crawler doesn't save the session during downloading.

- 2016.1.13

  - Handle HTTPError 429.

- 2016.1.12

  - Add facebook module.
  - Add ``circular`` option in module, which should be set to ``True`` if the downloader doesn't know which page of the album is the last one (e.g. Facebook).

- 2016.1.3

  - Fix download failure in seemh.

- 2015.12.9

  - Fix build-time dependencies.

- 2015.11.8

  - Fix next page issue in danbooru.

- 2015.10.25

  - Support nico seiga.
  - Try to fix MemoryError when writing files.

- 2015.10.9

  - Fix unicode range error in gui. See http://is.gd/F6JfjD

- 2015.10.8

  - Fix an error where episodes couldn't be skipped in the pixiv module.

- 2015.10.7

  - Fix an error where folders couldn't be created if the title contains "{}" characters.

- 2015.10.6

  - Support search page in pixiv module.

- 2015.9.29

  - Support http://www.chuixue.com.

- 2015.8.7

  - Fix sfacg bug.

- 2015.7.31

  - Fix: libraryautocheck option does not work.

- 2015.7.23

  - Add module dmzj_m. Some expunged manga may be accessed from the mobile page. ``http://manhua.dmzj.com/name => http://m.dmzj.com/info/name.html``

- 2015.7.22

  - Fix bug in module eight.

- 2015.7.17

  - Fix episode selection bug.

- 2015.7.16

  - Added:

    - Clean up unused missions after the session loads.
    - Handle ajax episode list in seemh.
    - Show an error if there is no update to download when clicking "download updates".
    - Show an error if failing to load the session.

  - Changed:

    - Always use "UPDATE" state if the mission is not complete after re-analyzing.
    - Create a backup if failing to load the session, instead of moving it to the "invalid-save" folder.
    - Check edit flag in MissionManager.save().

  - Fixed:

    - Cannot download "updated" missions.
    - Update checking stops on error.
    - Sankaku module is still using the old method to create Episodes.

- 2015.7.15

  - Add module seemh.

- 2015.7.14

  - Refactor: pull out download_manager, mission_manager.
  - Enhance content_write: use os.replace.
  - Fix mission_manager save loop interval.

- 2015.7.7

  - Fix danbooru bug.
  - Fix dmzj bug.

- 2015.7.6

  - Fix getepisodes regex in exh.

- 2015.7.5

  - Add error handler to dm5.
  - Add error handler to acgn.

- 2015.7.4

  - Support imgbox.

- 2015.6.22

  - Support tsundora.

- 2015.6.18

  - Fix URL quoting issue.

- 2015.6.14

  - Enhance ``safeprint``. Use ``echo`` command.
  - Enhance ``content_write``. Add ``append=False`` option.
  - Enhance ``Crawler``. Cache imgurl.
  - Enhance ``grabber``. Add ``cookie=None`` option. Change errorlog behavior.
  - Fix ``grabber`` unicode encoding issue.
  - Some module updates.

- 2015.6.13

  - Fix ``clean_finished``.
  - Fix ``console_download``.
  - Enhance ``get_by_state``.

Author
------

- eight eight04@gmail.com
"bugtrack_url": null,
"license": "MIT",
"summary": "An image crawler, including multiple modules and GUI.",
"version": "2024.11.14",
"project_urls": {
"Homepage": "https://github.com/eight04/ComicCrawler"
},
"split_keywords": [
"image",
" crawler"
],
"urls": [
{
"comment_text": "",
"digests": {
"blake2b_256": "a0e58812c65ec5074e885af21c3bb49e5c7a58477aced848fbf74b376fab0f54",
"md5": "bae959400007d617f9105d578d004aa2",
"sha256": "822da192c65cae79998297921ee4dbd9e262ed92779ea5a00a4c3450a264f7c7"
},
"downloads": -1,
"filename": "comiccrawler-2024.11.14-py3-none-any.whl",
"has_sig": false,
"md5_digest": "bae959400007d617f9105d578d004aa2",
"packagetype": "bdist_wheel",
"python_version": "py3",
"requires_python": ">=3.10",
"size": 118824,
"upload_time": "2024-11-13T20:34:20",
"upload_time_iso_8601": "2024-11-13T20:34:20.082878Z",
"url": "https://files.pythonhosted.org/packages/a0/e5/8812c65ec5074e885af21c3bb49e5c7a58477aced848fbf74b376fab0f54/comiccrawler-2024.11.14-py3-none-any.whl",
"yanked": false,
"yanked_reason": null
},
{
"comment_text": "",
"digests": {
"blake2b_256": "e1116b4c791fba4ebcf32761420c4417729ceb57ec8b1a24ff7540abd9410002",
"md5": "b180f5d9a3787764f449c042421652a4",
"sha256": "e2a966dbabb7d01c0004e14318ec432a5109be5a25b21050db7703febe4f885a"
},
"downloads": -1,
"filename": "comiccrawler-2024.11.14.tar.gz",
"has_sig": false,
"md5_digest": "b180f5d9a3787764f449c042421652a4",
"packagetype": "sdist",
"python_version": "source",
"requires_python": ">=3.10",
"size": 111522,
"upload_time": "2024-11-13T20:34:22",
"upload_time_iso_8601": "2024-11-13T20:34:22.467356Z",
"url": "https://files.pythonhosted.org/packages/e1/11/6b4c791fba4ebcf32761420c4417729ceb57ec8b1a24ff7540abd9410002/comiccrawler-2024.11.14.tar.gz",
"yanked": false,
"yanked_reason": null
}
],
"upload_time": "2024-11-13 20:34:22",
"github": true,
"gitlab": false,
"bitbucket": false,
"codeberg": false,
"github_user": "eight04",
"github_project": "ComicCrawler",
"travis_ci": false,
"coveralls": false,
"github_actions": true,
"requirements": [],
"lcname": "comiccrawler"
}