# QuickDownload
A high-performance parallel file downloader that supports both HTTP/HTTPS downloads and BitTorrent downloads, with full support for resuming interrupted downloads.
## Features
- **Parallel Downloads**: Split HTTP files into multiple chunks and download them simultaneously
- **BitTorrent Support**: Download from magnet links, .torrent files, and .torrent URLs
- **Resume Support**: Automatically resume interrupted downloads from where they left off
- **Progress Tracking**: Persistent progress tracking survives network failures and interruptions
- **Configurable Parallelism**: Control the number of parallel download threads (HTTP only)
- **Custom Output**: Specify custom output file names and locations
- **Command Line Interface**: Simple and intuitive CLI for both HTTP and torrent downloads
- **High Performance**: Typically 2-5x faster than single-threaded downloads for large files
- **Automatic Retry**: Failed chunks are automatically retried with exponential backoff
- **Seeding Support**: Optional seeding for torrent downloads to contribute back to the network
## Installation
```bash
# Clone the repository
git clone https://github.com/yourusername/QuickDownload.git
cd QuickDownload
# Install the package (includes libtorrent for torrent support)
pip install -e .
```
Note: For torrent support, libtorrent is required. If installation fails, try:
```bash
# On macOS with Homebrew
brew install libtorrent-rasterbar
# On Ubuntu/Debian
sudo apt-get install python3-libtorrent
# Then install QuickDownload
pip install -e .
```
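After installation, running the CLI's built-in help is a quick sanity check that the `quickdownload` entry point is on your `PATH`:
```bash
quickdownload --help
```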
## Usage
### HTTP/HTTPS Downloads
Basic usage:
```bash
quickdownload https://example.com/largefile.zip
```
With custom output filename:
```bash
quickdownload -o myfile.zip https://example.com/largefile.zip
```
With custom parallelism:
```bash
quickdownload -p 8 https://example.com/largefile.zip
```
### Torrent Downloads
Download from magnet link:
```bash
quickdownload "magnet:?xt=urn:btih:1234567890abcdef1234567890abcdef12345678"
```
Download from .torrent file:
```bash
quickdownload ~/Downloads/ubuntu.torrent
```
Download from .torrent URL:
```bash
quickdownload https://example.com/ubuntu.torrent
```
Download to specific directory:
```bash
quickdownload -o ~/Downloads "magnet:?xt=urn:btih:..."
```
Download with seeding:
```bash
quickdownload --seed-time 60 "magnet:?xt=urn:btih:..."
```
### Command Line Options
| Option | Short | Description | Default | Applies To |
|--------|-------|-------------|---------|------------|
| `--output` | `-o` | Custom output filename/directory | Current directory | Both |
| `--parallel` | `-p` | Number of parallel download threads | 4 | HTTP only |
| `--seed-time` | - | Time to seed after torrent download (minutes) | 0 | Torrent only |
| `--help` | `-h` | Show help message | - | Both |
### HTTP Download Examples
Download a large file with 16 parallel threads:
```bash
quickdownload -p 16 https://releases.ubuntu.com/22.04/ubuntu-22.04.3-desktop-amd64.iso
```
Download to a specific directory:
```bash
quickdownload -o ~/Downloads/ubuntu.iso https://releases.ubuntu.com/22.04/ubuntu-22.04.3-desktop-amd64.iso
```
### Torrent Download Examples
Download Linux distribution:
```bash
# Using magnet link
quickdownload "magnet:?xt=urn:btih:a26f24611b7db8c524c6e96b7e25000b9e2ad705"
# Using .torrent file
quickdownload ~/Downloads/ubuntu-22.04.3-desktop-amd64.iso.torrent
# Using .torrent URL
quickdownload https://releases.ubuntu.com/22.04/ubuntu-22.04.3-desktop-amd64.iso.torrent
```
Download to specific directory with seeding:
```bash
quickdownload -o ~/Downloads --seed-time 30 "magnet:?xt=urn:btih:..."
```
### Resume Examples
If a download is interrupted, simply run the same command to resume:
```bash
# Start download
quickdownload -p 8 -o large_file.zip https://example.com/large_file.zip
# ... download gets interrupted (Ctrl+C, network failure, etc.) ...
# Resume download - same command!
quickdownload -p 8 -o large_file.zip https://example.com/large_file.zip
# Output: "📁 Found existing download progress - checking chunks..."
# Output: "🔄 Resuming previous download..."
```
### Real-World Usage Scenarios
**Scenario 1: Large Software Download**
```bash
# Download a large ISO file with 8 parallel connections
quickdownload -p 8 -o ubuntu-22.04.iso \
  https://releases.ubuntu.com/22.04/ubuntu-22.04.3-desktop-amd64.iso
# If interrupted, resume with the same command
# Only missing chunks will be downloaded
```
**Scenario 2: Unreliable Network**
```bash
# Start download on unreliable connection
quickdownload -p 4 -o dataset.zip https://data.example.com/large-dataset.zip
# Network fails halfway through - no problem!
# Later, run the same command to continue:
quickdownload -p 4 -o dataset.zip https://data.example.com/large-dataset.zip
# Progress is automatically resumed from where it left off
```
**Scenario 3: Batch Processing**
```bash
#!/bin/bash
# Download multiple files with resume support
files=(
  "file1.zip"
  "file2.tar.gz"
  "file3.iso"
)

for file in "${files[@]}"; do
  echo "Downloading $file..."
  quickdownload -p 6 -o "$file" "https://downloads.example.com/$file"
  # Each file will resume if previously interrupted
done
```
## How It Works
1. **File Analysis**: QuickDownload first analyzes the target file to determine its size and whether the server supports range requests
2. **Resume Detection**: Checks for existing partial downloads and verifies chunk integrity
3. **Chunk Creation**: The file is divided into equal chunks based on the specified parallelism level
4. **Parallel Download**: Each chunk is downloaded simultaneously using separate threads (see the `curl` sketch after this list)
5. **Progress Tracking**: Download progress is continuously saved to enable resuming
6. **Assembly**: Downloaded chunks are assembled into the final file
7. **Cleanup**: Temporary files and progress data are removed upon successful completion
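To make steps 1, 3, 4, and 6 concrete, here is a minimal bash sketch of the same idea using plain `curl` against a placeholder URL. It is an illustration only: QuickDownload does this internally with Python threads and adds the retries, verification, and progress tracking described above.
```bash
url="https://example.com/largefile.zip"
out="largefile.zip"
parallel=4

# Step 1: read the file size from the response headers (HTTP/2 lowercases header names)
size=$(curl -sI "$url" | tr -d '\r' | awk 'tolower($1)=="content-length:" {print $2}')

# Step 3: divide into equal chunks, rounding up
chunk=$(( (size + parallel - 1) / parallel ))

# Step 4: fetch each byte range concurrently
for i in $(seq 0 $((parallel - 1))); do
  start=$(( i * chunk ))
  end=$(( start + chunk - 1 ))
  [ "$end" -ge "$size" ] && end=$(( size - 1 ))   # last chunk may be shorter
  curl -s -r "$start-$end" -o "$out.part$i" "$url" &
done
wait

# Step 6: assemble the parts in order, then clean up
for i in $(seq 0 $((parallel - 1))); do cat "$out.part$i"; done > "$out"
rm "$out".part*
```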
## Performance Benefits
- **Speed**: Downloads are typically 2-5x faster than single-threaded downloads
- **Reliability**: Failed chunks are automatically retried without affecting other chunks
- **Efficiency**: Network bandwidth is utilized more effectively
- **Resumability**: Large downloads can be resumed exactly where they left off
- **Fault Tolerance**: Individual chunk failures don't stop the entire download
- **Smart Recovery**: Corrupted chunks are automatically detected and re-downloaded
- **Zero Data Loss**: Progress tracking ensures no completed work is lost on interruption
### Performance Examples
```bash
# Traditional wget (single-threaded)
wget https://example.com/1GB-file.zip
# Time: ~10 minutes on 100Mbps connection
# QuickDownload (8 parallel threads)
quickdownload -p 8 https://example.com/1GB-file.zip
# Time: ~3-4 minutes on same connection (2.5x faster)
# If interrupted at 60% completion:
# wget: starts over from 0%
# QuickDownload: resumes from 60% - saves 6+ minutes!
```
### Resume Implementation Details
QuickDownload implements resume functionality through three mechanisms (a verification sketch follows the list):
1. **Progress Files**: Creates `.progress` files containing:
- Original URL and parameters
- File size and chunk information
- List of completed chunk IDs
- Timestamp for validation
2. **Chunk Verification**: On resume, verifies each "completed" chunk:
- Checks file size matches expected chunk size
- Re-downloads corrupted or missing chunks
- Only downloads what's actually needed
3. **State Management**:
- Temporary `.part{N}` files for each chunk
- Progress saved after each completed chunk
- Automatic cleanup on successful completion
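As a hypothetical illustration (the on-disk `.progress` format is internal and not documented here), the following bash sketch mirrors the verification in step 2: it re-checks the size of each chunk that the progress file records as complete.
```bash
out="large_file.zip"

# Hypothetical progress-file contents (field names are illustrative only):
#   url=https://example.com/large_file.zip
#   size=104857600  parallel=8  chunk_size=13107200
#   completed=0,1,2,5

expected=13107200   # size of a full chunk (a real check would allow a shorter final chunk)
for i in 0 1 2 5; do
  actual=$(wc -c < "$out.part$i" 2>/dev/null || echo 0)
  if [ "$actual" -ne "$expected" ]; then
    echo "chunk $i missing or truncated - will be re-downloaded"
  fi
done
```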
### Current Implementation Status
✅ **Fully Implemented:**
- Parallel downloading with configurable threads
- Complete resume functionality after interruptions
- Progress tracking that survives crashes
- Chunk integrity verification
- Automatic retry with exponential backoff
- Smart fallback for servers without range support
- Comprehensive error handling
- Progress visualization with real-time updates
✅ **Tested Scenarios:**
- Network interruptions (Ctrl+C)
- Computer crashes/restarts
- Corrupted chunk detection
- Invalid URLs and server errors
- Single-threaded fallback
- Multiple resume attempts
⚠️ **Current Limitations:**
- Resume only works when using same parameters (URL, output file, parallel count)
- Servers without HTTP range support fall back to single-threaded (no resume)
- Progress files are stored in the same directory as output file
### Supported Protocols
- HTTP/HTTPS
- Servers that support range requests (HTTP 206 Partial Content; a quick `curl` probe is shown below)
- Automatic fallback to single-threaded download for servers that don't support ranges
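If you want to check a server yourself, a ranged `curl` request is a quick probe (placeholder URL below): a `206` status means the server honored the byte range.
```bash
# Ask for the first KiB only; 206 = ranges honored, 200 = full body (no range support)
curl -s -o /dev/null -w '%{http_code}\n' -r 0-1023 https://example.com/largefile.zip

# Many servers also advertise support explicitly in their headers
curl -sI https://example.com/largefile.zip | grep -i '^accept-ranges'
```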
### Requirements
- Python 3.7+
- Network connection
- Sufficient disk space for the target file
## Error Handling
QuickDownload includes robust error handling for:
- Network timeouts and connection errors
- Disk space issues
- Invalid URLs
- Server errors (404, 500, etc.)
- Chunk download failures with automatic retry (backoff sketched below)
- Corrupted chunk detection and recovery
- Interrupted downloads with full resume capability
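For intuition on the retry behavior, here is a minimal bash sketch of exponential backoff around a single chunk fetch; the attempt count and delays are illustrative, not QuickDownload's actual tuning.
```bash
url="https://example.com/largefile.zip"
delay=1
for attempt in 1 2 3 4 5; do
  # -f: treat HTTP errors (404, 500, ...) as failures; -r: fetch one chunk's range
  if curl -fsS -r 0-1048575 -o chunk0.part "$url"; then
    echo "chunk downloaded on attempt $attempt"
    break
  fi
  echo "attempt $attempt failed; retrying in ${delay}s"
  sleep "$delay"
  delay=$(( delay * 2 ))   # 1s, 2s, 4s, 8s between attempts
done
```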
## FAQ
**Q: How many parallel downloads should I use?**
A: The optimal number depends on your network connection and the server. Start with 4-8 parallel downloads. Too many can actually slow things down due to overhead.
**Q: Does this work with all websites?**
A: QuickDownload works with any server that supports HTTP range requests. If range requests aren't supported, it falls back to a single-threaded download.
**Q: Can I resume a download that was interrupted?**
A: Yes! QuickDownload automatically saves progress and can resume downloads exactly where they left off. Simply run the same command again, and it will detect and resume the previous download.
**Q: What happens if my computer crashes during a download?**
A: No problem! QuickDownload saves progress continuously. When you restart the download, it will verify existing chunks and only download what's missing or corrupted.
**Q: How does chunk verification work?**
A: Each chunk is verified for size and integrity before being considered complete. Corrupted chunks are automatically re-downloaded.
**Q: What files does QuickDownload create during download?**
A: During download, you'll see:
- `filename.part0`, `filename.part1`, etc. (temporary chunk files)
- `filename.progress` (progress tracking file)
These are automatically cleaned up on successful completion.
**Q: Can I change the number of parallel downloads for a resumed download?**
A: No, you must use the same parameters (URL, output filename, parallel count) to resume a download. If you change parameters, it will start fresh.
**Q: Does resume work with all servers?**
A: Resume only works with servers that support HTTP range requests. If a server doesn't support ranges, QuickDownload will fall back to single-threaded download (which can't be resumed).
**Q: How much faster is parallel downloading?**
A: Typically 2-5x faster than single-threaded downloads, depending on your network connection and the server's capabilities. The optimal number of threads varies by situation.
**Q: Is this safe to use?**
A: Yes, QuickDownload only downloads files and doesn't execute any code. However, always be cautious about what files you download from the internet.
### Troubleshooting
**Download stuck at "Analyzing file..."**: Server may be slow to respond. Check your internet connection.
**"Range requests supported: False"**: Server doesn't support parallel downloads. Will use single-threaded mode (still works, just slower).
**Resume not working**: Ensure you're using the exact same command (URL, output file, parallel count) as the original download.
**Chunks failing repeatedly**: May indicate server issues or network instability. Try reducing parallel count with `-p 2`.
---
Made with ❤️ for faster, more reliable downloads