# ApiLinker
[PyPI](https://badge.fury.io/py/apilinker) · [Documentation](https://apilinker.readthedocs.io/en/latest/) · [CI](https://github.com/kkartas/ApiLinker/actions/workflows/ci.yml) · [License: MIT](https://opensource.org/licenses/MIT)
<div align="center">
<h3>A universal bridge to connect, map, and automate data transfer between any two REST APIs</h3>
</div>
---

**ApiLinker** is an open-source Python package that simplifies the integration of REST APIs by providing a universal bridging solution. Built for developers, data engineers, and researchers who need to connect different systems without writing repetitive boilerplate code.

---
## 🌟 Features
- 🔄 **Universal Connectivity** - Connect any two REST APIs with simple configuration
- 🗺️ **Powerful Mapping** - Transform data between APIs with field mapping and path expressions
- 📊 **Data Transformation** - Apply built-in or custom transformations to your data
- 🔒 **Advanced Authentication & Security** - Support for API Key, Bearer Token, Basic Auth, and multiple OAuth2 flows (including PKCE and Device Flow)
- 🔒 **Enterprise-Grade Security** - Secure credential storage, request/response encryption, and role-based access control
- 📝 **Flexible Configuration** - Use YAML/JSON or configure programmatically in Python
- 🕒 **Automated Scheduling** - Run syncs once, on intervals, or using cron expressions
- 📋 **Data Validation** - Validate data with schemas and custom rules
- 🔌 **Plugin Architecture** - Extend with custom connectors, transformers, and authentication methods
- 📈 **Pagination Handling** - Automatic handling of paginated API responses
- 🔍 **Robust Error Handling** - Circuit breakers, Dead Letter Queues (DLQ), and configurable recovery strategies
- 🧬 **Scientific Connectors** - Built-in connectors for research APIs (NCBI/PubMed, arXiv) with domain-specific functionality
- 📦 **Minimal Dependencies** - Lightweight core with minimal external requirements
## Security
ApiLinker provides enterprise-grade security features to protect your API credentials and data:
### Secure Credential Storage
```python
# Store credentials securely with encryption-at-rest
linker.store_credential("github_api", {
    "token": "your-api-token"
})
# Retrieve when needed
cred = linker.get_credential("github_api")
```
### Request/Response Encryption
```yaml
# In your config.yaml
security:
  encryption_level: "full"  # Options: none, headers_only, body_only, full
```
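The same setting can presumably also be passed programmatically. A minimal sketch, assuming `security_config` accepts an `encryption_level` key that mirrors the YAML above (the key name is an assumption, not a documented parameter):

```python
from apilinker import ApiLinker

linker = ApiLinker(
    security_config={
        "encryption_level": "headers_only"  # assumed key; options as listed in the YAML comment above
    }
)
```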
### Role-Based Access Control
```python
# Enable multi-user access with different permission levels
linker = ApiLinker(
    security_config={
        "enable_access_control": True,
        "users": [
            {"username": "admin1", "role": "admin"},
            {"username": "viewer1", "role": "viewer"}
        ]
    }
)
```
For more details, see the [Security Documentation](docs/security.md).
## 📋 Table of Contents
- [Installation](#installation)
- [Quick Start](#quick-start)
- [Configuration](#configuration)
- [Authentication Methods](#authentication-methods)
- [Field Mapping](#field-mapping)
- [Error Handling](#error-handling)
- [Data Transformations](#data-transformations)
- [Scheduling](#scheduling)
- [Command Line Interface](#command-line-interface)
- [Python API](#python-api)
- [Examples](#examples)
- [Extending ApiLinker](#extending-apilinker)
- [Contributing](#contributing)
- [Documentation](#documentation)
- [License](#license)
## 🚀 Installation
### Standard Installation
Install ApiLinker using pip (Python's package manager):
```bash
pip install apilinker
```
If you're using Windows, you might need to use:
```bash
py -m pip install apilinker
```
Make sure you have Python 3.8 or newer installed. To check your Python version:
```bash
python --version
# or
py --version
```
### Development Installation
To install from source (for contributing or customizing):
```bash
# Clone the repository
git clone https://github.com/kkartas/apilinker.git
cd apilinker
# Install in development mode with dev dependencies
pip install -e ".[dev]"
# Install with documentation tools
pip install -e ".[docs]"
```
### Verifying Installation
To verify ApiLinker is correctly installed, run:
```bash
python -c "import apilinker; print(apilinker.__version__)"
```
You should see the version number printed if installation was successful.
## 🎯 Beginner's Guide
New to API integration? Follow this step-by-step guide to get started with ApiLinker.
### Step 1: Install ApiLinker
```bash
pip install apilinker
```
### Step 2: Create Your First API Connection
Let's connect to a public API (the OpenWeatherMap weather API) and print some data:
```python
from apilinker import ApiLinker
# Create an API connection
linker = ApiLinker()
# Configure a simple source
linker.add_source(
    type="rest",
    base_url="https://api.openweathermap.org/data/2.5",
    endpoints={
        "get_weather": {
            "path": "/weather",
            "method": "GET",
            "params": {
                "q": "London",
                "appid": "YOUR_API_KEY"  # Get a free key at openweathermap.org
            }
        }
    }
)
# Fetch data from the API
weather_data = linker.fetch("get_weather")
# Print results
print(f"Temperature: {weather_data['main']['temp']} K")
print(f"Conditions: {weather_data['weather'][0]['description']}")
```
### Step 3: Save the Script and Run It
Save the above code as `weather.py` and run it:
```bash
python weather.py
```
### Step 4: Try a Data Transformation
Let's convert the temperature from Kelvin to Celsius:
```python
# Add this to your script
def kelvin_to_celsius(kelvin_value):
    return kelvin_value - 273.15

linker.mapper.register_transformer("kelvin_to_celsius", kelvin_to_celsius)

# Get the temperature in Celsius
temp_kelvin = weather_data['main']['temp']
temp_celsius = linker.mapper.transform(temp_kelvin, "kelvin_to_celsius")

print(f"Temperature: {temp_celsius:.1f}°C")
```
### Common Beginner Issues
- **ImportError**: Make sure ApiLinker is installed (`pip install apilinker`)
- **API Key errors**: Register for a free API key at the service you're using
- **Connection errors**: Check your internet connection and API endpoint URL
- **TypeError**: Make sure you're passing the correct data types to functions
## 🏁 Quick Start
### Using the CLI
Create a configuration file `config.yaml`:
```yaml
source:
  type: rest
  base_url: https://api.example.com/v1
  auth:
    type: bearer
    token: ${SOURCE_API_TOKEN}  # Reference environment variable
  endpoints:
    list_items:
      path: /items
      method: GET
      params:
        updated_since: "{{last_sync}}"  # Template variable
      pagination:
        data_path: data
        next_page_path: meta.next_page
        page_param: page

target:
  type: rest
  base_url: https://api.destination.com/v2
  auth:
    type: api_key
    header: X-API-Key
    key: ${TARGET_API_KEY}
  endpoints:
    create_item:
      path: /items
      method: POST

mapping:
  - source: list_items
    target: create_item
    fields:
      - source: id
        target: external_id
      - source: name
        target: title
      - source: description
        target: body.content
      - source: created_at
        target: metadata.created
        transform: iso_to_timestamp
      # Conditional field mapping
      - source: tags
        target: labels
        condition:
          field: tags
          operator: exists
        transform: lowercase

schedule:
  type: interval
  minutes: 60

logging:
  level: INFO
  file: apilinker.log
```
Run a sync with:
```bash
apilinker sync --config config.yaml
```
Run a dry run to see what would happen without making changes:
```bash
apilinker sync --config config.yaml --dry-run
```
Run a scheduled sync based on the configuration:
```bash
apilinker run --config config.yaml
```
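The sample configuration uses an interval schedule; cron expressions are also supported (see Features). A hedged sketch of what a cron-based `schedule` section might look like — the `cron` type and `expression` key are assumptions, not confirmed configuration keys:

```yaml
schedule:
  type: cron               # assumed value; the interval form is shown above
  expression: "0 2 * * *"  # assumed key; run daily at 02:00
```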
### Using as a Python Library
```python
from apilinker import ApiLinker

# Initialize with config file
linker = ApiLinker(config_path="config.yaml")

# Or configure programmatically
linker = ApiLinker()

# Step 1: Set up your source API connection
linker.add_source(
    type="rest",                         # API type (REST is most common)
    base_url="https://api.github.com",   # Base URL of the API
    auth={                               # Authentication details
        "type": "bearer",                # Using bearer token authentication
        "token": "${GITHUB_TOKEN}"       # Reference to an environment variable
    },
    endpoints={                          # Define API endpoints
        "list_issues": {                 # A name you choose for this endpoint
            "path": "/repos/owner/repo/issues",  # API path
            "method": "GET",             # HTTP method
            "params": {"state": "all"}   # Query parameters
        }
    }
)

# Step 2: Set up your target API connection
linker.add_target(
    type="rest",
    base_url="https://gitlab.com/api/v4",
    auth={
        "type": "bearer",
        "token": "${GITLAB_TOKEN}"
    },
    endpoints={
        "create_issue": {
            "path": "/projects/123/issues",
            "method": "POST"  # This endpoint will receive data
        }
    }
)

# Step 3: Define how data maps from source to target
linker.add_mapping(
    source="list_issues",    # Source endpoint name (from Step 1)
    target="create_issue",   # Target endpoint name (from Step 2)
    fields=[                 # Field mapping instructions
        {"source": "title", "target": "title"},      # Map source title → target title
        {"source": "body", "target": "description"}  # Map source body → target description
    ]
)

# Step 4: Execute the sync (one-time)
result = linker.sync()
print(f"Synced {result.count} records")

# Step 5 (Optional): Set up scheduled syncing
linker.add_schedule(interval_minutes=60)  # Run every hour
linker.start_scheduled_sync()
```
#### Step-by-Step Explanation:
1. **Import the library**: `from apilinker import ApiLinker`
2. **Create an instance**: `linker = ApiLinker()`
3. **Configure source API**: Define where to get data from
4. **Configure target API**: Define where to send data to
5. **Create mappings**: Define how fields translate between APIs
6. **Run the sync**: Either once or on a schedule
## 🔧 Configuration
ApiLinker uses a YAML configuration format with these main sections:
### Source and Target API Configuration
Both `source` and `target` sections follow the same format:
```yaml
source:                                  # or target:
  type: rest                             # API type
  base_url: https://api.example.com/v1   # Base URL
  auth:                                  # Authentication details
    # ...
  endpoints:                             # API endpoints
    # ...
  timeout: 30                            # Request timeout in seconds (optional)
  retry_count: 3                         # Number of retries (optional)
```
### Authentication Methods
ApiLinker supports multiple authentication methods:
```yaml
# API Key Authentication
auth:
  type: api_key
  key: your_api_key        # Or ${API_KEY_ENV_VAR}
  header: X-API-Key        # Header name

# Bearer Token Authentication
auth:
  type: bearer
  token: your_token        # Or ${TOKEN_ENV_VAR}

# Basic Authentication
auth:
  type: basic
  username: your_username  # Or ${USERNAME_ENV_VAR}
  password: your_password  # Or ${PASSWORD_ENV_VAR}

# OAuth2 Client Credentials
auth:
  type: oauth2_client_credentials
  client_id: your_client_id          # Or ${CLIENT_ID_ENV_VAR}
  client_secret: your_client_secret  # Or ${CLIENT_SECRET_ENV_VAR}
  token_url: https://auth.example.com/token
  scope: read write                  # Optional
```
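The same authentication settings can be supplied when configuring APIs programmatically. A minimal sketch using the OAuth2 client credentials flow, assuming the `auth` dictionary accepts the same keys as the YAML above:

```python
from apilinker import ApiLinker

linker = ApiLinker()
linker.add_source(
    type="rest",
    base_url="https://api.example.com/v1",
    auth={
        "type": "oauth2_client_credentials",
        "client_id": "${CLIENT_ID_ENV_VAR}",         # resolved from the environment
        "client_secret": "${CLIENT_SECRET_ENV_VAR}",
        "token_url": "https://auth.example.com/token",
        "scope": "read write"                        # optional
    },
    endpoints={
        "list_items": {"path": "/items", "method": "GET"}
    }
)
```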
### Field Mapping
Mappings define how data is transformed between source and target:
```yaml
mapping:
  - source: source_endpoint_name
    target: target_endpoint_name
    fields:
      # Simple field mapping
      - source: id
        target: external_id

      # Nested field mapping
      - source: user.profile.name
        target: user_name

      # With transformation
      - source: created_at
        target: timestamp
        transform: iso_to_timestamp

      # Multiple transformations
      - source: description
        target: summary
        transform:
          - strip
          - lowercase

      # Conditional mapping
      - source: status
        target: active_status
        condition:
          field: status
          operator: eq  # eq, ne, exists, not_exists, gt, lt
          value: active
```
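The same mapping can be defined programmatically with `add_mapping`. A minimal sketch, assuming the field dictionaries accept the same `transform` and `condition` keys as the YAML above and that `linker` is an existing `ApiLinker` instance:

```python
linker.add_mapping(
    source="source_endpoint_name",
    target="target_endpoint_name",
    fields=[
        {"source": "id", "target": "external_id"},
        {"source": "created_at", "target": "timestamp", "transform": "iso_to_timestamp"},
        {"source": "description", "target": "summary", "transform": ["strip", "lowercase"]},
        {
            "source": "status",
            "target": "active_status",
            "condition": {"field": "status", "operator": "eq", "value": "active"}
        }
    ]
)
```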
## 🔄 Data Transformations
ApiLinker provides built-in transformers for common operations:
| Transformer | Description |
|-------------|-------------|
| `iso_to_timestamp` | Convert ISO date to Unix timestamp |
| `timestamp_to_iso` | Convert Unix timestamp to ISO date |
| `lowercase` | Convert string to lowercase |
| `uppercase` | Convert string to uppercase |
| `strip` | Remove whitespace from start/end |
| `to_string` | Convert value to string |
| `to_int` | Convert value to integer |
| `to_float` | Convert value to float |
| `to_bool` | Convert value to boolean |
| `default_empty_string` | Return empty string if null |
| `default_zero` | Return 0 if null |
| `none_if_empty` | Return null if empty string |
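Built-in transformers are referenced by name in mappings (e.g. `transform: iso_to_timestamp`) or applied directly through the mapper, as shown in the Beginner's Guide. A small example, assuming `linker` is an existing `ApiLinker` instance:

```python
# Apply a built-in transformer directly
timestamp = linker.mapper.transform("2023-01-15T09:30:00Z", "iso_to_timestamp")
print(timestamp)  # Unix timestamp produced by the built-in transformer
```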
You can also create custom transformers:
```python
import re

def phone_formatter(value):
    """Format phone numbers to E.164 format."""
    if not value:
        return None
    digits = re.sub(r'\D', '', value)
    if len(digits) == 10:
        return f"+1{digits}"
    return f"+{digits}"

# Register with ApiLinker
linker.mapper.register_transformer("phone_formatter", phone_formatter)
```
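Once registered, a custom transformer is referenced by name in field mappings, just like the built-in ones (the endpoint names below are placeholders):

```python
linker.add_mapping(
    source="get_customers",
    target="create_contact",
    fields=[
        {"source": "phone", "target": "phoneNumber", "transform": "phone_formatter"}
    ]
)
```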
## 🧬 Comprehensive Research Connector Ecosystem
ApiLinker includes **8 specialized research connectors** covering scientific literature, chemical data, researcher profiles, code repositories, and more:
### 🔬 Scientific Literature & Data
- **NCBI (PubMed, GenBank)** - Biomedical literature and genetic sequences
- **arXiv** - Academic preprints across all sciences
- **CrossRef** - Citation data and DOI resolution
- **Semantic Scholar** - AI-powered academic search with citation analysis
### 🧪 Chemical & Biological Data
- **PubChem** - Chemical compounds, bioassays, and drug discovery data
- **ORCID** - Researcher profiles and academic credentials
### 💻 Code & Implementation Research
- **GitHub** - Code repositories, contribution analysis, and software research
- **NASA** - Earth science, climate data, and space research
### Quick Start with Multiple Connectors
```python
from apilinker import (
    NCBIConnector, ArXivConnector, CrossRefConnector,
    SemanticScholarConnector, PubChemConnector, ORCIDConnector,
    GitHubConnector, NASAConnector
)
# Initialize research connectors
ncbi = NCBIConnector(email="researcher@university.edu")
arxiv = ArXivConnector()
semantic = SemanticScholarConnector(api_key="optional")
pubchem = PubChemConnector()
github = GitHubConnector(token="optional")
# Cross-platform drug discovery research
topic = "BRCA1 inhibitors"
# Literature search
pubmed_papers = ncbi.search_pubmed(topic, max_results=50)
ai_papers = semantic.search_papers(f"machine learning {topic}", max_results=30)
# Chemical compound analysis
compounds = pubchem.search_compounds("BRCA1 inhibitor")
# Implementation code
github_repos = github.search_repositories(f"{topic} drug discovery", language="Python")
print(f"PubMed papers: {len(pubmed_papers.get('esearchresult', {}).get('idlist', []))}")
print(f"AI/ML papers: {len(ai_papers.get('data', []))}")
print(f"GitHub repositories: {len(github_repos.get('items', []))}")
```
### Interdisciplinary Research Workflows
```python
from apilinker import ApiLinker, NASAConnector, ArXivConnector, ORCIDConnector
# Climate science + AI research
linker = ApiLinker()
# Combine NASA climate data with arXiv ML papers
nasa = NASAConnector(api_key="nasa_key")
arxiv = ArXivConnector()
# Get earth observation data
climate_data = nasa.get_earth_imagery(lat=40.7128, lon=-74.0060)
# Find AI methods for climate analysis
ml_climate_papers = arxiv.search_papers("machine learning climate", max_results=100)
# Researcher collaboration analysis
orcid = ORCIDConnector()
climate_researchers = orcid.search_by_research_area(["climate science", "machine learning"])
print(f"Climate data sources: {len(climate_data)}")
print(f"ML climate papers: {len(ml_climate_papers)}")
print(f"Researchers found: {len(climate_researchers.get('result', []))}")
```
## 📊 Examples
### GitHub to GitLab Issue Migration
```python
from apilinker import ApiLinker
import os

# Tokens and repository details (adjust to your environment)
github_token = os.environ["GITHUB_TOKEN"]
gitlab_token = os.environ["GITLAB_TOKEN"]
owner, repo = "your-github-user", "your-repo"
project_id = 123

# Configure ApiLinker
linker = ApiLinker(
    source_config={
        "type": "rest",
        "base_url": "https://api.github.com",
        "auth": {"type": "bearer", "token": github_token},
        "endpoints": {
            "list_issues": {
                "path": f"/repos/{owner}/{repo}/issues",
                "method": "GET",
                "params": {"state": "all"},
                "headers": {"Accept": "application/vnd.github.v3+json"}
            }
        }
    },
    target_config={
        "type": "rest",
        "base_url": "https://gitlab.com/api/v4",
        "auth": {"type": "bearer", "token": gitlab_token},
        "endpoints": {
            "create_issue": {
                "path": f"/projects/{project_id}/issues",
                "method": "POST"
            }
        }
    }
)

# Custom transformer for labels
linker.mapper.register_transformer(
    "github_labels_to_gitlab",
    lambda labels: [label["name"] for label in labels] if labels else []
)

# Add mapping
linker.add_mapping(
    source="list_issues",
    target="create_issue",
    fields=[
        {"source": "title", "target": "title"},
        {"source": "body", "target": "description"},
        {"source": "labels", "target": "labels", "transform": "github_labels_to_gitlab"},
        {"source": "state", "target": "state"}
    ]
)

# Run the migration
result = linker.sync()
print(f"Migrated {result.count} issues from GitHub to GitLab")
```
### More Examples
See the `examples` directory for more use cases:
- Salesforce to HubSpot contact sync
- CSV file to REST API import
- Weather API data collection
- Custom plugin development
## 💻 Common Use Cases with Examples
### 1. Sync Data Between Two APIs
This example shows how to sync customer data from a CRM to a marketing platform:
```python
from apilinker import ApiLinker
import os
# Set environment variables securely before running
# os.environ["CRM_API_KEY"] = "your_crm_api_key"
# os.environ["MARKETING_API_KEY"] = "your_marketing_api_key"
# Initialize ApiLinker
linker = ApiLinker()
# Configure CRM source
linker.add_source(
    type="rest",
    base_url="https://api.crm-platform.com/v2",
    auth={
        "type": "api_key",
        "header": "X-API-Key",
        "key": "${CRM_API_KEY}"  # Uses environment variable
    },
    endpoints={
        "get_customers": {
            "path": "/customers",
            "method": "GET",
            "params": {"last_modified_after": "2023-01-01"}
        }
    }
)

# Configure marketing platform target
linker.add_target(
    type="rest",
    base_url="https://api.marketing-platform.com/v1",
    auth={
        "type": "api_key",
        "header": "Authorization",
        "key": "${MARKETING_API_KEY}"  # Uses environment variable
    },
    endpoints={
        "create_contact": {
            "path": "/contacts",
            "method": "POST"
        }
    }
)

# Define field mapping with transformations
linker.add_mapping(
    source="get_customers",
    target="create_contact",
    fields=[
        {"source": "id", "target": "external_id"},
        {"source": "first_name", "target": "firstName"},
        {"source": "last_name", "target": "lastName"},
        {"source": "email", "target": "emailAddress"},
        {"source": "phone", "target": "phoneNumber", "transform": "format_phone"},
        # Custom field creation with default value
        {"target": "source", "value": "CRM Import"}
    ]
)

# Register a custom transformer for phone formatting
def format_phone(phone):
    if not phone:
        return ""
    # Remove non-digits
    digits = ''.join(c for c in phone if c.isdigit())
    # Format as (XXX) XXX-XXXX for US numbers
    if len(digits) == 10:
        return f"({digits[0:3]}) {digits[3:6]}-{digits[6:10]}"
    return phone
linker.mapper.register_transformer("format_phone", format_phone)
# Execute the sync
result = linker.sync()
print(f"Synced {result.count} customers to marketing platform")
```
### 2. Scheduled Data Collection
This example collects weather data hourly and saves it to a CSV file:
```python
from apilinker import ApiLinker
import csv
import datetime
import time
import os
# Create a function to handle the collected data
def save_weather_data(data, city):
    timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")

    # Create CSV if it doesn't exist
    file_exists = os.path.isfile(f"{city}_weather.csv")
    with open(f"{city}_weather.csv", mode='a', newline='') as file:
        writer = csv.writer(file)

        # Write header if file is new
        if not file_exists:
            writer.writerow(["timestamp", "temperature", "humidity", "conditions"])

        # Write data
        writer.writerow([
            timestamp,
            data['main']['temp'] - 273.15,  # Convert K to C
            data['main']['humidity'],
            data['weather'][0]['description']
        ])
    print(f"Weather data saved for {city} at {timestamp}")

# Initialize ApiLinker
linker = ApiLinker()

# Configure weather API
linker.add_source(
    type="rest",
    base_url="https://api.openweathermap.org/data/2.5",
    endpoints={
        "get_london_weather": {
            "path": "/weather",
            "method": "GET",
            "params": {
                "q": "London,uk",
                "appid": "YOUR_API_KEY"  # Replace with your API key
            }
        },
        "get_nyc_weather": {
            "path": "/weather",
            "method": "GET",
            "params": {
                "q": "New York,us",
                "appid": "YOUR_API_KEY"  # Replace with your API key
            }
        }
    }
)

# Create a custom handler for the weather data
def collect_weather():
    london_data = linker.fetch("get_london_weather")
    nyc_data = linker.fetch("get_nyc_weather")

    save_weather_data(london_data, "London")
    save_weather_data(nyc_data, "NYC")

# Run once to test
collect_weather()

# Then schedule to run hourly
linker.add_schedule(interval_minutes=60, callback=collect_weather)
linker.start_scheduled_sync()

# Keep the script running
try:
    print("Weather data collection started. Press Ctrl+C to stop.")
    while True:
        time.sleep(60)
except KeyboardInterrupt:
    print("Weather data collection stopped.")
```
## 🔌 Extending ApiLinker
### Creating Custom Plugins
ApiLinker can be extended through plugins. Here's how to create a custom transformer plugin:
```python
from apilinker.core.plugins import TransformerPlugin

class SentimentAnalysisTransformer(TransformerPlugin):
    """A transformer plugin that analyzes text sentiment."""

    plugin_name = "sentiment_analysis"  # This name is used to reference the plugin
    version = "1.0.0"                   # Optional version information
    author = "Your Name"                # Optional author information

    def transform(self, value, **kwargs):
        # Simple sentiment analysis (example)
        if not value or not isinstance(value, str):
            return {"sentiment": "neutral", "score": 0.0}

        # Add your sentiment analysis logic here
        positive_words = ["good", "great", "excellent"]
        negative_words = ["bad", "poor", "terrible"]

        # Count positive and negative words
        text = value.lower()
        positive_count = sum(1 for word in positive_words if word in text)
        negative_count = sum(1 for word in negative_words if word in text)

        # Calculate sentiment score
        total = positive_count + negative_count
        score = 0.0 if total == 0 else (positive_count - negative_count) / total

        return {
            "sentiment": "positive" if score > 0 else "negative" if score < 0 else "neutral",
            "score": score
        }
```
### Using Your Custom Plugin
After creating your plugin, register it before use:
```python
from apilinker import ApiLinker
# Create your custom plugin instance
from my_plugins import SentimentAnalysisTransformer
# Initialize ApiLinker
linker = ApiLinker()
# Register the plugin
linker.plugin_manager.register_plugin(SentimentAnalysisTransformer)
# Configure APIs and mappings...
linker.add_mapping(
    source="get_reviews",
    target="save_analysis",
    fields=[
        {"source": "user_id", "target": "user_id"},
        # Use your custom plugin to transform the review text
        {"source": "review_text", "target": "sentiment_data", "transform": "sentiment_analysis"}
    ]
)
```
## ❓ Troubleshooting Guide
### Installation Issues
1. **Package not found error**
   ```
   ERROR: Could not find a version that satisfies the requirement apilinker
   ```
   - Make sure you're using Python 3.8 or newer
   - Check your internet connection
   - Try upgrading pip: `pip install --upgrade pip`

2. **Import errors**
   ```python
   ImportError: No module named 'apilinker'
   ```
   - Verify installation: `pip list | grep apilinker`
   - Check if you're using the correct Python environment
   - Try reinstalling: `pip install --force-reinstall apilinker`
### Connection Issues
1. **API connection failures**
   ```
   ConnectionError: Failed to establish connection to api.example.com
   ```
   - Check your internet connection
   - Verify the API base URL is correct
   - Make sure the API service is online
   - Check if your IP is allowed by the API provider

2. **Authentication errors**
   ```
   AuthenticationError: Invalid credentials
   ```
   - Verify your API key or token is correct
   - Check if the token has expired
   - Ensure you're using the correct authentication method
### Mapping Issues
1. **Field not found errors**
   ```
   KeyError: 'Field not found in source data: user_profile'
   ```
   - Check the actual response data structure
   - Make sure you're referencing the correct field names
   - For nested fields, use dot notation (e.g., `user.profile.name`)

2. **Transformation errors**
   ```
   ValueError: Invalid data for transformer 'iso_to_timestamp'
   ```
   - Check if the data matches the expected format
   - Make sure the transformer is properly registered
   - Add validation to your custom transformers (a minimal sketch follows below)
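For that last point, a minimal sketch of a defensive transformer that validates its input before converting (the function name and fallback behaviour are illustrative, not part of ApiLinker):

```python
from datetime import datetime

def safe_iso_to_timestamp(value):
    """Validate the input before converting an ISO date to a Unix timestamp."""
    if not value or not isinstance(value, str):
        return None  # or a default that suits your pipeline
    try:
        return int(datetime.fromisoformat(value.replace("Z", "+00:00")).timestamp())
    except ValueError:
        return None

linker.mapper.register_transformer("safe_iso_to_timestamp", safe_iso_to_timestamp)
```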
### Common Code Examples
#### Handling API Rate Limits
```python
from apilinker import ApiLinker
import time
linker = ApiLinker()
# Configure with retry settings
linker.add_source(
    type="rest",
    base_url="https://api.example.com",
    retry={
        "max_attempts": 5,
        "delay_seconds": 2,
        "backoff_factor": 2,  # Exponential backoff
        "status_codes": [429, 500, 502, 503, 504]  # Retry on these status codes
    },
    endpoints={
        "get_data": {"path": "/data", "method": "GET"}
    }
)

# Example of manual handling with wait periods
try:
    data = linker.fetch("get_data")
    print("Success!")
except Exception as e:
    if "rate limit" in str(e).lower():
        print("Rate limited, waiting and trying again...")
        time.sleep(60)  # Wait 1 minute
        data = linker.fetch("get_data")  # Try again
    else:
        raise e
```
## 📚 Documentation
Documentation is available in the `/docs` directory and will be hosted online soon.
### Core Documentation
1. [Getting Started](docs/getting_started.md) - A beginner-friendly introduction
2. [Installation Guide](docs/installation.md) - Detailed installation instructions
3. [Configuration Guide](docs/configuration.md) - Configuration options and formats
4. [API Reference](docs/api_reference/index.md) - Detailed API reference
### Quick Resources
- [Quick Reference](docs/quick_reference.md) - Essential commands and patterns
- [FAQ](docs/faq.md) - Frequently asked questions
- [Troubleshooting Guide](docs/troubleshooting.md) - Solutions to common problems
### Guides and Examples
- [Cookbook](docs/cookbook.md) - Ready-to-use recipes for common tasks
- [Examples](docs/examples/index.md) - Example use cases and code
- [Extending with Plugins](docs/plugins/index.md) - Creating and using plugins
- [Security Considerations](docs/security.md) - Security best practices
### Technical Documentation
- [Architecture](docs/architecture.md) - System architecture and data flow diagrams
- [Comparison](docs/comparison.md) - How ApiLinker compares to other integration tools
### Step-by-Step Tutorials
- [API-to-API Sync Tutorial](docs/tutorials/api_to_api_sync.md) - Learn to sync data between APIs
- [Custom Transformers Tutorial](docs/tutorials/custom_transformers.md) - Create data transformation functions
- [More tutorials](docs/tutorials/index.md) - Browse all available tutorials
### Comprehensive API Reference
For developers who want to extend ApiLinker or understand its internals, we provide comprehensive API reference documentation that can be generated using Sphinx:
```bash
# Install Sphinx and required packages
pip install sphinx sphinx-rtd-theme myst-parser
# Generate HTML documentation
cd docs/sphinx_setup
sphinx-build -b html . _build/html
```
The generated documentation will be available in `docs/sphinx_setup/_build/html/index.html`.
### Community Support
- [GitHub Issues](https://github.com/kkartas/apilinker/issues) - Report bugs or request features
- [Stack Overflow](https://stackoverflow.com/questions/tagged/apilinker) - Ask questions using the `apilinker` tag
## 🔒 Security Considerations
When working with APIs that require authentication, follow these security best practices:
1. **Never hardcode credentials** in your code or configuration files. Always use environment variables or secure credential stores.
2. **API Key Storage**: Use environment variables referenced in configuration with the `${ENV_VAR}` syntax.
   ```yaml
   auth:
     type: api_key
     header: X-API-Key
     key: ${MY_API_KEY}
   ```
3. **OAuth Security**: For OAuth flows, ensure credentials are stored securely and token refresh is handled properly.
4. **Credential Validation**: ApiLinker performs validation checks on authentication configurations to prevent common security issues.
5. **HTTPS Only**: ApiLinker enforces HTTPS for production API endpoints by default. Override only in development environments with explicit configuration.
6. **Rate Limiting**: Built-in rate limiting prevents accidental API abuse that could lead to account suspension.
7. **Audit Logging**: Enable detailed logging for security-relevant events with:
   ```yaml
   logging:
     level: INFO
     security_audit: true
   ```
## 🤝 Contributing
Contributions are welcome! Please see our [Contributing Guide](CONTRIBUTING.md) for details.
1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Install development dependencies (`pip install -e ".[dev]"`)
4. Make your changes
5. Run tests (`pytest`)
6. Commit your changes (`git commit -m 'Add amazing feature'`)
7. Push to the branch (`git push origin feature/amazing-feature`)
8. Open a Pull Request
## 📄 Citation
If you use ApiLinker in your research, please cite:
```bibtex
@software{apilinker2025,
  author = {Kartas, Kyriakos},
  title = {ApiLinker: A Universal Bridge for REST API Integrations},
  url = {https://github.com/kkartas/apilinker},
  version = {0.3.0},
  year = {2025},
  doi = {10.21105/joss.12345}
}
```
## 📃 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.