# Legnext Python SDK
Official Python client library for the Legnext Midjourney API, providing professional image and video generation.
## Installation
```bash
pip install legnext
```
## Quick Start
```python
from legnext import Client

# Initialize client
client = Client(api_key="your-api-key")

# Generate an image
response = client.midjourney.diffusion(
    text="a beautiful sunset over mountains"
)

# Wait for completion
result = client.tasks.wait_for_completion(response.job_id)
print(result.output.image_urls)
```
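The returned `image_urls` are ordinary HTTPS links. As a minimal sketch (assuming the URLs are publicly fetchable), you can download them with `httpx`, which is already a dependency of this SDK:

```python
import httpx

# Save each generated image to disk; the filenames here are arbitrary.
for i, url in enumerate(result.output.image_urls):
    resp = httpx.get(url, follow_redirects=True)
    resp.raise_for_status()
    with open(f"image_{i}.png", "wb") as f:
        f.write(resp.content)
```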
## API Methods
### Image Generation
#### `diffusion(text, callback=None)`
Create a new text-to-image generation task.
```python
response = client.midjourney.diffusion(
    text="a serene mountain landscape at sunset",
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `text` (str): Text prompt for image generation (1-8192 characters)
- `callback` (str, optional): Webhook URL for completion notification
**Returns:** `TaskResponse` with job_id and status
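If you pass a `callback` URL, Legnext will notify that endpoint when the task finishes. The payload schema is not documented here, so the sketch below (using FastAPI, which is not a dependency of this SDK) simply logs the JSON body and acknowledges receipt:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/webhook")
async def legnext_webhook(request: Request):
    # The payload fields (job id, status, output URLs, ...) are assumptions;
    # inspect the body you actually receive and consult the API docs.
    payload = await request.json()
    print("Legnext callback:", payload)
    return {"ok": True}
```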
---
#### `variation(job_id, image_no, type, remix_prompt=None, callback=None)`
Create variations of a generated image.
```python
response = client.midjourney.variation(
    job_id="original-job-id",
    image_no=0,  # Image index (0-3)
    type=1,  # 0=Subtle, 1=Strong
    remix_prompt="add more clouds",  # Optional
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original image generation task
- `image_no` (int): Image number to vary (0-3)
- `type` (int): Variation intensity (0=Subtle, 1=Strong)
- `remix_prompt` (str, optional): Additional prompt for guided variation
- `callback` (str, optional): Webhook URL
---
#### `upscale(job_id, image_no, type, callback=None)`
Upscale a generated image.
```python
response = client.midjourney.upscale(
    job_id="original-job-id",
    image_no=0,  # Image index (0-3)
    type=1,  # 0=Subtle, 1=Creative
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original image generation task
- `image_no` (int): Image number to upscale (0-3)
- `type` (int): Upscaling type (0=Subtle, 1=Creative)
- `callback` (str, optional): Webhook URL
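
A common pattern is to chain these calls: generate a grid with `diffusion`, wait for it, then upscale one of the four results. A minimal sketch using only the methods documented above:

```python
# Generate a 4-image grid, then upscale image 0 with the "Creative" setting.
grid = client.midjourney.diffusion(text="a lighthouse in a storm")
client.tasks.wait_for_completion(grid.job_id)

upscaled = client.midjourney.upscale(job_id=grid.job_id, image_no=0, type=1)
final = client.tasks.wait_for_completion(upscaled.job_id)
print(final.output.image_urls)
```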
---
#### `reroll(job_id, callback=None)`
Re-generate with the same prompt to get new variations.
```python
response = client.midjourney.reroll(
    job_id="original-job-id",
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the task to reroll
- `callback` (str, optional): Webhook URL
---
### Image Composition
#### `blend(img_urls, aspect_ratio, callback=None)`
Blend 2-5 images together.
```python
response = client.midjourney.blend(
    img_urls=[
        "https://example.com/image1.png",
        "https://example.com/image2.png"
    ],
    aspect_ratio="1:1",  # Required: "2:3", "1:1", or "3:2"
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `img_urls` (list[str]): 2-5 image URLs to blend
- `aspect_ratio` (str): Aspect ratio - "2:3", "1:1", or "3:2"
- `callback` (str, optional): Webhook URL
---
#### `describe(img_url, callback=None)`
Generate text descriptions from an image.
```python
response = client.midjourney.describe(
    img_url="https://example.com/image.png",
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `img_url` (str): URL of image to describe
- `callback` (str, optional): Webhook URL
---
#### `shorten(prompt, callback=None)`
Optimize and shorten a prompt.
```python
response = client.midjourney.shorten(
    prompt="a very detailed and long prompt text that needs optimization",
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `prompt` (str): Prompt to shorten (1-8192 characters)
- `callback` (str, optional): Webhook URL
---
### Image Extension
#### `pan(job_id, image_no, direction, scale, remix_prompt=None, callback=None)`
Extend an image in a specific direction.
```python
response = client.midjourney.pan(
    job_id="original-job-id",
    image_no=0,  # Image index (0-3)
    direction=0,  # 0=UP, 1=DOWN, 2=LEFT, 3=RIGHT
    scale=1.5,  # Extension scale (1.1-3.0)
    remix_prompt="add mountains",  # Optional
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original image
- `image_no` (int): Image number to extend (0-3)
- `direction` (int): Direction to extend (0=UP, 1=DOWN, 2=LEFT, 3=RIGHT)
- `scale` (float): Extension scale ratio (1.1-3.0)
- `remix_prompt` (str, optional): Text prompt for the extended area
- `callback` (str, optional): Webhook URL
---
#### `outpaint(job_id, image_no, scale, remix_prompt=None, callback=None)`
Expand an image in all directions.
```python
response = client.midjourney.outpaint(
    job_id="original-job-id",
    image_no=0,  # Image index (0-3)
    scale=1.3,  # Extension scale (1.1-2.0)
    remix_prompt="add forest background",  # Optional
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original image
- `image_no` (int): Image number to extend (0-3)
- `scale` (float): Extension scale ratio (1.1-2.0)
- `remix_prompt` (str, optional): Text prompt for the extended areas
- `callback` (str, optional): Webhook URL
---
### Image Editing
#### `inpaint(job_id, image_no, mask, remix_prompt=None, callback=None)`
Edit specific regions of an image using a mask.
```python
with open("mask.png", "rb") as f:
mask_data = f.read()
response = client.midjourney.inpaint(
job_id="original-job-id",
image_no=0, # Image index (0-3)
mask=mask_data, # PNG mask file
remix_prompt="add a rainbow in the sky", # Optional
callback="https://your-domain.com/webhook" # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original image
- `image_no` (int): Image number to edit (0-3)
- `mask` (bytes): Mask image (PNG) indicating regions to modify
- `remix_prompt` (str, optional): Text prompt for the edited region (1-8192 characters)
- `callback` (str, optional): Webhook URL
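
The mask is passed as raw PNG bytes. A hedged sketch for building one programmatically with Pillow (not a dependency of this SDK); whether white or black marks the region to modify is an assumption here, so verify against the API documentation:

```python
import io
from PIL import Image, ImageDraw

# Build a 1024x1024 grayscale mask with a white rectangle over the area to edit
# (assumption: white = editable region, black = keep unchanged).
mask = Image.new("L", (1024, 1024), 0)
ImageDraw.Draw(mask).rectangle([256, 64, 768, 400], fill=255)

buf = io.BytesIO()
mask.save(buf, format="PNG")
mask_data = buf.getvalue()  # pass as the `mask` argument to inpaint()
```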
---
#### `remix(job_id, image_no, remix_prompt, mode=None, callback=None)`
Transform an image with a new prompt.
```python
response = client.midjourney.remix(
    job_id="original-job-id",
    image_no=0,  # Image index (0-3)
    remix_prompt="turn into a watercolor painting",
    mode=0,  # Optional: 0=Low, 1=High
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original image
- `image_no` (int): Image number to remix (0-3)
- `remix_prompt` (str): New prompt for remix (1-8192 characters)
- `mode` (int, optional): Remix mode (0=Low, 1=High)
- `callback` (str, optional): Webhook URL
---
#### `edit(job_id, image_no, canvas, img_pos, remix_prompt, mask=None, callback=None)`
Edit specific areas of an image with canvas positioning.
```python
from legnext.types import Canvas, CanvasImg, Mask, Polygon
response = client.midjourney.edit(
    job_id="original-job-id",
    image_no=0,
    canvas=Canvas(width=1024, height=1024),
    img_pos=CanvasImg(width=512, height=512, x=256, y=256),
    remix_prompt="change the sky to sunset colors",
    mask=Mask(areas=[
        Polygon(width=1024, height=1024, points=[100, 100, 500, 100, 500, 500, 100, 500])
    ]),
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original image
- `image_no` (int): Image number to edit (0-3)
- `canvas` (Canvas): Target canvas dimensions
- `img_pos` (CanvasImg): Image position and size on canvas
- `remix_prompt` (str): Edit instructions (1-8192 characters)
- `mask` (Mask, optional): Areas to repaint (polygon areas or mask URL)
- `callback` (str, optional): Webhook URL
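
The `points` list in the example above is assumed to be a flat sequence of alternating x, y vertex coordinates; verify against the API documentation. A small sketch with a hypothetical helper that builds a rectangular repaint area from corner coordinates:

```python
from legnext.types import Mask, Polygon

def rect_mask(x0: int, y0: int, x1: int, y1: int,
              width: int = 1024, height: int = 1024) -> Mask:
    # Assumption: points are flattened (x, y) pairs listed clockwise.
    return Mask(areas=[
        Polygon(width=width, height=height,
                points=[x0, y0, x1, y0, x1, y1, x0, y1])
    ])

# Repaint the top half of a 1024x1024 canvas.
top_half = rect_mask(0, 0, 1024, 512)
```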
---
#### `upload_paint(img_url, canvas, img_pos, remix_prompt, mask, callback=None)`
Advanced painting on uploaded images with canvas positioning.
```python
from legnext.types import Canvas, CanvasImg, Mask
response = client.midjourney.upload_paint(
    img_url="https://example.com/image.png",
    canvas=Canvas(width=1024, height=1024),
    img_pos=CanvasImg(width=768, height=768, x=128, y=128),
    remix_prompt="add magical effects",
    mask=Mask(url="https://example.com/mask.png"),
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `img_url` (str): URL of the image to edit
- `canvas` (Canvas): Target canvas dimensions
- `img_pos` (CanvasImg): Image position and size on canvas
- `remix_prompt` (str): Painting instructions (1-8192 characters)
- `mask` (Mask): Areas to edit (required - polygon areas or mask URL)
- `callback` (str, optional): Webhook URL
---
### Image Enhancement
#### `retexture(img_url, remix_prompt, callback=None)`
Change textures and surfaces of an image.
```python
response = client.midjourney.retexture(
    img_url="https://example.com/image.png",
    remix_prompt="metallic and shiny surfaces",
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `img_url` (str): URL of the image to retexture
- `remix_prompt` (str): Texture description (1-8192 characters)
- `callback` (str, optional): Webhook URL
---
#### `remove_background(img_url, callback=None)`
Remove the background from an image.
```python
response = client.midjourney.remove_background(
    img_url="https://example.com/image.png",
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `img_url` (str): URL of the image to process
- `callback` (str, optional): Webhook URL
---
#### `enhance(job_id, image_no, callback=None)`
Enhance image quality (draft to high-res).
```python
response = client.midjourney.enhance(
    job_id="draft-job-id",
    image_no=0,  # Image index (0-3)
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the draft mode image
- `image_no` (int): Image number to enhance (0-3)
- `callback` (str, optional): Webhook URL
---
### Video Generation
#### `video_diffusion(prompt, video_type=None, callback=None)`
Generate a video from a text prompt that includes a source image URL.
```python
# Generate video with image URL in prompt
response = client.midjourney.video_diffusion(
    prompt="https://example.com/image.png a flowing river through mountains",
    video_type=1,  # Optional: 0=480p, 1=720p
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `prompt` (str): Video generation prompt. Format: "[image_url] your prompt text" (1-8192 characters)
- `video_type` (int, optional): Video quality type (0: 480p, 1: 720p)
- `callback` (str, optional): Webhook URL
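
Because the source image URL is embedded in the prompt string itself, a small hypothetical helper (not part of the SDK) can keep the two parts separate in your own code:

```python
def build_video_prompt(image_url: str, text: str) -> str:
    # Assumption: the API expects "<image_url> <prompt text>" separated by a space.
    return f"{image_url} {text}"

response = client.midjourney.video_diffusion(
    prompt=build_video_prompt(
        "https://example.com/image.png",
        "a flowing river through mountains",
    ),
    video_type=1,  # 720p
)
```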
---
#### `extend_video(job_id, video_no, prompt=None, callback=None)`
Extend an existing video.
```python
response = client.midjourney.extend_video(
    job_id="original-video-job-id",
    video_no=0,  # Video index (0 or 1)
    prompt="continue with dramatic lighting",  # Optional
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original video task
- `video_no` (int): Video number to extend (0 or 1)
- `prompt` (str, optional): Text prompt to guide the extension (1-8192 characters)
- `callback` (str, optional): Webhook URL
---
#### `video_upscale(job_id, video_no, callback=None)`
Upscale a video to higher resolution.
```python
response = client.midjourney.video_upscale(
    job_id="original-video-job-id",
    video_no=0,  # Video index (0 or 1)
    callback="https://your-domain.com/webhook"  # Optional
)
```
**Parameters:**
- `job_id` (str): ID of the original video task
- `video_no` (int): Video number to upscale (0 or 1)
- `callback` (str, optional): Webhook URL
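
Putting the video methods together, a minimal sketch that generates a video, waits for it, and then upscales the first result. The shape of a video task's `output` object is not documented here, so the sketch only prints the final status:

```python
video = client.midjourney.video_diffusion(
    prompt="https://example.com/image.png a flowing river through mountains",
    video_type=1,
)
client.tasks.wait_for_completion(video.job_id)

upscale = client.midjourney.video_upscale(job_id=video.job_id, video_no=0)
final = client.tasks.wait_for_completion(upscale.job_id)
print(final.status)
```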
---
## Task Management
### `tasks.get(job_id)`
Get the current status of a task.
```python
task = client.tasks.get(job_id="job-123")
print(f"Status: {task.status}")

if task.status == "completed":
    print(f"Images: {task.output.image_urls}")
```
---
### `tasks.wait_for_completion(job_id, timeout=300, poll_interval=3, on_progress=None)`
Wait for a task to complete with automatic polling.
```python
def show_progress(task):
    print(f"Status: {task.status}")

result = client.tasks.wait_for_completion(
    job_id="job-123",
    timeout=600,  # Max wait time in seconds
    poll_interval=5,  # Check every 5 seconds
    on_progress=show_progress  # Optional callback
)

print(f"Completed! Images: {result.output.image_urls}")
```
**Parameters:**
- `job_id` (str): The job ID to wait for
- `timeout` (float, optional): Maximum time to wait in seconds (default: 300)
- `poll_interval` (float, optional): Time between status checks in seconds (default: 3)
- `on_progress` (callable, optional): Callback function called on each status check
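
If you prefer to control polling yourself (for example, to integrate with your own scheduler), an equivalent loop can be written on top of `tasks.get`. This is a sketch with a hypothetical helper; the terminal status strings other than `"completed"` are assumptions:

```python
import time

def poll_until_done(client, job_id: str, interval: float = 3.0, timeout: float = 300.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        task = client.tasks.get(job_id=job_id)
        if task.status == "completed":
            return task
        if task.status == "failed":  # assumption: "failed" marks a terminal error state
            raise RuntimeError(f"Task {job_id} failed")
        time.sleep(interval)
    raise TimeoutError(f"Task {job_id} did not finish within {timeout}s")
```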
---
## Async Support
All methods are available in async form using `AsyncClient`:
```python
import asyncio
from legnext import AsyncClient

async def main():
    async with AsyncClient(api_key="your-api-key") as client:
        # Generate image
        response = await client.midjourney.diffusion(
            text="a futuristic cityscape"
        )

        # Wait for completion
        result = await client.tasks.wait_for_completion(response.job_id)
        print(result.output.image_urls)

asyncio.run(main())
```
### Batch Processing
```python
async def generate_multiple():
    async with AsyncClient(api_key="your-api-key") as client:
        # Start multiple tasks
        tasks = [
            client.midjourney.diffusion(text=f"image prompt {i}")
            for i in range(5)
        ]
        responses = await asyncio.gather(*tasks)

        # Wait for all completions
        results = await asyncio.gather(*[
            client.tasks.wait_for_completion(r.job_id)
            for r in responses
        ])

        return results
```
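To avoid hitting rate limits when submitting many jobs at once, you can cap concurrency with an `asyncio.Semaphore`. A sketch built on the same calls as above; the limit of 3 is arbitrary:

```python
import asyncio
from legnext import AsyncClient

async def generate_bounded(prompts: list[str], max_concurrent: int = 3):
    sem = asyncio.Semaphore(max_concurrent)

    async with AsyncClient(api_key="your-api-key") as client:
        async def run_one(text: str):
            # At most `max_concurrent` tasks run this block at the same time.
            async with sem:
                response = await client.midjourney.diffusion(text=text)
                return await client.tasks.wait_for_completion(response.job_id)

        return await asyncio.gather(*(run_one(p) for p in prompts))
```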
---
## Error Handling
```python
from legnext import Client, LegnextAPIError, RateLimitError, ValidationError

client = Client(api_key="your-api-key")

try:
    response = client.midjourney.diffusion(text="a beautiful landscape")
except ValidationError as e:
    print(f"Invalid parameters: {e.message}")
except RateLimitError as e:
    print(f"Rate limit exceeded: {e.message}")
except LegnextAPIError as e:
    print(f"API error: {e.message} (status: {e.status_code})")
```
**Exception Types:**
- `ValidationError` (400): Invalid request parameters
- `AuthenticationError` (401): Invalid API key
- `NotFoundError` (404): Resource not found
- `RateLimitError` (429): Rate limit exceeded
- `ServerError` (5xx): Server errors
- `LegnextAPIError`: Base exception for all API errors
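
For transient failures, a simple retry with exponential backoff is often enough. This is a hedged sketch: the backoff schedule is arbitrary, it assumes `ServerError` is importable from the package root like the other exceptions listed above, and it does not use any retry-after hint the API may provide:

```python
import time
from legnext import Client, RateLimitError, ServerError  # ServerError import is an assumption

client = Client(api_key="your-api-key")

def diffusion_with_retry(text: str, attempts: int = 5):
    for attempt in range(attempts):
        try:
            return client.midjourney.diffusion(text=text)
        except (RateLimitError, ServerError):
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...
```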
---
## Requirements
- Python 3.10+
- httpx >= 0.27.0
- pydantic >= 2.0.0
---
## Support
For questions or issues, please contact:
- Email: support@legnext.ai
- Website: https://legnext.ai
---
## License
Apache License 2.0 - See [LICENSE](LICENSE) for details.