# Team Query
> The ORM you get when you love SQL but hate typing it twice.
A SQL-first code generator inspired by [sqlc](https://github.com/sqlc-dev/sqlc) that creates type-safe database clients for multiple languages from SQL query files.
[GitHub](https://github.com/jechenique/team-query)
[License](LICENSE)
## **Why *Not* Use Team Query?**
*A bold tale of chaos, creativity, and the art of duplicating SQL in every language imaginable.*
Ah, Team Query—so preachy, so organized, so… *collaborative*. But let’s be honest: where’s the fun in shared logic, code reusability, and consistent data access when you could just wing it?
---
### **1. Embrace Creative SQL Expression**
Why settle for one boring, correct version of a SQL query when each engineer can write their *own* slightly different, definitely-not-bug-free version?
**Analytics Team:**
``` sql
SELECT u.id AS user_id, AVG(o.total_amount) AS avg_order_value
FROM users u
JOIN orders o ON u.id = o.user_id
WHERE o.created_at >= CURRENT_DATE - INTERVAL '30 days'
GROUP BY u.id
```
**Backend Team:**
``` sql
SELECT o.user_id, SUM(o.total_amount) / COUNT(DISTINCT o.id) AS avg_order_value
FROM orders o
WHERE o.created_at >= NOW() - INTERVAL '1 month'
GROUP BY o.user_id
```
**Data Engineering:**
``` sql
SELECT user_id, ROUND(AVG(order_value)::numeric, 2) AS avg_order_value
FROM (
    SELECT user_id, SUM(amount) AS order_value
    FROM order_items
    WHERE created_at >= DATE_TRUNC('day', NOW() - INTERVAL '30 days')
    GROUP BY order_id, user_id
) sub
GROUP BY user_id
```
Who needs consistency when you’ve got *character*?
---
### **2. Promote Job Security Through Mystery**
When no one knows which version of the query is “the real one,” debugging becomes an exciting, career-building treasure hunt.
**SQL spelunking**: a new sport for engineers.
---
### **3. Strengthen Team Bonding Over Cross-Team Blame**
- Frontend team broke the app? Backend says “works on my query.”
- Backend returns bad data? Frontend says “not my logic.”
Thanks to the *absence* of a shared source of truth, everyone gets to participate in a lively blame game—perfect for team morale!
---
### **4. Enable Infinite Customization**
Why let one boring Team Query dictate your style when each team can implement their own caching, parameterization, pagination, and bugs?
Think of all the slightly-off implementations you can proudly call your own.
---
### **5. Make Onboarding Unforgettable**
New engineer asks:
> “Where’s the query for revenue?”
You say:
> “Depends. Which version do you want? The Python one, the JavaScript one, or the undocumented legacy one from 2017?”
They’ll never forget their first week. Neither will HR.
---
So next time someone preaches Team Query, just smile and say,
> “No thanks. I prefer a *little* chaos in my SQL.”
## Table of Contents
1. [Overview](#overview)
2. [Installation](#installation)
3. [Quick Start](#quick-start)
4. [Connection Flexibility](#connection-flexibility)
5. [Writing SQL Queries](#writing-sql-queries)
- [Query Annotations](#query-annotations)
- [Parameter Types](#parameter-types)
- [Dynamic Queries](#dynamic-queries)
- [Conditional SQL Blocks](#conditional-sql-blocks)
- [Result Types](#result-types)
6. [Using Generated Code](#using-generated-code)
- [Python Usage](#python-usage)
- [JavaScript Usage](#javascript-usage)
- [Logging](#logging)
- [Performance Monitoring](#performance-monitoring)
- [Transaction Support](#transaction-support)
7. [Recommended Team Workflow](#recommended-team-workflow)
8. [Configuration](#configuration)
9. [Advanced Features](#advanced-features)
10. [Building and Publishing](#building-and-publishing)
11. [Contributing](#contributing)
12. [License](#license)
## Overview
Team Query lets you write SQL queries once and generate type-safe database clients for multiple programming languages. It's designed for developers who:
- Prefer writing SQL directly rather than using an ORM
- Want type safety and IDE autocompletion
- Need to support multiple programming languages
- Want to avoid duplicating database logic across languages
**Key Features:**
- Write SQL queries in `.sql` files with type annotations
- Generate type-safe clients for Python and JavaScript
- Support for dynamic queries with conditional blocks
- Built-in logging and performance monitoring
- Transaction support
- Parameter validation to prevent SQL injection
## Installation
```bash
# Quick installation using the provided shell script (recommended)
curl -sSL https://raw.githubusercontent.com/jechenique/team-query/master/install.sh | bash
# Install as a library
pip install team-query
# Install as a CLI tool
pipx install team-query
```
## Quick Start
1. **Create SQL query files**
```sql
-- name: GetUserById :one
-- param: id int User ID
SELECT * FROM users WHERE id = :id;
-- name: CreateUser :execresult
-- param: name string User's name
-- param: email string User's email
INSERT INTO users (name, email)
VALUES (:name, :email)
RETURNING id, name, email;
```
2. **Create a configuration file** (`team-query.yaml`)
```yaml
version: 1
project:
  name: myproject
  version: "1.0.0"
sql:
  - queries: ["./queries/*.sql"]
    schema: ["public"]
    engine: postgresql
    gen:
      - plugin: python
        out: "./generated/python"
      - plugin: javascript
        out: "./generated/javascript"
```
3. **Generate code**
```bash
team-query generate --config team-query.yaml
```
4. **Use the generated code**
Python:
```python
from generated.python import GetUserById, ListActiveUsers, SearchUsers
# Connect to database
import psycopg
conn = psycopg.connect("postgresql://user:password@localhost/dbname")
# Simple query with required parameters
user = GetUserById(conn, id=1)
print(f"Found user: {user['name']}")
# Query with limit
users = ListActiveUsers(conn, limit=10)
for user in users:
    print(f"Active user: {user['name']}")

# Dynamic query with optional parameters
search_results = SearchUsers(
    conn,
    name="John",  # Optional - will be included in WHERE clause
    email=None,   # Optional - will be excluded from WHERE clause
    limit=10,
    offset=0
)
```
JavaScript:
```javascript
// Import generated functions
const { GetUserById, ListActiveUsers, SearchUsers } = require('./generated/javascript');
// Connect to database
const { Pool } = require('pg');
const pool = new Pool({
  connectionString: 'postgresql://user:password@localhost/dbname'
});

async function main() {
  // Simple query with required parameters
  const user = await GetUserById(pool, { id: 1 });
  console.log(`Found user: ${user.name}`);

  // Query with limit
  const users = await ListActiveUsers(pool, { limit: 10 });
  users.forEach(user => {
    console.log(`Active user: ${user.name}`);
  });

  // Dynamic query with optional parameters
  const searchResults = await SearchUsers(pool, {
    name: "John", // Optional - will be included in WHERE clause
    limit: 10,
    offset: 0
  });
}
main().catch(console.error);
```
## Connection Flexibility
The generated query functions are designed to be flexible with database connections:
- **Python**: Functions can accept either:
- An existing connection object (`psycopg.Connection`)
- A connection string (e.g., `"postgresql://user:password@localhost/dbname"`)
- **JavaScript**: Functions can accept either:
- A Pool object from the `pg` package
- A Client object from the `pg` package
- A connection string (e.g., `"postgresql://user:password@localhost/dbname"`)
This flexibility lets you manage connections however best suits your application: pass connection strings directly to query functions, or manage connections and pools externally.
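The dispatch described above can be pictured with a small sketch. Note that `resolve_connection` and its injectable `connect` factory are illustrative names, not part of the generated API; the real generated code may handle this differently:

```python
def resolve_connection(conn_or_dsn, connect=None):
    """Return (connection, owned); owned=True means we opened it here.

    `connect` is the factory used when a DSN string is passed (in the real
    client this would be something like psycopg.connect); it is injectable
    here so the sketch has no third-party dependency.
    """
    if isinstance(conn_or_dsn, str):
        if connect is None:
            raise ValueError("a connect factory is required for DSN strings")
        # Caller passed a connection string: open (and own) a new connection.
        return connect(conn_or_dsn), True
    # Caller passed an existing connection/pool: reuse it, do not close it.
    return conn_or_dsn, False
```

A function generated along these lines would close the connection after the query only when `owned` is true, leaving caller-managed connections untouched.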
## Writing SQL Queries
### Query Annotations
Each SQL query needs annotations to define its name and return type:
```sql
-- name: QueryName :returnType
-- param: paramName paramType Description
SQL_QUERY_HERE;
```
Return types:
- `:one` - Returns a single record
- `:many` - Returns multiple records
- `:exec` - Executes without returning data
- `:execrows` - Returns affected row count
- `:execresult` - Returns result data (for INSERT/UPDATE with RETURNING)
Example:
```sql
-- name: ListActiveUsers :many
-- param: limit int Maximum number of users to return
SELECT id, name, email FROM users
WHERE active = true
ORDER BY created_at DESC
LIMIT :limit;
```
### Parameter Types
Define parameters with type annotations:
```sql
-- param: paramName paramType [optional description]
```
Supported types:
- `int`, `integer`, `bigint`, `smallint`
- `string`, `text`, `varchar`, `char`
- `bool`, `boolean`
- `float`, `real`, `double`
- `decimal`, `numeric`
- `date`, `time`, `timestamp`, `timestamptz`
- `json`, `jsonb`
- `uuid`
- `bytea`
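For illustration only (the generator's actual mapping may differ), the annotation types above might translate into Python-side checks roughly like this:

```python
# Hypothetical mapping from SQL annotation types to Python types.
PARAM_TYPES = {
    "int": int, "integer": int, "bigint": int, "smallint": int,
    "string": str, "text": str, "varchar": str, "char": str,
    "bool": bool, "boolean": bool,
    "float": float, "real": float, "double": float,
}

def validate_param(name, value, sql_type):
    """Raise TypeError when a non-None value has the wrong Python type."""
    expected = PARAM_TYPES.get(sql_type)
    if value is not None and expected is not None and not isinstance(value, expected):
        raise TypeError(
            f"{name} must be {expected.__name__}, got {type(value).__name__}"
        )
    return value
```

`None` is always allowed here because optional parameters (see Dynamic Queries below) are expressed by passing null/undefined.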
### Dynamic Queries
Create dynamic queries with optional parameters:
```sql
-- name: SearchUsers :many
-- param: name string Optional name filter
-- param: email string Optional email filter
-- param: limit int Maximum results to return
-- param: offset int Pagination offset
SELECT * FROM users
WHERE
(:name IS NULL OR name ILIKE '%' || :name || '%') AND
(:email IS NULL OR email ILIKE '%' || :email || '%')
ORDER BY name
LIMIT :limit OFFSET :offset;
```
### Conditional SQL Blocks
For better performance with dynamic queries, use conditional blocks:
```sql
-- name: SearchUsers :many
-- param: name string Optional name filter
-- param: email string Optional email filter
-- param: limit int Maximum results to return
-- param: offset int Pagination offset
SELECT * FROM users
WHERE
-- {name
AND name ILIKE '%' || :name || '%'
-- }
-- {email
AND email ILIKE '%' || :email || '%'
-- }
ORDER BY name
LIMIT :limit OFFSET :offset;
```
When a parameter is null/undefined, its entire block is removed from the query. This creates more efficient SQL that can better utilize indexes.
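As a rough sketch of that mechanism (not the generator's actual parser), stripping a block whose parameter is null could look like this:

```python
import re

def strip_conditional_blocks(sql, params):
    """Remove '-- {name ... -- }' blocks whose parameter is None/absent.

    Simplified illustration of the block syntax described above; the real
    generated code may parse these markers differently.
    """
    pattern = re.compile(r"--\s*\{(\w+)\n(.*?)--\s*\}\n?", re.DOTALL)

    def replace(match):
        name, body = match.group(1), match.group(2)
        # Keep the block body only when the parameter has a value.
        return body if params.get(name) is not None else ""

    return pattern.sub(replace, sql)
```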
### Result Types
Query results are returned as:
- **Python**: Dictionary objects with column names as keys
- **JavaScript**: Plain JavaScript objects with column names as properties
This makes the results easy to work with and compatible with most frameworks and libraries.
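The dictionary shape is easy to reproduce or mock in tests; a minimal dependency-free sketch:

```python
def rows_to_dicts(column_names, rows):
    """Zip column names with row tuples, matching the shape the generated
    Python client returns."""
    return [dict(zip(column_names, row)) for row in rows]
```

With psycopg 3, the same shape is available natively by passing `row_factory=psycopg.rows.dict_row` to `connect()`; the Node `pg` driver returns plain objects by default.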
## Using Generated Code
### Python Usage
The generated Python code provides a clean, type-safe API for database operations:
```python
# Import generated functions
from generated.python import GetUserById, ListActiveUsers, SearchUsers
# Connect to database
import psycopg
conn = psycopg.connect("postgresql://user:password@localhost/dbname")
# Simple query with required parameters
user = GetUserById(conn, id=1)
print(f"Found user: {user['name']}")
# Query with limit
users = ListActiveUsers(conn, limit=10)
for user in users:
    print(f"Active user: {user['name']}")

# Dynamic query with optional parameters
search_results = SearchUsers(
    conn,
    name="John",  # Optional - will be included in WHERE clause
    email=None,   # Optional - will be excluded from WHERE clause
    limit=10,
    offset=0
)
```
### JavaScript Usage
The generated JavaScript code provides an async/await API for database operations:
```javascript
// Import generated functions
const { GetUserById, ListActiveUsers, SearchUsers } = require('./generated/javascript');
// Connect to database
const { Pool } = require('pg');
const pool = new Pool({
  connectionString: 'postgresql://user:password@localhost/dbname'
});

async function main() {
  // Simple query with required parameters
  const user = await GetUserById(pool, { id: 1 });
  console.log(`Found user: ${user.name}`);

  // Query with limit
  const users = await ListActiveUsers(pool, { limit: 10 });
  users.forEach(user => {
    console.log(`Active user: ${user.name}`);
  });

  // Dynamic query with optional parameters
  const searchResults = await SearchUsers(pool, {
    name: "John", // Optional - will be included in WHERE clause
    // email is undefined - will be excluded from WHERE clause
    limit: 10,
    offset: 0
  });
}
main().catch(console.error);
```
### Logging
Both Python and JavaScript clients include built-in logging and performance monitoring:
#### Python Logging
```python
from generated.python import set_log_level, set_logger
# Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
set_log_level("DEBUG")
# Use a custom logger if desired
from logging import getLogger
custom_logger = getLogger("my_app")
set_logger(custom_logger)
# Now all database operations will log at the specified level
user = GetUserById(conn, id=1) # Will log query details at DEBUG level
```
#### JavaScript Logging
```javascript
const { setLogLevel, setLogger } = require('./generated/javascript');
// Set log level (debug, info, warn, error)
setLogLevel('debug');
// Use a custom logger if desired
const customLogger = {
  debug: (msg) => console.debug(`[DB] ${msg}`),
  info: (msg) => console.info(`[DB] ${msg}`),
  warn: (msg) => console.warn(`[DB] ${msg}`),
  error: (msg) => console.error(`[DB] ${msg}`)
};
setLogger(customLogger);
// Now all database operations will log at the specified level
const user = await GetUserById(pool, { id: 1 }); // Will log query details at debug level
```
### Performance Monitoring
Team Query includes basic performance monitoring to help track query execution times:
#### Python Monitoring
```python
from generated.python import configure_monitoring
# Option 1: No monitoring (default)
configure_monitoring(mode="none")
# Option 2: Basic monitoring (logs execution time)
configure_monitoring(mode="basic")
# Use queries normally - they'll be monitored according to configuration
user = GetUserById(conn, id=1) # Will log execution time at DEBUG level
```
#### JavaScript Monitoring
```javascript
const { configureMonitoring } = require('./generated/javascript');
// Option 1: No monitoring (default)
configureMonitoring("none");
// Option 2: Basic monitoring (logs execution time)
configureMonitoring("basic");
// Use queries normally - they'll be monitored according to configuration
const user = await GetUserById(pool, { id: 1 }); // Will log execution time at debug level
```
### Transaction Support
#### Python Transactions
```python
# Using psycopg transaction support
conn.autocommit = False
try:
    author = CreateAuthor(conn, name="John", bio="A writer")
    book = CreateBook(conn, title="My Book", author_id=author["id"])
    conn.commit()
except Exception:
    conn.rollback()
    raise
```
#### JavaScript Transactions
```javascript
// Using the built-in transaction manager
const { createTransaction } = require('./generated/javascript');
async function createAuthorWithBooks() {
  const tx = createTransaction(pool);
  try {
    await tx.begin();
    const author = await CreateAuthor(tx.client, { name: "John", bio: "A writer" });
    const book = await CreateBook(tx.client, {
      title: "My Book",
      author_id: author.id
    });
    await tx.commit();
    return { author, book };
  } catch (error) {
    await tx.rollback();
    throw error;
  }
}
```
#### Getting a Client Directly
For more flexibility, you can get a client directly and manage transactions manually:
##### Python
```python
# Using psycopg directly
import psycopg

# Get a connection
conn = psycopg.connect("postgresql://user:password@localhost/dbname")

# Start a transaction
conn.autocommit = False
try:
    # Execute multiple queries in the same transaction
    user = CreateUser(conn, name="Alice", email="alice@example.com")
    profile = CreateUserProfile(conn, user_id=user["id"], bio="Software Engineer")

    # Custom SQL if needed
    with conn.cursor() as cur:
        cur.execute("UPDATE user_stats SET last_login = NOW() WHERE user_id = %s", (user["id"],))

    # Commit when done
    conn.commit()
except Exception:
    conn.rollback()
    raise
finally:
    conn.close()
```
##### JavaScript
```javascript
// Using pg directly
const { Pool } = require('pg');
const pool = new Pool({
  connectionString: 'postgresql://user:password@localhost/dbname'
});

async function complexTransaction() {
  // Get a client from the pool
  const client = await pool.connect();
  try {
    // Start transaction
    await client.query('BEGIN');

    // Execute generated queries with the client
    const user = await CreateUser(client, { name: "Alice", email: "alice@example.com" });
    const profile = await CreateUserProfile(client, { userId: user.id, bio: "Software Engineer" });

    // Mix with custom queries if needed
    await client.query(
      'UPDATE user_stats SET last_login = NOW() WHERE user_id = $1',
      [user.id]
    );

    // Commit transaction
    await client.query('COMMIT');
    return { user, profile };
  } catch (error) {
    // Rollback on error
    await client.query('ROLLBACK');
    throw error;
  } finally {
    // Release client back to pool
    client.release();
  }
}
```
## Recommended Team Workflow
For teams working on multiple projects that share database access, we recommend the following approach:
#### 1. Centralized SQL Repository
Create a dedicated repository for your SQL queries, organized by domain or responsibility:
```
sql-repository/
├── team-query.yaml # Configuration file
├── schema/
│ └── schema.sql # Database schema
└── queries/
├── users/ # User-related queries
│ ├── auth.sql # Authentication queries
│ └── profiles.sql # User profile queries
├── content/ # Content-related queries
│ ├── posts.sql # Blog post queries
│ └── comments.sql # Comment queries
└── analytics/ # Analytics queries
└── metrics.sql # Usage metrics queries
```
This structure:
- Keeps all SQL in one place (single source of truth)
- Makes it easy to review SQL changes
- Allows for domain-specific organization
- Facilitates code reviews by domain experts
#### 2. Individual Code Generation
Team members clone the SQL repository and generate code for their specific projects:
```bash
# Clone the SQL repository
git clone https://github.com/your-org/sql-repository.git

# Generate code for your specific project
cd sql-repository
team-query generate --config team-query.yaml --output ../my-project/src/generated
```
#### 3. Continuous Integration
For larger teams, set up CI/CD to automatically generate and publish client packages:
1. Set up a GitHub Action or other CI pipeline that:
- Triggers on changes to the SQL repository
- Generates code for each supported language
- Publishes packages to your package registry (npm, PyPI, etc.)
2. Projects then depend on these published packages:
```bash
# Python project
pip install your-org-db-client

# JavaScript project
npm install @your-org/db-client
```
This workflow ensures:
- Consistent database access across all projects
- Type safety and validation in all languages
- Easy updates when SQL changes
- Minimal duplication of database logic
## Configuration
Create a `team-query.yaml` configuration file:
```yaml
version: 1
project:
  name: myproject
  version: "1.0.0"
sql:
  - queries: ["./queries/*.sql"]    # Path to SQL files (glob pattern)
    schema: ["public"]              # Database schema
    engine: postgresql              # Database engine
    gen:
      - plugin: python              # Generate Python client
        out: "./generated/python"   # Output directory
      - plugin: javascript          # Generate JavaScript client
        out: "./generated/javascript"
```
## Advanced Features
### Type Safety and SQL Injection Prevention
Team Query provides built-in type safety and SQL injection prevention:
- Generated code includes type validation for all parameters
- Parameters are passed separately from SQL (never string concatenation)
- Leverages database driver's built-in protection (psycopg for Python, pg for Node.js)
```python
# This will raise a TypeError because id should be an integer
user = GetUserById(conn, id="not_an_integer")
```
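Beyond type checks, the protection comes from keeping SQL and values separate. A simplified sketch (assumed, not the generator's actual implementation, and ignoring edge cases such as `::type` casts) of how `:param` placeholders could be rewritten into the driver's bind style:

```python
import re

def to_driver_query(sql):
    """Rewrite ':param' placeholders into psycopg's '%(param)s' named style
    so the driver binds values itself; values are never concatenated into
    the SQL string."""
    return re.sub(r":(\w+)", r"%(\1)s", sql)

# The rewritten SQL is then executed with parameters passed separately, e.g.:
#   cur.execute(to_driver_query(sql), {"id": user_id})
```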
## Building and Publishing
If you've made changes to Team Query and want to build and publish your own version, follow these steps:
### Prerequisites
- Python 3.8 or higher
- Poetry (dependency management)
- A PyPI account (for publishing)
### Development Workflow
1. Clone the repository:
```bash
git clone https://github.com/jechenique/team-query.git
cd team-query
```
2. Install development dependencies:
```bash
poetry install
```
3. Run code formatters:
```bash
# Format code with Black
poetry run black src tests

# Sort imports with isort
poetry run isort src tests
```
4. Run linters and type checking:
```bash
# Run pylint
poetry run pylint src

# Run mypy for type checking
poetry run mypy src
```
> **Note:** If you're using Python 3.13, you may encounter issues with pylint due to compatibility problems with new language features like `typealias`. In this case, you can skip the pylint check or use an earlier Python version for development.
>
> **Note on type checking:** The codebase has some mypy errors related to Optional types and missing type annotations. These don't affect functionality but should be addressed over time. To fix missing stubs, run `python -m pip install types-PyYAML`.
5. Run tests to ensure everything works:
```bash
poetry run pytest
```
### Building the Package
1. Build the package:
```bash
poetry build
```
This will create distribution files in the `dist/` directory.
### Publishing to PyPI
1. Configure Poetry with your PyPI credentials:
```bash
poetry config pypi-token.pypi your-pypi-token
```
2. Publish the package:
```bash
poetry publish
```
Or build and publish in one step:
```bash
poetry publish --build
```
### Publishing to a Private Repository
For organizations using a private package repository:
1. Configure Poetry with your private repository:
```bash
poetry config repositories.my-repo https://your-repo-url.com/simple/
poetry config http-basic.my-repo username password
```
2. Publish to your private repository:
```bash
poetry publish -r my-repo
```
### Automated Publishing with GitHub Actions
You can also set up GitHub Actions to automatically build and publish the package when you create a new release:
1. Create a `.github/workflows/publish.yml` file:
```yaml
name: Publish to PyPI
on:
  release:
    types: [created]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up Python
        uses: actions/setup-python@v4
        with:
          python-version: '3.10'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install poetry
      - name: Build and publish
        env:
          PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
        run: |
          poetry config pypi-token.pypi $PYPI_TOKEN
          poetry build
          poetry publish
```
2. Add your PyPI token as a secret in your GitHub repository settings.
3. Create a new release on GitHub to trigger the workflow.
This automated approach ensures consistent builds and simplifies the release process.
## Contributing
Contributions to Team Query are welcome! Please see [CONTRIBUTING.md](CONTRIBUTING.md) for detailed guidelines on how to contribute to this project, including code style requirements, testing procedures, and the process for submitting pull requests.
## License
MIT License
Raw data
{
"_id": null,
"home_page": "https://github.com/jechenique/team-query",
"name": "team-query",
"maintainer": null,
"docs_url": null,
"requires_python": "<4.0,>=3.9",
"maintainer_email": null,
"keywords": "sql, orm, database, code-generator, postgresql",
"author": "Javier Echenique",
"author_email": "akerbeltz@gmail.com",
"download_url": "https://files.pythonhosted.org/packages/ab/a5/d5972e48b973bda8c3f7ee244ab1eaa049a31c1537c3ae9b2a30fb46d068/team_query-1.0.15.tar.gz",
"platform": null,
"description": "# Team Query\n\n> The ORM you get when you love SQL but hate typing it twice.\n\nA SQL-first code generator inspired by [sqlc](https://github.com/sqlc-dev/sqlc) that creates type-safe database clients for multiple languages from SQL query files.\n\n[](https://github.com/jechenique/team-query)\n[](LICENSE)\n\n## **Why *Not* Using Team Query?** \n*A bold tale of chaos, creativity, and the art of duplicating SQL in every language imaginable.*\n\nAh, Team Query\u2014so preachy, so organized, so\u2026 *collaborative*. But let\u2019s be honest: where\u2019s the fun in shared logic, code reusability, and consistent data access when you could just wing it?\n\n---\n\n### **1. Embrace Creative SQL Expression** \nWhy settle for one boring, correct version of a SQL query when each engineer can write their *own* slightly different, definitely-not-bug-free version?\n\n**Analytics Team:**\n``` sql\nSELECT u.id AS user_id, AVG(o.total_amount) AS avg_order_value\nFROM users u\nJOIN orders o ON u.id = o.user_id\nWHERE o.created_at >= CURRENT_DATE - INTERVAL '30 days'\nGROUP BY u.id\n```\n\n**Backend Team:**\n``` sql\nSELECT o.user_id, SUM(o.total_amount) / COUNT(DISTINCT o.id) AS avg_order_value\nFROM orders o\nWHERE o.created_at >= NOW() - INTERVAL '1 month'\nGROUP BY o.user_id\n```\n\n**Data Engineering:**\n``` sql\nSELECT user_id, ROUND(AVG(order_value)::numeric, 2) AS avg_order_value\nFROM (\n SELECT user_id, SUM(amount) AS order_value\n FROM order_items\n WHERE created_at >= DATE_TRUNC('day', NOW() - INTERVAL '30 days')\n GROUP BY order_id, user_id\n) sub\nGROUP BY user_id\n```\n\nWho needs consistency when you\u2019ve got *character*?\n\n---\n\n### **2. Promote Job Security Through Mystery** \nWhen no one knows which version of the query is \u201cthe real one,\u201d debugging becomes an exciting, career-building treasure hunt. \n**SQL spelunking**: a new sport for engineers.\n\n---\n\n### **3. 
Strengthen Team Bonding Over Cross-Team Blame** \n- Frontend team broke the app? Backend says \u201cworks on my query.\u201d \n- Backend returns bad data? Frontend says \u201cnot my logic.\u201d \n\nThanks to the *absence* of a shared source of truth, everyone gets to participate in a lively blame game\u2014perfect for team morale!\n\n---\n\n### **4. Enable Infinite Customization** \nWhy let one boring Team Query dictate your style when each team can implement their own caching, parameterization, pagination, and bug? \nThink of all the slightly-off implementations you can proudly call your own.\n\n---\n\n### **5. Make Onboarding Unforgettable** \nNew engineer asks:\n>\u201cWhere\u2019s the query for revenue?\u201d\n\nYou say:\n> \u201cDepends. Which version do you want? The Python one, the JavaScript one, or the undocumented legacy one from 2017?\u201d\n\nThey\u2019ll never forget their first week. Neither will HR.\n\n---\n\nSo next time someone preaches Team Query, just smile and say, \n> \u201cNo thanks. I prefer a *little* chaos in my SQL.\u201d\n\n## Table of Contents\n1. [Overview](#overview)\n2. [Installation](#installation)\n3. [Quick Start](#quick-start)\n4. [Connection Flexibility](#connection-flexibility)\n5. [Writing SQL Queries](#writing-sql-queries)\n - [Query Annotations](#query-annotations)\n - [Parameter Types](#parameter-types)\n - [Dynamic Queries](#dynamic-queries)\n - [Conditional SQL Blocks](#conditional-sql-blocks)\n - [Result Types](#result-types)\n6. [Using Generated Code](#using-generated-code)\n - [Python Usage](#python-usage)\n - [JavaScript Usage](#javascript-usage)\n - [Logging](#logging)\n - [Performance Monitoring](#performance-monitoring)\n - [Transaction Support](#transaction-support)\n7. [Recommended Team Workflow](#recommended-team-workflow)\n8. [Configuration](#configuration)\n9. [Advanced Features](#advanced-features)\n10. [Building and Publishing](#building-and-publishing)\n11. [Contributing](#contributing)\n12. 
[License](#license)\n## Overview\n\nTeam Query lets you write SQL queries once and generate type-safe database clients for multiple programming languages. It's designed for developers who:\n\n- Prefer writing SQL directly rather than using an ORM\n- Want type safety and IDE autocompletion\n- Need to support multiple programming languages\n- Want to avoid duplicating database logic across languages\n\n**Key Features:**\n- Write SQL queries in `.sql` files with type annotations\n- Generate type-safe clients for Python and JavaScript \n- Support for dynamic queries with conditional blocks\n- Built-in logging and performance monitoring\n- Transaction support\n- Parameter validation to prevent SQL injection\n\n## Installation\n\n```bash\n# Quick installation using the provided shell script (recommended)\ncurl -sSL https://raw.githubusercontent.com/jechenique/team-query/master/install.sh | bash\n\n# Install as a library\npip install team-query\n\n# Install as a CLI tool\npipx install team-query\n```\n\n## Quick Start\n\n1. **Create SQL query files**\n\n```sql\n-- name: GetUserById :one\n-- param: id int User ID\nSELECT * FROM users WHERE id = :id;\n\n-- name: CreateUser :execresult\n-- param: name string User's name\n-- param: email string User's email\nINSERT INTO users (name, email) \nVALUES (:name, :email)\nRETURNING id, name, email;\n```\n\n2. **Create a configuration file** (`team-query.yaml`)\n\n```yaml\nversion: 1\nproject:\n name: myproject\n version: \"1.0.0\"\nsql:\n - queries: [\"./queries/*.sql\"]\n schema: [\"public\"]\n engine: postgresql\n gen:\n - plugin: python\n out: \"./generated/python\"\n - plugin: javascript\n out: \"./generated/javascript\"\n```\n\n3. **Generate code**\n\n```bash\nteam-query generate --config team-query.yaml\n```\n\n4. 
**Use the generated code**\n\nPython:\n```python\nfrom generated.python import GetUserById, ListActiveUsers, SearchUsers\n\n# Connect to database\nimport psycopg\nconn = psycopg.connect(\"postgresql://user:password@localhost/dbname\")\n\n# Simple query with required parameters\nuser = GetUserById(conn, id=1)\nprint(f\"Found user: {user['name']}\")\n\n# Query with limit\nusers = ListActiveUsers(conn, limit=10)\nfor user in users:\n print(f\"Active user: {user['name']}\")\n\n# Dynamic query with optional parameters\nsearch_results = SearchUsers(\n conn,\n name=\"John\", # Optional - will be included in WHERE clause\n email=None, # Optional - will be excluded from WHERE clause\n limit=10,\n offset=0\n)\n```\n\nJavaScript:\n```javascript\n// Import generated functions\nconst { GetUserById, ListActiveUsers, SearchUsers } = require('./generated/javascript');\n\n// Connect to database\nconst { Pool } = require('pg');\nconst pool = new Pool({\n connectionString: 'postgresql://user:password@localhost/dbname'\n});\n\nasync function main() {\n // Simple query with required parameters\n const user = await GetUserById(pool, { id: 1 });\n console.log(`Found user: ${user.name}`);\n \n // Query with limit\n const users = await ListActiveUsers(pool, { limit: 10 });\n users.forEach(user => {\n console.log(`Active user: ${user.name}`);\n });\n \n // Dynamic query with optional parameters\n const searchResults = await SearchUsers(pool, {\n name: \"John\", // Optional - will be included in WHERE clause\n limit: 10,\n offset: 0\n });\n}\n\nmain().catch(console.error);\n```\n\n## Connection Flexibility\n\nThe generated query functions are designed to be flexible with database connections:\n\n- **Python**: Functions can accept either:\n - An existing connection object (`psycopg.Connection`)\n - A connection string (e.g., `\"postgresql://user:password@localhost/dbname\"`)\n\n- **JavaScript**: Functions can accept either:\n - A Pool object from the `pg` package\n - A Client object from the 
`pg` package\n - A connection string (e.g., `\"postgresql://user:password@localhost/dbname\"`)\n\nThis flexibility allows you to manage connections however best suits your application's needs - either by passing connection strings directly to query functions or by managing connection pools externally.\n\n## Writing SQL Queries\n\n### Query Annotations\n\nEach SQL query needs annotations to define its name and return type:\n\n```sql\n-- name: QueryName :returnType\n-- param: paramName paramType Description\nSQL_QUERY_HERE;\n```\n\nReturn types:\n- `:one` - Returns a single record\n- `:many` - Returns multiple records\n- `:exec` - Executes without returning data\n- `:execrows` - Returns affected row count\n- `:execresult` - Returns result data (for INSERT/UPDATE with RETURNING)\n\nExample:\n```sql\n-- name: ListActiveUsers :many\n-- param: limit int Maximum number of users to return\nSELECT id, name, email FROM users \nWHERE active = true \nORDER BY created_at DESC\nLIMIT :limit;\n```\n\n### Parameter Types\n\nDefine parameters with type annotations:\n\n```sql\n-- param: paramName paramType [optional description]\n```\n\nSupported types:\n- `int`, `integer`, `bigint`, `smallint`\n- `string`, `text`, `varchar`, `char`\n- `bool`, `boolean`\n- `float`, `real`, `double`\n- `decimal`, `numeric`\n- `date`, `time`, `timestamp`, `timestamptz`\n- `json`, `jsonb`\n- `uuid`\n- `bytea`\n\n### Dynamic Queries\n\nCreate dynamic queries with optional parameters:\n\n```sql\n-- name: SearchUsers :many\n-- param: name string Optional name filter\n-- param: email string Optional email filter\n-- param: limit int Maximum results to return\n-- param: offset int Pagination offset\nSELECT * FROM users\nWHERE \n (:name IS NULL OR name ILIKE '%' || :name || '%') AND\n (:email IS NULL OR email ILIKE '%' || :email || '%')\nORDER BY name\nLIMIT :limit OFFSET :offset;\n```\n\n### Conditional SQL Blocks\n\nFor better performance with dynamic queries, use conditional blocks:\n\n```sql\n-- name: 
### Conditional SQL Blocks

For better performance with dynamic queries, use conditional blocks:

```sql
-- name: SearchUsers :many
-- param: name string Optional name filter
-- param: email string Optional email filter
-- param: limit int Maximum results to return
-- param: offset int Pagination offset
SELECT * FROM users
WHERE 1=1
  -- {name
  AND name ILIKE '%' || :name || '%'
  -- }
  -- {email
  AND email ILIKE '%' || :email || '%'
  -- }
ORDER BY name
LIMIT :limit OFFSET :offset;
```

When a parameter is null/undefined, its entire block is removed from the query. This produces leaner SQL that can make better use of indexes.
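The block-stripping behavior can be approximated in a few lines. `strip_blocks` below is an illustrative sketch, not the generator's actual implementation: it drops everything between `-- {param` and `-- }` whenever that parameter is absent or `None`:

```python
import re

_BLOCK_RE = re.compile(r"--\s*\{(\w+)\n(.*?)--\s*\}", re.DOTALL)

def strip_blocks(sql: str, params: dict) -> str:
    """Keep a '-- {name ... -- }' block only when params['name'] is set."""
    def repl(match):
        name, body = match.group(1), match.group(2)
        return body if params.get(name) is not None else ""
    return _BLOCK_RE.sub(repl, sql)
```

Running this with `{"name": None}` removes the `ILIKE` condition entirely, rather than leaving a `:name IS NULL OR ...` branch for the planner to reason about.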
### Result Types

Query results are returned as:

- **Python**: dictionaries with column names as keys
- **JavaScript**: plain objects with column names as properties

This makes the results easy to work with and compatible with most frameworks and libraries.

## Using Generated Code

### Python Usage

The generated Python code provides a clean, type-safe API for database operations:

```python
# Import generated functions
from generated.python import GetUserById, ListActiveUsers, SearchUsers

# Connect to database
import psycopg
conn = psycopg.connect("postgresql://user:password@localhost/dbname")

# Simple query with required parameters
user = GetUserById(conn, id=1)
print(f"Found user: {user['name']}")

# Query with limit
users = ListActiveUsers(conn, limit=10)
for user in users:
    print(f"Active user: {user['name']}")

# Dynamic query with optional parameters
search_results = SearchUsers(
    conn,
    name="John",   # Optional - included in the WHERE clause
    email=None,    # Optional - excluded from the WHERE clause
    limit=10,
    offset=0,
)
```

### JavaScript Usage

The generated JavaScript code provides an async/await API for database operations:

```javascript
// Import generated functions
const { GetUserById, ListActiveUsers, SearchUsers } = require('./generated/javascript');

// Connect to database
const { Pool } = require('pg');
const pool = new Pool({
  connectionString: 'postgresql://user:password@localhost/dbname'
});

async function main() {
  // Simple query with required parameters
  const user = await GetUserById(pool, { id: 1 });
  console.log(`Found user: ${user.name}`);

  // Query with limit
  const users = await ListActiveUsers(pool, { limit: 10 });
  users.forEach(user => {
    console.log(`Active user: ${user.name}`);
  });

  // Dynamic query with optional parameters
  const searchResults = await SearchUsers(pool, {
    name: "John", // Optional - included in the WHERE clause
    // email is undefined - excluded from the WHERE clause
    limit: 10,
    offset: 0
  });
}

main().catch(console.error);
```

### Logging

Both the Python and JavaScript clients include built-in logging and performance monitoring:

#### Python Logging

```python
from generated.python import set_log_level, set_logger

# Set log level (DEBUG, INFO, WARNING, ERROR, CRITICAL)
set_log_level("DEBUG")

# Use a custom logger if desired
from logging import getLogger
custom_logger = getLogger("my_app")
set_logger(custom_logger)

# Now all database operations log at the specified level
user = GetUserById(conn, id=1)  # Logs query details at DEBUG level
```

#### JavaScript Logging

```javascript
const { setLogLevel, setLogger } = require('./generated/javascript');

// Set log level (debug, info, warn, error)
setLogLevel('debug');

// Use a custom logger if desired
const customLogger = {
  debug: (msg) => console.debug(`[DB] ${msg}`),
  info: (msg) => console.info(`[DB] ${msg}`),
  warn: (msg) => console.warn(`[DB] ${msg}`),
  error: (msg) => console.error(`[DB] ${msg}`)
};
setLogger(customLogger);

// Now all database operations log at the specified level
const user = await GetUserById(pool, { id: 1 }); // Logs query details at debug level
```
### Performance Monitoring

Team Query includes basic performance monitoring to help track query execution times:

#### Python Monitoring

```python
from generated.python import configure_monitoring

# Option 1: No monitoring (default)
configure_monitoring(mode="none")

# Option 2: Basic monitoring (logs execution time)
configure_monitoring(mode="basic")

# Use queries normally - they are monitored according to the configuration
user = GetUserById(conn, id=1)  # Logs execution time at DEBUG level
```

#### JavaScript Monitoring

```javascript
const { configureMonitoring } = require('./generated/javascript');

// Option 1: No monitoring (default)
configureMonitoring("none");

// Option 2: Basic monitoring (logs execution time)
configureMonitoring("basic");

// Use queries normally - they are monitored according to the configuration
const user = await GetUserById(pool, { id: 1 }); // Logs execution time at debug level
```
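As a mental model, "basic" monitoring amounts to wrapping each query function in a timer. The decorator below is a hypothetical sketch of that idea, not Team Query's internal code:

```python
import functools
import logging
import time

logger = logging.getLogger("db")

def timed(fn):
    """Wrap a query function so its wall-clock duration is logged."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logger.debug("%s took %.2f ms", fn.__name__, elapsed_ms)
    return wrapper
```

Because the timing lives in a `finally` block, queries that raise are still recorded.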
### Transaction Support

#### Python Transactions

```python
# Using psycopg transaction support
conn.autocommit = False
try:
    author = CreateAuthor(conn, name="John", bio="A writer")
    book = CreateBook(conn, title="My Book", author_id=author["id"])
    conn.commit()
except Exception:
    conn.rollback()
    raise
```

#### JavaScript Transactions

```javascript
// Using the built-in transaction manager
const { createTransaction } = require('./generated/javascript');

async function createAuthorWithBooks() {
  const tx = createTransaction(pool);
  try {
    await tx.begin();
    const author = await CreateAuthor(tx.client, { name: "John", bio: "A writer" });
    const book = await CreateBook(tx.client, {
      title: "My Book",
      author_id: author.id
    });
    await tx.commit();
    return { author, book };
  } catch (error) {
    await tx.rollback();
    throw error;
  }
}
```

#### Getting a Client Directly

For more flexibility, you can get a client directly and manage transactions manually:

##### Python

```python
# Using psycopg directly
import psycopg

# Get a connection
conn = psycopg.connect("postgresql://user:password@localhost/dbname")

# Start a transaction
conn.autocommit = False
try:
    # Execute multiple queries in the same transaction
    user = CreateUser(conn, name="Alice", email="alice@example.com")
    profile = CreateUserProfile(conn, user_id=user["id"], bio="Software Engineer")

    # Custom SQL if needed
    with conn.cursor() as cur:
        cur.execute("UPDATE user_stats SET last_login = NOW() WHERE user_id = %s", (user["id"],))

    # Commit when done
    conn.commit()
except Exception:
    conn.rollback()
    raise
finally:
    conn.close()
```

##### JavaScript

```javascript
// Using pg directly
const { Pool } = require('pg');
const pool = new Pool({
  connectionString: 'postgresql://user:password@localhost/dbname'
});

async function complexTransaction() {
  // Get a client from the pool
  const client = await pool.connect();

  try {
    // Start transaction
    await client.query('BEGIN');

    // Execute generated queries with the client
    const user = await CreateUser(client, { name: "Alice", email: "alice@example.com" });
    const profile = await CreateUserProfile(client, { userId: user.id, bio: "Software Engineer" });

    // Mix with custom queries if needed
    await client.query(
      'UPDATE user_stats SET last_login = NOW() WHERE user_id = $1',
      [user.id]
    );

    // Commit transaction
    await client.query('COMMIT');
    return { user, profile };
  } catch (error) {
    // Rollback on error
    await client.query('ROLLBACK');
    throw error;
  } finally {
    // Release client back to pool
    client.release();
  }
}
```
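If you use the manual pattern often, the commit/rollback boilerplate can be factored into a small context manager. This helper is not part of the generated API - it is a sketch that assumes any DB-API-style connection exposing `commit()` and `rollback()`:

```python
from contextlib import contextmanager

@contextmanager
def transaction(conn):
    """Commit on success, roll back and re-raise on any exception."""
    try:
        yield conn
        conn.commit()
    except Exception:
        conn.rollback()
        raise
```

Usage then shrinks to `with transaction(conn): CreateAuthor(conn, ...)`, and the rollback path can no longer be forgotten.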
## Recommended Team Workflow

For teams working on multiple projects that share database access, we recommend the following approach:

#### 1. Centralized SQL Repository

Create a dedicated repository for your SQL queries, organized by domain or responsibility:

```
sql-repository/
├── team-query.yaml        # Configuration file
├── schema/
│   └── schema.sql         # Database schema
└── queries/
    ├── users/             # User-related queries
    │   ├── auth.sql       # Authentication queries
    │   └── profiles.sql   # User profile queries
    ├── content/           # Content-related queries
    │   ├── posts.sql      # Blog post queries
    │   └── comments.sql   # Comment queries
    └── analytics/         # Analytics queries
        └── metrics.sql    # Usage metrics queries
```

This structure:
- Keeps all SQL in one place (a single source of truth)
- Makes SQL changes easy to review
- Allows for domain-specific organization
- Facilitates code reviews by domain experts

#### 2. Individual Code Generation

Team members clone the SQL repository and generate code for their specific projects:

```bash
# Clone the SQL repository
git clone https://github.com/your-org/sql-repository.git

# Generate code for your specific project
cd sql-repository
team-query generate --config team-query.yaml --output ../my-project/src/generated
```

#### 3. Continuous Integration

For larger teams, set up CI/CD to automatically generate and publish client packages:

1. Set up a GitHub Action or other CI pipeline that:
   - Triggers on changes to the SQL repository
   - Generates code for each supported language
   - Publishes packages to your package registry (npm, PyPI, etc.)
2. Projects then depend on these published packages:

   ```bash
   # Python project
   pip install your-org-db-client

   # JavaScript project
   npm install @your-org/db-client
   ```

This workflow ensures:
- Consistent database access across all projects
- Type safety and validation in all languages
- Easy updates when SQL changes
- Minimal duplication of database logic

## Configuration

Create a `team-query.yaml` configuration file:

```yaml
version: 1
project:
  name: myproject
  version: "1.0.0"
sql:
  - queries: ["./queries/*.sql"]   # Path to SQL files (glob pattern)
    schema: ["public"]             # Database schema
    engine: postgresql             # Database engine
    gen:
      - plugin: python             # Generate Python client
        out: "./generated/python"  # Output directory
      - plugin: javascript         # Generate JavaScript client
        out: "./generated/javascript"
```

## Advanced Features

### Type Safety and SQL Injection Prevention

Team Query provides built-in type safety and SQL injection prevention:

- Generated code includes type validation for all parameters
- Parameters are passed separately from the SQL (never by string concatenation)
- The database driver's built-in protection is leveraged (psycopg for Python, pg for Node.js)

```python
# This raises a TypeError because id should be an integer
user = GetUserById(conn, id="not_an_integer")
```

## Building and Publishing

If you've made changes to Team Query and want to build and publish your own version, follow these steps:

### Prerequisites

- Python 3.9 or higher
- Poetry (dependency management)
- A PyPI account (for publishing)

### Development Workflow

1. Clone the repository:
   ```bash
   git clone https://github.com/jechenique/team-query.git
   cd team-query
   ```

2. Install development dependencies:
   ```bash
   poetry install
   ```
3. Run code formatters:
   ```bash
   # Format code with Black
   poetry run black src tests

   # Sort imports with isort
   poetry run isort src tests
   ```

4. Run linters and type checking:
   ```bash
   # Run pylint
   poetry run pylint src

   # Run mypy for type checking
   poetry run mypy src
   ```

   > **Note:** If you're using Python 3.13, you may encounter issues with pylint due to compatibility problems with new language features like `typealias`. In that case, skip the pylint check or use an earlier Python version for development.
   >
   > **Note on type checking:** The codebase has some mypy errors related to Optional types and missing type annotations. These don't affect functionality but should be addressed over time. To fix missing stubs, run `python -m pip install types-PyYAML`.

5. Run tests to ensure everything works:
   ```bash
   poetry run pytest
   ```

### Building the Package

1. Build the package:
   ```bash
   poetry build
   ```

   This creates distribution files in the `dist/` directory.

### Publishing to PyPI

1. Configure Poetry with your PyPI credentials:
   ```bash
   poetry config pypi-token.pypi your-pypi-token
   ```

2. Publish the package:
   ```bash
   poetry publish
   ```

   Or build and publish in one step:
   ```bash
   poetry publish --build
   ```

### Publishing to a Private Repository

For organizations using a private package repository:

1. Configure Poetry with your private repository:
   ```bash
   poetry config repositories.my-repo https://your-repo-url.com/simple/
   poetry config http-basic.my-repo username password
   ```

2. Publish to your private repository:
   ```bash
   poetry publish -r my-repo
   ```

### Automated Publishing with GitHub Actions

You can also set up GitHub Actions to automatically build and publish the package when you create a new release:
1. Create a `.github/workflows/publish.yml` file:

   ```yaml
   name: Publish to PyPI

   on:
     release:
       types: [created]

   jobs:
     deploy:
       runs-on: ubuntu-latest
       steps:
         - uses: actions/checkout@v3
         - name: Set up Python
           uses: actions/setup-python@v4
           with:
             python-version: '3.10'
         - name: Install dependencies
           run: |
             python -m pip install --upgrade pip
             pip install poetry
         - name: Build and publish
           env:
             PYPI_TOKEN: ${{ secrets.PYPI_TOKEN }}
           run: |
             poetry config pypi-token.pypi $PYPI_TOKEN
             poetry build
             poetry publish
   ```

2. Add your PyPI token as a secret in your GitHub repository settings.

3. Create a new release on GitHub to trigger the workflow.

This automated approach ensures consistent builds and simplifies the release process.

## Contributing

Contributions to Team Query are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines on code style, testing procedures, and submitting pull requests.

## License

MIT License