toulouse 1.0.1

- Summary: High-performance card library for ML and RL applications.
- Author: mlabarrere
- Requires Python: >=3.9
- Keywords: cards, game, reinforcement-learning, mcts, numpy
- Uploaded: 2025-07-10 10:18:36

# ⚡ Toulouse: High-Performance Card Library for Machine Learning & Reinforcement Learning

Toulouse is a modern, lightning-fast Python library for representing, manipulating, and vectorizing card games—designed for the needs of the ML and RL community.

- 🚀 **Blazing Fast**: O(1) card lookup, object pooling, and pre-allocated numpy buffers
- 🧩 **Extensible**: Easily add new card systems (Italian, Spanish, custom...)
- 🧑‍💻 **ML/RL Ready**: One-hot numpy state vectors for cards and decks
- 🌍 **Multilingual**: Card names in multiple languages (EN, IT, ES)
- 🧪 **Tested & Typed**: Robust, well-typed, and ready for research or production

---

## Installation

```bash
pip install toulouse
```
or, with uv:
```bash
uv add toulouse
```

---

## Quick Start

```python
from toulouse import Card, Deck, get_card

# Create a new Italian 40-card deck (sorted)
deck = Deck.new_deck(card_system_key="italian_40")
print(deck)  # Deck of 40 cards (italian_40)

# Draw a card
drawn = deck.draw(1)[0]
print(drawn)  # Ace of Denari

# Check if a card is in the deck
card = get_card(value=1, suit=0)  # Ace of Denari
print(deck.contains(card))  # False (we just drew this card)

# Get the deck state as a numpy vector (for ML/RL)
print(deck.state.shape)  # (40,)

# Reset and shuffle the deck
deck.reset()
deck.shuffle()
```

---

## Supported Card Systems

- **italian_40**: Denari, Coppe, Spade, Bastoni
- **spanish_40**: Oros, Copas, Espadas, Bastos
- *Add your own system easily (see below)*
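
Both built-in systems expose the same API, so switching between them only changes the `card_system_key`. A minimal sketch using the `Deck.new_deck` call shown in the Quick Start:

```python
from toulouse import Deck

# Create one deck per built-in card system; both use 40 cards.
for key in ("italian_40", "spanish_40"):
    deck = Deck.new_deck(card_system_key=key)
    print(key, len(deck))  # e.g. "italian_40 40"
```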

---

## API Reference

### Card

```python
from toulouse import Card, get_card

card = get_card(value=7, suit=2, card_system_key="italian_40")
print(card)           # Seven of Spade
print(card.to_index()) # Unique index in the deck
print(card.state)     # One-hot numpy array (length deck_size)
```

- `Card(value, suit, card_system_key="italian_40")`: Immutable, hashable card instance
- `.to_index()`: Returns unique index for the card in its system
- `.state`: One-hot numpy array (length = deck size)
- `__str__`, `__repr__`: Human-readable
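
Because `Card` instances are immutable and hashable, they can be used directly as set members or dictionary keys, which is handy for tracking seen cards. A minimal sketch, assuming equal cards obtained from `get_card` compare equal:

```python
from toulouse import get_card

# Track which cards have been seen; hashable cards work as set members.
seen = set()
seen.add(get_card(value=7, suit=2, card_system_key="italian_40"))

# An equal card obtained later is recognised as already seen.
print(get_card(value=7, suit=2, card_system_key="italian_40") in seen)  # True
```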

### Deck

```python
from toulouse import Deck

deck = Deck.new_deck(card_system_key="spanish_40", sorted_deck=False)
print(len(deck))      # 40
hand = deck.draw(3)   # Draw 3 cards
print(deck.state)     # Numpy binary vector (remaining cards)
deck.append(hand[0])  # Add a card back
deck.shuffle()        # Shuffle the deck
deck.sort()           # Sort the deck
deck.reset()          # Restore to full deck
```

- `Deck.new_deck(card_system_key="italian_40", language="en", sorted_deck=True)`: Create a new deck
- `.draw(n)`: Draw n cards (removes from deck)
- `.append(card)`: Add a card
- `.remove(card)`: Remove a card
- `.contains(card)`: O(1) check for card presence
- `.reset()`: Restore to full deck
- `.shuffle()`, `.sort()`: Shuffle or sort
- `.state`: Numpy binary vector (length = deck size)
- `.pretty_print()`: Grouped by suit, human-readable
- `.move_card_to(card, other_deck)`: Move card between decks
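
Putting a few of these methods together, here is a small dealing sketch that uses only the calls listed above (hand size and player count are arbitrary):

```python
from toulouse import Deck

deck = Deck.new_deck(card_system_key="italian_40", sorted_deck=False)

# Deal three cards to each of two players.
hands = [deck.draw(3) for _ in range(2)]

# Drawn cards are no longer reported as present in the deck.
print(all(not deck.contains(card) for hand in hands for card in hand))  # True
print(len(deck))  # 34 cards remain; deck.state now contains 34 ones
```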

### Card System Management

```python
from toulouse import register_card_system, get_card_system

my_system = {
    "suits": ["Red", "Blue"],
    "values": [1, 2, 3],
    "deck_size": 6,
}
register_card_system("mini_6", my_system)
print(get_card_system("mini_6"))
```

- `register_card_system(key, config)`: Add a new card system
- `get_card_system(key)`: Retrieve system config
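
Once a system is registered, its key can be used like a built-in one. A hedged sketch (the Extending section below shows `Deck.new_deck` with a custom key; passing the same key to `get_card` is assumed here):

```python
from toulouse import Deck, get_card, register_card_system

register_card_system("mini_6", {
    "suits": ["Red", "Blue"],
    "values": [1, 2, 3],
    "deck_size": 6,
})

deck = Deck.new_deck(card_system_key="mini_6")
card = get_card(value=2, suit=1, card_system_key="mini_6")
print(len(deck), deck.contains(card))  # 6 True
```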

---

## Machine Learning & RL Integration

- **Card state**: `card.state` is a one-hot numpy array (length = deck size)
- **Deck state**: `deck.state` is a binary numpy array (1 if card present)
- **Fast vectorization**: Pre-allocated, cached numpy buffers for speed

Example:

```python
from toulouse import Deck

deck = Deck.new_deck()
obs = deck.state  # Use as RL agent observation
```
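
For a richer observation, the per-entity vectors can be combined with plain numpy operations. A minimal sketch that stacks the remaining-deck vector with a multi-hot vector for a drawn hand (the combination scheme is illustrative, not part of the library):

```python
import numpy as np
from toulouse import Deck

deck = Deck.new_deck()
hand = deck.draw(3)

# Multi-hot vector for the hand, built from the per-card one-hot vectors.
hand_vec = np.sum([card.state for card in hand], axis=0)

# Stack the remaining-deck vector and the hand vector into one observation.
obs = np.concatenate([deck.state, hand_vec])
print(obs.shape)  # (80,) for a 40-card system
```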

---

## Performance

Toulouse is engineered for speed. Here are real benchmark results from the test suite (Apple Silicon, Python 3.11):

```
Deck creation (1000x): 0.0062 seconds
Shuffle+draw+reset (1000x): 0.0099 seconds
Card lookup (10000x): 0.0006 seconds
State vectorization (deck+card, 10000x): 0.0042 seconds
```

- Creating 1000 decks takes less than 7 milliseconds
- 10,000 card lookups in under 1 millisecond
- Deck and card state vectorization is nearly instantaneous

This makes Toulouse ideal for RL/ML environments where speed is critical.
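
The exact numbers depend on the machine, but the first figure is easy to reproduce with the standard library. A minimal sketch using `timeit` (not the project's own benchmark harness):

```python
import timeit
from toulouse import Deck

# Time 1000 deck creations, mirroring the first benchmark line above.
elapsed = timeit.timeit(lambda: Deck.new_deck(card_system_key="italian_40"), number=1000)
print(f"Deck creation (1000x): {elapsed:.4f} seconds")
```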

---

## Extending Toulouse

Add new card systems for custom games:

```python
from toulouse import register_card_system, Deck

register_card_system("custom_8", {
    "suits": ["Alpha", "Beta"],
    "values": [1, 2, 3, 4],
    "deck_size": 8,
})
deck = Deck.new_deck(card_system_key="custom_8")
print(deck)
```
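
The registered `deck_size` also drives the length of the state vectors, so the custom deck behaves like the built-in ones. A standalone follow-up sketch (run in a fresh session; the printed shape assumes state length equals `deck_size`, as described above):

```python
from toulouse import Deck, register_card_system

register_card_system("custom_8", {
    "suits": ["Alpha", "Beta"],
    "values": [1, 2, 3, 4],
    "deck_size": 8,
})

deck = Deck.new_deck(card_system_key="custom_8")
deck.draw(2)
print(len(deck))         # 6
print(deck.state.shape)  # (8,) -- state length follows the registered deck_size
deck.reset()             # back to all 8 cards
```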

---

## Testing

Run the test suite with pytest:

```bash
pytest tests/
```

---

## License

MIT — Use, modify, and share freely.

            
