glin-profanity


Name: glin-profanity
Version: 2.3.3
Home page: None
Summary: Glin-Profanity is a lightweight and efficient Python package designed to detect and filter profane language in text inputs across multiple languages.
Upload time: 2025-07-27 01:18:19
Maintainer: None
Docs URL: None
Author: None
Requires Python: >=3.10
License: None
Keywords: censorship, chat, comment, content, detection, filter, glin, glincker, language, moderation, nlp, profanity, social, text
Requirements: No requirements were recorded.
# Glin-Profanity Python Package

A lightweight and efficient Python package designed to detect and filter profane language in text inputs across multiple languages.

## Features

- 🌍 **Multi-language Support**: Supports 20+ languages, including English, Spanish, French, German, Arabic, and Chinese
- 🎯 **Context-Aware Filtering**: Advanced context analysis to reduce false positives
- ⚙️ **Highly Configurable**: Customize word lists, severity levels, and filtering behavior
- 🚀 **High Performance**: Optimized for speed and efficiency
- 🔧 **Easy Integration**: Simple API that works with any Python application
- 📝 **TypeScript Compatible**: Mirrors the API of the TypeScript version

## Installation

```bash
pip install glin-profanity
```

## Quick Start

```python
from glin_profanity import Filter

# Basic usage
filter_instance = Filter()

# Check if text contains profanity
if filter_instance.is_profane("This is a damn example"):
    print("Profanity detected!")

# Get detailed results
result = filter_instance.check_profanity("This is a damn example")
print(result["profane_words"])  # ['damn']
print(result["contains_profanity"])  # True
```

## Configuration Options

```python
from glin_profanity import Filter, SeverityLevel

# Advanced configuration
config = {
    "languages": ["english", "spanish"],  # Specific languages
    "case_sensitive": False,              # Case sensitivity
    "word_boundaries": True,              # Enforce word boundaries
    "replace_with": "***",                # Replacement text
    "severity_levels": True,              # Enable severity detection
    "custom_words": ["badword"],          # Add custom words
    "ignore_words": ["exception"],        # Ignore specific words
    "allow_obfuscated_match": True,       # Detect obfuscated text
    "fuzzy_tolerance_level": 0.8,         # Fuzzy matching threshold
}

filter_instance = Filter(config)
```
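
When `replace_with` is set, `check_profanity` also returns a censored copy of the input in the `processed_text` field. A minimal sketch using only the configuration keys and result fields documented here (the exact output string in the comment is illustrative, not verified against the library):

```python
from glin_profanity import Filter

# Mask detected words; processed_text is only populated when replace_with is set
filter_instance = Filter({"replace_with": "***", "word_boundaries": True})

result = filter_instance.check_profanity("This is a damn example")
if result["contains_profanity"]:
    print(result["processed_text"])  # e.g. "This is a *** example"
```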

## API Reference

### Filter Class

#### `__init__(config: Optional[FilterConfig] = None)`
Initialize the filter with optional configuration.

#### `is_profane(text: str) -> bool`
Check if text contains profanity. Returns `True` if profanity is detected.

#### `check_profanity(text: str) -> CheckProfanityResult`
Perform comprehensive profanity analysis with detailed results.

#### `matches(word: str) -> bool`
Check if a single word matches profanity patterns. Alias for `is_profane()`.

#### `check_profanity_with_min_severity(text: str, min_severity: SeverityLevel) -> Dict`
Check profanity with minimum severity filtering.
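
For instance, you might use this to keep only exact matches and ignore fuzzy or obfuscated ones. A hedged sketch, assuming the returned dict follows the `CheckProfanityResult` shape documented below:

```python
from glin_profanity import Filter, SeverityLevel

filter_instance = Filter({"severity_levels": True, "allow_obfuscated_match": True})

# Only report matches at or above the requested severity level
result = filter_instance.check_profanity_with_min_severity(
    "This is a damn example", SeverityLevel.EXACT
)
print(result["contains_profanity"])
print(result["profane_words"])
```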

### Types

#### `CheckProfanityResult`
```python
{
    "contains_profanity": bool,
    "profane_words": List[str],
    "processed_text": Optional[str],      # If replace_with is set
    "severity_map": Optional[Dict],       # If severity_levels is True
    "matches": Optional[List[Match]],     # Detailed match information
    "context_score": Optional[float],     # Context analysis score
    "reason": Optional[str]               # Analysis reason
}
```

#### `SeverityLevel`
- `SeverityLevel.EXACT`: Exact word match
- `SeverityLevel.FUZZY`: Fuzzy/approximate match
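
As a rough illustration of the two levels, the sketch below assumes that `severity_map` (populated when `severity_levels` is enabled) maps each detected word to one of these values; that mapping is an assumption, not something stated explicitly above:

```python
from glin_profanity import Filter, SeverityLevel

filter_instance = Filter({"severity_levels": True, "allow_obfuscated_match": True})
result = filter_instance.check_profanity("This is a d4mn example")

# Assumption: severity_map maps word -> SeverityLevel when severity_levels is True
for word, level in (result.get("severity_map") or {}).items():
    label = "exact" if level == SeverityLevel.EXACT else "fuzzy"
    print(f"{word}: {label}")
```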

## Supported Languages

Arabic, Chinese, Czech, Danish, English, Esperanto, Finnish, French, German, Hindi, Hungarian, Italian, Japanese, Korean, Norwegian, Persian, Polish, Portuguese, Russian, Spanish, Swedish, Thai, Turkish

## Development

### Setup

```bash
# Clone the repository
git clone https://github.com/GLINCKER/glin-profanity
cd glin-profanity/packages/py

# Install with development dependencies
pip install -e ".[dev]"
```

### Testing

```bash
# Run tests
pytest

# Run tests with coverage
pytest --cov=glin_profanity
```

### Code Quality

```bash
# Format code
black glin_profanity tests

# Sort imports
isort glin_profanity tests

# Type checking
mypy glin_profanity

# Linting
ruff check glin_profanity tests
```

## License

This project is licensed under the MIT License - see the [LICENSE](../../LICENSE) file for details.

## Contributing

Contributions are welcome! Please read our [Contributing Guide](../../CONTRIBUTING.md) for details on our code of conduct and the process for submitting pull requests.

## Support

- 📖 [Documentation](https://github.com/GLINCKER/glin-profanity)
- 🐛 [Issue Tracker](https://github.com/GLINCKER/glin-profanity/issues)
- 💬 [Discussions](https://github.com/GLINCKER/glin-profanity/discussions)

## Changelog

See [CHANGELOG.md](../../CHANGELOG.md) for a list of changes and updates.
            
