Package: localizationkit
Version: 2.0.0
Summary: String localization tests
Home page: https://github.com/Microsoft/localizationkit
Author: Dale Myers
License: MIT
Requires Python: >=3.8,<4.0
Keywords: localization, strings, tests
Uploaded: 2023-08-09 09:00:05
# localizationkit

`localizationkit` is a toolkit for ensuring that your localized strings are the best that they can be.

Included are tests for various things such as:

* Checking that all strings have comments
* Checking that the comments don't just match the value
* Checking that tokens have position specifiers
* Checking that no invalid tokens are included

with lots more to come.

## Getting started

### Configuration

To use the library, first create a configuration file in TOML format. Here's an example:

```toml
default_language = "en"

[has_comments]
minimum_comment_length = 25
minimum_comment_words = 8

[token_matching]
allow_missing_defaults = true

[token_position_identifiers]
always = false
```

This configuration file sets `en` as the default language: this is the language that will be checked for comments, etc., and all tests will run relative to it. It then sets options for individual tests. Each `[something_here]` table holds the settings for the test with that identifier. For example, the `has_comments` test will now check not only that comments exist, but also that they are at least 25 characters and 8 words long.

You can now load in your configuration:

```python
from localizationkit import Configuration

configuration = Configuration.from_file("/path/to/config.toml")
```

### Localization Collections

Next, prepare the strings that the tests will run against. Here's how you can create an individual string:

```python
from localizationkit import LocalizedString

my_string = LocalizedString("My string's key", "My string's value", "My string's comment", "en")
```

This creates a single string with a key, value and comment, with its language code set to `en`. Once you've created some more (usually for different languages too), you can bundle them into a collection:

```python
from localizationkit import LocalizedCollection

collection = LocalizedCollection(list_of_my_strings)
```

### Running the tests

At this point, you are ready to run the tests:

```python
import localizationkit

results = localizationkit.run_tests(configuration, collection)

for result in results:
    if not result.succeeded():
        print("The following test failed:", result.name)
        print("Failures encountered:")
        for violation in result.violations:
            print(violation)
```

### Not running the tests

Some tests don't make sense for everyone. To skip a test, add the following at the root of your config file:

```toml
blacklist = ["test_identifier_1", "test_identifier_2"]
```

# Rule documentation

Most tests have configurable rules. If a rule is not specified, it will use the default instead.

Some tests are opt-in only; these are marked as such.

## Comment Linebreaks

Identifier: `comment_linebreaks`
Opt-In: `true`

Checks that comments for strings do not contain linebreaks. Comments which contain linebreaks can interfere with parsing in other tools such as [dotstrings](https://github.com/microsoft/dotstrings).

## Comment Similarity

Identifier: `comment_similarity`

Checks the similarity between a comment and the string's value in the default language, using `difflib`'s `SequenceMatcher`. More details can be found [here](https://docs.python.org/3/library/difflib.html#difflib.SequenceMatcher.ratio).
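A minimal sketch of that measurement (the `similarity` helper name is ours, not the library's):

```python
from difflib import SequenceMatcher

def similarity(comment: str, value: str) -> float:
    """Ratio in [0, 1]; 1.0 means the comment and value are identical."""
    return SequenceMatcher(None, comment, value).ratio()

# A comment that merely repeats the value scores 1.0, well above the
# default maximum_similarity_ratio of 0.5.
print(similarity("Cancel", "Cancel"))  # 1.0

# A descriptive comment scores much lower and would pass.
print(similarity("Label for the button that dismisses the save dialog", "Cancel"))
```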

<details>
    <summary>Configuration</summary>

| Parameter | Type | Acceptable Values | Default | Details | 
| --- | --- | --- | --- | --- |
| `maximum_similarity_ratio` | float | Between 0 and 1 | 0.5 | Set the maximum similarity ratio between the comment and the string value. The higher the value, the more similar they are. The longer the string, the more accurate this will be. |

</details>

## Duplicate Keys

Identifier: `duplicate_keys`

Checks that there are no duplicate keys in the collection.

<details>
    <summary>Configuration</summary>

| Parameter | Type | Acceptable Values | Default | Details | 
| --- | --- | --- | --- | --- |
| `all_languages` | boolean | `true` or `false` | `false` | Set to `true` to check that every language has unique keys, not just the default language. |

</details>

## Has Comments

Identifier: `has_comments`

Checks that strings have comments.

_Note: Only languages with Latin-style scripts are really supported for the word count check, since it splits on spaces._

<details>
    <summary>Configuration</summary>

| Parameter | Type | Acceptable Values | Default | Details | 
| --- | --- | --- | --- | --- |
| `minimum_comment_length` | int | Any integer | 30 | Set the minimum allowable length for a comment. Set the value to negative to not check. |
| `minimum_comment_words` | int | Any integer | 10 | Set the minimum allowable number of words for a comment. Set the value to negative to not check. |

</details>

## Has Value

Identifier: `has_value`

Checks that strings have values. Since any value is enough for some strings, it simply makes sure that the string isn't None/null and isn't empty.

<details>
    <summary>Configuration</summary>

| Parameter | Type | Acceptable Values | Default | Details | 
| --- | --- | --- | --- | --- |
| `default_language_only` | boolean | `true` or `false` | `false` | Set to true to only check the default language for missing values. Otherwise all languages will be checked. |

</details>

## Invalid Tokens

Identifier: `invalid_tokens`

Checks that all format tokens in a string are valid.

_Note: This check is not language specific. It only works very broadly._

## Key Length

Identifier: `key_length`

Checks the length of the keys.

_Note: By default this test doesn't check anything. It needs to have parameters set to positive values to do anything._

<details>
    <summary>Configuration</summary>

| Parameter | Type | Acceptable Values | Default | Details | 
| --- | --- | --- | --- | --- |
| `minimum` | int | Any integer | -1 | Set the minimum allowable length for a key. Set the value to negative to not check. |
| `maximum` | int | Any integer | -1 | Set the maximum allowable length for a key. Set the value to negative to not check. |

</details>

## Objective-C Alternative Tokens

Identifier: `objectivec_alternative_tokens`
Opt-In: `true`

Checks that strings do not contain Objective-C style alternative position tokens.

Objective-C appears to allow positional tokens of the form `%1@` rather than `%1$@`. While not illegal, it is preferable for tokens to be consistent across languages so that tools don't hit unexpected failures.
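As an illustration (our own sketch, not the library's implementation), detecting the alternative form can be as simple as a regex:

```python
import re

# Matches the alternative form %1@ but not the standard %1$@.
ALTERNATIVE = re.compile(r"%\d+@")

def has_alternative_tokens(value: str) -> bool:
    return ALTERNATIVE.search(value) is not None

print(has_alternative_tokens("Hello %1@"))   # True
print(has_alternative_tokens("Hello %1$@"))  # False
```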

## Placeholder token explanation

Identifier: `placeholder_token_explanation`
Opt-In: `true`

Checks that if a placeholder is used in a string, the comment explicitly explains what it is replaced with.

Precondition: each placeholder in the string, and its explanation in the comment, is expected to follow the `token_position_identifiers` rule.

## Swift Interpolation

Identifier: `swift_interpolation`
Opt-In: `true`

Checks that strings do not contain Swift style interpolation values since these cannot be localized.

## Token Matching

Identifier: `token_matching`

Checks that the tokens in a string match across all languages. For example, if your English string is "Hello %s" but your French string is "Bonjour", this flags a missing token in the French string.
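The idea can be sketched like this (our own illustrative code with a simplified token pattern, not the library's):

```python
import re
from collections import Counter

# Simplified printf-style token pattern, with an optional position specifier.
TOKEN = re.compile(r"%(?:\d+\$)?[@a-zA-Z]")

def missing_tokens(default_value: str, translated_value: str) -> list:
    """Tokens present in the default language but absent from a translation."""
    default = Counter(TOKEN.findall(default_value))
    translated = Counter(TOKEN.findall(translated_value))
    return sorted((default - translated).elements())

print(missing_tokens("Hello %s", "Bonjour"))  # ['%s']
```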

<details>
    <summary>Configuration</summary>

| Parameter | Type | Acceptable Values | Default | Details | 
| --- | --- | --- | --- | --- |
| `allow_missing_defaults` | boolean | `true` or `false` | `false` | Due to the way that automated localization works, usually there will be a default language, and then other translations will come in over time. If a translation is deleted, it isn't always deleted from all languages immediately. Setting this parameter to true will allow any strings in your non-default language to be ignored if that string is missing from your default language. |

</details>

## Token Position Identifiers

Identifier: `token_position_identifiers`

Checks that each token has a position specifier. For example, `%s` is not allowed, but `%1$s` is. Tokens can move around in different languages, so position specifiers are extremely important.
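A hypothetical sketch of the check (again with a simplified token pattern, not the library's own code):

```python
import re

# Simplified printf-style token patterns.
ANY_TOKEN = re.compile(r"%(?:\d+\$)?[@a-zA-Z]")
POSITIONED = re.compile(r"%\d+\$[@a-zA-Z]")

def all_tokens_positioned(value: str) -> bool:
    """True when every token carries a position specifier like %1$s."""
    return len(ANY_TOKEN.findall(value)) == len(POSITIONED.findall(value))

print(all_tokens_positioned("%1$s bought %2$d apples"))  # True
print(all_tokens_positioned("%s bought %d apples"))      # False
```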

<details>
    <summary>Configuration</summary>

| Parameter | Type | Acceptable Values | Default | Details | 
| --- | --- | --- | --- | --- |
| `always` | boolean | `true` or `false` | `false` | If a string only has a single token, it doesn't need a position specifier. Set this to `true` to require it even in those cases. |

</details>

# Contributing

This project welcomes contributions and suggestions.  Most contributions require you to agree to a
Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us
the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.


            
