Google Fonts Glyphset Definitions
=================================
What is this?
-------------
This repository contains curated glyphsets that Google Fonts hands out to **designers of commissioned fonts** for font authoring.
What is this _not_?
-------------------
These _glyphsets_ are not to be confused with the _subsets_ that the [Google Fonts API](https://developers.google.com/fonts/docs/getting_started#specifying_script_subsets) uses to minimize traffic by serving partial fonts based on subsets.
These subset definitions used to be hosted here in this repository but are now found over in the separate [nam-files](https://github.com/googlefonts/nam-files) repository.
What’s the difference?
----------------------
As a user of the [Google Fonts API](https://developers.google.com/fonts/docs/getting_started#specifying_script_subsets) you may request a multi-script font to be served limited to a _subset_ of glyphs, usually a certain script, such as `https://fonts.googleapis.com/css?family=Roboto+Mono&subset=cyrillic`, to speed up file transfer by leaving out unnecessary glyphs.
_Glyphsets_ on the other hand are what Google Fonts requires font authors to put into fonts when _designing_ them, and they’re not identical to subsets. You can get a font’s complete glyphset by manually downloading a TTF on [fonts.google.com](https://fonts.google.com/), but you typically don’t get the same glyphs in a font accessed through the Google Fonts API because these are subsetted.
Glyphsets for font authoring
----------------------------
**If you are a font author** and you merely want ready-made glyphsets, pick your files straight from the [`/data/results`](/data/results) folder: `.glyphs` files with empty placeholder glyphs, or `.plist` files containing so-called _Custom Filters_ that show up in the Glyphs.app sidebar when placed alongside your source files. Alternatively, you can cook your own Custom Filters with the `glyphsets` tool; see the _Glyphsets Tool_ section at the bottom of this document.
The rest of this README addresses people who are **editing** glyphset and language definitions.
Editing glyphsets
-----------------
The repository recently (end of 2023/start of 2024) underwent a major overhaul in how the glyphsets are assembled.
The current approach is part of a larger network of tools that also comprises [gflanguages](https://github.com/googlefonts/lang/) and [shaperglot](https://github.com/googlefonts/shaperglot), as well as [fontbakery’s](https://github.com/fonttools/fontbakery) `shape_languages` check.
In the ideal scenario, glyphsets are defined merely by lists of language codes (such as `tr_Latn`).
During the build process (`sh build.sh`), the `gflanguages` database is queried for all characters defined for those languages, which are then combined into a single glyphset.
_Optionally_, encoded characters as well as unencoded glyphs may be defined in glyphset-specific or language-specific files here in `gfglyphsets`, whose contents will also be added to the final glyphsets.
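To illustrate that lookup, here is a minimal Python sketch (not the actual build code) that unions the exemplar characters `gflanguages` defines for a list of language codes; the `exemplar_chars` field names follow the gflanguages protos:
```
from gflanguages import LoadLanguages

def characters_for_languages(language_codes):
    """Union all exemplar characters that gflanguages defines
    for the given language codes (e.g. ["en_Latn", "de_Latn"])."""
    languages = LoadLanguages()  # dict: language code -> Language proto
    characters = set()
    for code in language_codes:
        exemplars = languages[code].exemplar_chars
        for field in ("base", "auxiliary", "marks", "numerals", "punctuation"):
            # Exemplar strings are space-separated; entries in {braces}
            # are multi-character sequences and are kept intact here.
            for entry in getattr(exemplars, field).split():
                characters.add(entry.strip("{}"))
    return characters

print(sorted(characters_for_languages(["en_Latn", "de_Latn"])))
```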
Later during font QA (as part of font onboarding work, just FYI), Fontbakery's `shape_languages` check first determines which glyphsets a font supports, then uses the languages defined for each glyphset to invoke `shaperglot`, which checks whether each language _shapes_ correctly or not.
This is a significant leap forward in font QA: `shaperglot` invokes the `harfbuzz` shaping engine, proving that the entire OpenType stack functions at once, including mark attachment and character sequences.
`shaperglot` contains its own sets of script- or language-specific definitions, such as a check to see whether the `ı` and `i` shape into distinct letters in small-caps for Turkish.
> [!NOTE]
> See [GLYPHSETS.md](GLYPHSETS.md) for an up-to-date description of the state of the new glyphset definitions. Many glyphsets have not been transitioned to the new approach and still exist as manually curated lists of characters and unencoded glyphs.
How to assemble glyphsets
=========================
Prerequisites
-------------
In order for the build command to correctly assemble glyphsets from language definitions, make sure that your working environment has the latest version of [gflanguages](https://github.com/googlefonts/lang/). If unsure, update it with `pip install -U gflanguages`.
Oftentimes you may want to adjust language definitions in `gflanguages` _at the same time_ as you’re adjusting other parts of the glyphsets. In this case you may clone the `gflanguages` repository to your computer and install it with `pip install -e .` from within its root folder. This exposes your `gflanguages` clone to your entire system (or virtual environment), and changes in `gflanguages` are automatically reflected in other tools that use it, such as `gfglyphsets`, without the need to re-install it after every code or data change. Thus, running `sh build.sh` will automatically use your latest language definitions, even before you have PR’d your language definition changes back to the repository.
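If you are unsure which `gflanguages` installation the build will pick up (your editable clone or a release from PyPI), a quick Python check like the following can help; it relies only on standard introspection and is not part of this repository:
```
# Sanity check: which gflanguages is active, and where does it live?
import gflanguages
from importlib.metadata import version

print(gflanguages.__file__)              # points into your clone if installed with `pip install -e .`
print(version("gflanguages"))            # version reported by the environment
print(len(gflanguages.LoadLanguages()))  # number of language definitions currently available
```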
Where are glyphsets defined?
----------------------------
Inside this repository, data is defined in two different places.
One place is inside the `glyphsets` Python package (`/Lib/glyphsets/definitions`). This data needs to be exposed to third-party tools such as `fontbakery`; a short sketch of programmatic access follows the list below.
The other place is in `/data/definitions`. This data is only used for authoring glyphsets and need not be distributed as part of the Python package.
1. **Inside Python package:** Glyphsets are defined in `.yaml` files inside the Python package folder at [`/Lib/glyphsets/definitions`](/Lib/glyphsets/definitions).
2. **Outside of Python package:** Additional files in the `/data/definitions` sub-folders become part of the glyphsets as soon as they exist under the expected filename. If a file that you need doesn't exist there yet, create it under that filename.
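For third-party tools, the packaged definitions can be queried programmatically. The function names below are assumptions based on the current `glyphsets` API and may differ between versions, so treat this as a sketch and check `/Lib/glyphsets/definitions` and the package source for the authoritative names:
```
# Sketch of how a downstream tool (e.g. fontbakery) might query the
# packaged glyphset definitions. Function names are assumptions and
# may differ between glyphsets versions.
import glyphsets

print(glyphsets.defined_glyphsets())                           # e.g. ["GF_Latin_Core", "GF_Latin_Plus", ...]
print(len(glyphsets.unicodes_per_glyphset("GF_Latin_Core")))   # encoded characters in the glyphset
print(glyphsets.languages_per_glyphset("GF_Latin_Core"))       # language codes the glyphset is built from
```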
Where are characters and glyphs defined?
----------------------------------------
In order to determine where _characters_ (encoded with a Unicode codepoint) or _glyphs_ (unencoded) are defined, follow this logic:
1. Is it a **language-specific encoded character**? Then it goes into the `gflanguages` database (which is a separate package), for example [here](https://github.com/googlefonts/lang/blob/main/Lib/gflanguages/data/languages/nl_Latn.textproto). `gflanguages` holds only encoded characters, not unencoded glyphs. Prepare a separate PR for `gflanguages` if you are changing those definitions as well.
2. Is it a **language-specific unencoded glyph**? Then it goes into `/data/definitions/per_language`.
3. Is it a more general **glyphset-specific character or glyph**? Then it goes into `/data/definitions/per_glyphset`.
If you find that you need additional separate definitions _per script_, contact @yanone to implement it.
(Re-) Building glyphsets
-----------------------
Once your language and glyphset definitions are set up and edited, run `sh build.sh` from the command line. This command sources characters from `gflanguages` as well as characters and glyphs from the various files in the `/data/definitions` folder, and combines them into one comprehensive list per glyphset, which is then rendered out into several data formats in the `/data/results` folder.
Additionally, the [GLYPHSETS.md](GLYPHSETS.md) document is updated, which contains a human-readable overview of the state of each glyphset.
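If you only need the codepoints of a finished glyphset in a script of your own, the `.nam` results can be parsed in a few lines. This sketch assumes the conventional nam layout of one `0x`-prefixed codepoint per line followed by a description; the example path is hypothetical, so pick an actual file from `/data/results`:
```
def codepoints_from_nam(path):
    """Collect the 0x-prefixed codepoints from a .nam result file,
    skipping blank lines and '#' comments."""
    codepoints = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            first_token = line.split()[0]
            if first_token.lower().startswith("0x"):
                codepoints.add(int(first_token, 16))
    return codepoints

# Hypothetical path -- adjust to a real file in /data/results:
# print(len(codepoints_from_nam("data/results/GF_Latin_Core.nam")))
```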
> [!NOTE]
> When making PRs, the glyphsets will automatically be re-rendered based on definition changes (which is useful for dependency update PRs such as `glyphsLib` or `gflanguages`). This means that you don’t _need to_ supply updated glyphsets and `GLYPHSETS.md` as part of your PR (as rendered by `sh build.sh`). A PR may be as simple as adding a language to a `.yaml` definition; the resulting glyphset changes will automatically be added in a commit to your PR, where you can review them.
Data flow visualization
-----------------------
Here’s a visual overview of the data definitions that go into each glyphset, and the files that are created as results.
Read this top to bottom.
```
DEFINITIONS:
┌──────────────────┐
│ Language codes   │
│ "en_Latn"        │
│ "de_Latn"        │
│ ...              │
└──────────────────┘
         │
┌──────────────────┐                        ┌──────────────────┐
│ gflanguages      │                        │ .stub.glyphs     │
│ (Python package) │                        │ (optional)       │
└──────────────────┘                        └──────────────────┘
         │                                           │
         ╰─────────────────────┬─────────────────────╯
                               │
BUILD PROCESS:                 │
                               │
               ╔═══════════════════════════════╗
               ║       complete glyphset       ║
               ╚═══════════════════════════════╝
                               │
RESULTS:                       │
                               │
         ╭─────────────────────┼─────────────────────┬─────────────────────╮
         │                     │                     │                     │
┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐  ┌──────────────────┐
│ .txt             │  │ .nam             │  │ .glyphs          │  │ .plist           │
│ (nice & prod)    │  │                  │  │                  │  │                  │
└──────────────────┘  └──────────────────┘  └──────────────────┘  └──────────────────┘
```
Glyphsets Tool
==============
> [!NOTE]
> Previously existing commands of the `glyphsets` tool are currently deactivated after the transition to the new database. These are: `update-srcs`, `nam-file`, `missing-in-font`. Please report if you need to use these.
## Custom Filters
You can create your own Glyphs.app _Custom Filters_ using the `glyphsets` tool.
Install or update the tool with pip, if you haven’t already:
```
pip install -U glyphsets
```
Create a filter list for Glyphs.app:
```
glyphsets filter-list -o myfilter.plist GF_Latin_Core GF_Latin_Plus
```
Add this `.plist` file next to your Glyphs file and (after restarting Glyphs.app) you will see it in the filters sidebar.
## Compare Glyphsets
You can compare the contents of two or more glyphsets against each other. Each consecutive glyphset will be compared to the previous one.
This command lists the complete contents of `GF_Latin_Kernel` first, and then lists only extra (or missing) glyphs for `GF_Latin_Core` when compared to `GF_Latin_Kernel`:
```
glyphsets compare GF_Latin_Kernel GF_Latin_Core
```
Output:
```
GF_Latin_Kernel:
===============
Total glyphs: 116
Letter (52 glyphs):
`A B C D E F G H I J K L M N O P Q R S T U V W X Y Z a b c d e f g h i j k l m n o p q r s t u v w x y z`
...
GF_Latin_Core:
=============
Total glyphs: 324
GF_Latin_Core has 208 **extra** glyphs compared to GF_Latin_Kernel:
Letter (168 glyphs):
`ª º À Á Â Ã Ä Å Æ Ç È É Ê Ë Ì Í Î Ï Ð Ñ Ò Ó Ô Õ Ö Ø Ù Ú Û Ü Ý Þ ß à á â ã ä å æ ç è é ê ë ì í î ï ð ñ ò ó ô õ ö ø ù ú û ü ý þ ÿ Ā ā Ă ă Ą ą Ć ć Ċ ċ Č č Ď ď Đ đ Ē ē Ė ė Ę ę Ě ě Ğ ğ Ġ ġ Ģ ģ Ħ ħ Ī ī Į į İ ı Ķ ķ Ĺ ĺ Ļ ļ Ľ ľ Ł ł Ń ń Ņ ņ Ň ň Ő ő Œ œ Ŕ ŕ Ř ř Ś ś Ş ş Š š Ť ť Ū ū Ů ů Ű ű Ų ų Ŵ ŵ Ŷ ŷ Ÿ Ź ź Ż ż Ž ž Ș ș Ț ț ȷ Ẁ ẁ Ẃ ẃ Ẅ ẅ ẞ Ỳ ỳ /idotaccent`
...
```
## Find characters
To help with authoring glyphsets, use `glyphsets find ſ` or `glyphsets find 0x017F` to see in which language definitions (and under which category therein) and glyphsets a character is defined. As usual, the definitions are pulled from the `glyphsets` and `gflanguages` modules that are currently installed on your machine or venv.
```
glyphsets % glyphsets find ß
Character: [ß] (0x00DF LATIN SMALL LETTER SHARP S)
Language    Name           Category      Speakers  Script    Regions
----------  -------------  ----------  ----------  --------  ---------------------------------------
fr_Latn     French         auxiliary    272965534  Latn      Asia, Americas, Oceania, Europe, Africa
de_Latn     German         base         134799567  Latn      Asia, Americas, Europe, Africa
tr_Latn     Turkish        auxiliary     80191488  Latn      Europe, Asia
it_Latn     Italian        auxiliary     70743415  Latn      Oceania, Europe, Americas
pl_Latn     Polish         auxiliary     38273562  Latn      Europe, Asia
nds_Latn    Low German     base          11520008  Latn      Europe
fi_Latn     Finnish        auxiliary      5736841  Latn      Europe
lb_Latn     Luxembourgish  auxiliary       421015  Latn      Europe
ksh_Latn    Colognian      base            240479  Latn      Europe
hsb_Latn    Upper Sorbian  auxiliary        12825  Latn      Europe
wae_Latn    Walser         auxiliary        11376  Latn      Europe
dsb_Latn    Lower Sorbian  auxiliary         6973  Latn      Europe
Character is part of the following glyphsets:
---------------------------------------------
GF_Latin_Core
```
Acknowledgements
================
GF Greek Glyph Sets defined by Irene Vlachou @irenevl and Thomas Linard @thlinard. Documented by Alexei Vanyashin @alexeiva January 2017.
GF Glyph Sets defined by Alexei Vanyashin (@alexeiva) and Kalapi Gajjar (@kalapi) from 2016-06-27 to 2016-10-11, with input from
Dave Crossland,
Denis Jacquerye,
Frank Grießhammer,
Georg Seifert,
Gunnar Vilhjálmsson,
Jacques Le Bailly,
Michael Everson,
Nhung Nguyen (Vietnamese lists),
Pablo Impallari (Impallari Encoding),
Rainer Erich Scheichelbauer (@mekkablue),
Thomas Jockin,
Thomas Phinney (Adobe Cyrillic lists), and
Underware (Latin Plus Encoding)
Housekeeping
============
Since v1.0.0, use these rules for **version updates** in line with semver:
- Major versions for API changes (v**1**.0.0)
- Minor versions for language data changes (v1.**1**.0)
- Patch versions for non-breaking, minuscule code or language data fixes (v1.1.**1**)