lbuild

- Version: 1.21.8
- Summary: Generic, modular code generator using the Jinja2 template engine.
- Home page: https://github.com/modm-io/lbuild
- Author: Fabian Greif, Niklas Hauser
- Upload time: 2023-11-18 21:33:33
- Requires Python: >=3.8.0
- License: BSD
- Keywords: library, builder, generator
# lbuild: generic, modular code generation in Python 3

The Library Builder (pronounced *lbuild*) is a BSD-licensed [Python 3 tool][python]
for describing repositories containing modules which can copy or generate a set
of files based on user-provided data and options.

*lbuild* allows splitting up complex code generation projects into smaller
modules with configurable options, and provides for their transparent
discovery, documentation and dependency management.
Each module is written in Python 3 and declares its options and how to generate
its content via the [Jinja2 templating engine][jinja2] or a file/folder copy.

You can [install *lbuild* via PyPI][pypi]: `pip install lbuild`

Projects using *lbuild*:

- [modm generates a HAL for thousands of embedded devices][modm] using *lbuild*
  and a data-driven code generation pipeline.
- [Taproot: a friendly control library and framework for RoboMaster robots][taproot]
  uses *lbuild*.
- [OUTPOST - Open modUlar sofTware PlatfOrm for SpacecrafT][outpost] uses *lbuild*
  to assemble an execution platform targeted at embedded systems running mission
  critical software.

The dedicated maintainer of *lbuild* is [@salkinium][salkinium].


## Overview

Consider this repository:

```
 $ lbuild discover
Parser(lbuild)
╰── Repository(repo @ ../repo)
    ├── Option(option) = value in [value, special]
    ├── Module(repo:module)
    │   ├── Option(option) = yes in [yes, no]
    │   ├── Module(repo:module:submodule)
    │   │   ╰── Option(option) = REQUIRED in [1, 2, 3, 4, 5]
    │   ╰── Module(repo:module:submodule2)
    ╰── Module(modm:module2)
```

*lbuild* is called by the user with a configuration file which contains the
repositories to scan, the modules to include and the options to configure
them with:

```xml
<library>
  <repositories>
    <repository><path>../repo/repo.lb</path></repository>
  </repositories>
  <options>
    <option name="repo:option">special</option>
    <option name="repo:module:option">3</option>
  </options>
  <modules>
    <module>repo:module</module>
  </modules>
</library>
```

The `repo.lb` file is compiled by *lbuild* and the two functions `init`,
`prepare` are called:

```python
def init(repo):
    repo.name = "repo"
    repo.add_option(EnumerationOption(name="option",
                                      enumeration=["value", "special"],
                                      default="value"))

def prepare(repo, options):
    repo.find_modules_recursive("src")
```

This gives the repository a name and declares an enumeration option. The prepare
step then adds all module files found in the `src/` folder.

Each `module.lb` file is then compiled by *lbuild*, and the three functions
`init`, `prepare` and `build` are called:

```python
def init(module):
    module.name = ":module"

def prepare(module, options):
    if options["repo:option"] == "special":
        module.add_option(EnumerationOption(name="option", enumeration=[1, 2, 3, 4, 5]))
        return True
    return False

def build(env):
    env.outbasepath = "repo/module"
    env.copy("static.hpp")
    for number in range(env["repo:module:option"]):
        env.template("template.cpp.in", "template_{}.cpp".format(number + 1))
```

The init step sets the module's name and its parent name. The prepare step
then adds an `EnumerationOption` and makes the module available if the repository
option is set to `"special"`. Finally, in the build step, a number of files are
generated based on the option's value.

The files are generated at the call-site of `lbuild build` which would then
look something like this:

```
 $ ls
main.cpp        project.xml
 $ lbuild build
 $ tree
.
├── main.cpp
├── repo
│   ├── module
│   │   ├── static.hpp
│   │   ├── template_1.cpp
│   │   ├── template_2.cpp
│   │   └── template_3.cpp
```


## Documentation

The above example shows a minimal feature set, but *lbuild* has a few more
tricks up its sleeves. Let's have a look at the API in more detail with examples
from [the modm repository][modm].


### Command Line Interface

Before you can build a project you need to provide a configuration.
*lbuild* aims to make discovery easy from the command line:

```
 $ lbuild --repository ../modm/repo.lb discover
Parser(lbuild)
╰── Repository(modm @ ../modm)   modm: a barebone embedded library generator
    ╰── Option(target) = REQUIRED in [at90can128, at90can32, at90can64, ...
```

This gives you an overview of the repositories and their options. In this case
the `modm:target` repository option is required, so let's check that out:

```
 $ lbuild -r ../modm/repo.lb discover-options
modm:target = REQUIRED in [at90can128, at90can32, at90can64, at90pwm1, at90pwm161, at90pwm2,
                           ... a really long list ...
                           stm32l4s9vit, stm32l4s9zij, stm32l4s9zit, stm32l4s9ziy]

  Meta-HAL target device
```

You can then choose this repository option and discover the available modules
for this specific repository option:

```
 $ lbuild -r ../modm/repo.lb --option modm:target=stm32f407vgt discover
Parser(lbuild)
╰── Repository(modm @ ../modm)   modm: a barebone embedded library generator
    ├── Option(target) = stm32f407vgt in [at90can128, at90can32, at90can64, ...]
    ├── Configuration(modm:disco-f407vg)
    ├── Module(modm:board)   Board Support Packages
    │   ╰── Module(modm:board:disco-f469ni)   STM32F469IDISCOVERY
    ├── Module(modm:build)   Build System Generators
    │   ├── PathOption(build.path) = build/parent-folder in [String]
    │   ├── Option(project.name) = parent-folder in [String]
    │   ╰── Module(modm:build:scons)  SCons Build Script Generator
    │       ├── Option(info.build) = no in [yes, no]
    │       ╰── Option(info.git) = Disabled in [Disabled, Info, Info+Status]
    ├── Module(modm:platform)   Platform HAL
    │   ├── Module(modm:platform:can)   Controller Area Network (CAN)
    │   │   ╰── Module(modm:platform:can:1)   Instance 1
    │   │       ├── Option(buffer.rx) = 32 in [1 .. 32 .. 65534]
    │   │       ╰── Option(buffer.tx) = 32 in [1 .. 32 .. 65534]
    │   ├── Module(modm:platform:core)   ARM Cortex-M Core
    │   │   ├── Option(allocator) = newlib in [block, newlib, tlsf]
    │   │   ├── Option(main_stack_size) = 3072 in [256 .. 3072 .. 65536]
    │   │   ╰── Option(vector_table_location) = rom in [ram, rom]
```

You can now discover all module options in more detail:

```
 $ lbuild -r ../modm/repo.lb -D modm:target=stm32f407vgt discover-options
modm:target = stm32f407vgt in [at90can128, at90can32, at90can64, ...]

  Meta-HAL target device

modm:build:build.path = build/parent-folder in [String]

  Path to the build folder

modm:build:project.name = parent-folder in [String]

  Project name for executable
```

Or check out specific module and option descriptions:

```
 $ lbuild -r ../modm/repo.lb -D modm:target=stm32f407vgt discover -n :build
>> modm:build

# Build System Generators

This parent module defines a common set of functionality that is independent of
the specific build system generator implementation.

>>>> modm:build:project.name  [StringOption]

# Project Name

The project name defaults to the folder name you're calling lbuild from.

Value: parent-folder
Inputs: [String]

>>>> modm:build:build.path  [StringOption]

# Build Path

The build path is defaulted to `build/{modm:build:project.name}`.

Value: build/parent-folder
Inputs: [String]
```

The complete lbuild command line interface is available with `lbuild -h`.


### Configuration

Even though *lbuild* can be configured solely via the command line, it is
strongly recommended to create a configuration file (default: `project.xml`),
which *lbuild* will search for in the current working directory.

```xml
<library>
  <repositories>
    <!-- Declare all your repository locations relative to this file here -->
    <repository><path>path/to/repo.lb</path></repository>
    <!-- You can also use environment variables in all nodes -->
    <repository><path>${PROJECTHOME}/repo2.lb</path></repository>
    <!-- You can also search for repository files -->
    <glob>ext/**/repo.lb</glob>
  </repositories>
  <!-- You can also inherit from another config file. Options specified in
       this file override the inherited ones. -->
  <extends>path/to/config.xml</extends>
  <!-- A repository may provide aliases for configurations, so that you can
       use a string as well, instead of a path. This saves you from knowing
       exactly where the configuration file is stored in the repo.
       See also `repo.add_configuration(...)`. -->
  <extends>repo:name_of_config</extends>
  <!-- A configuration alias may also be versioned -->
  <extends>repo:name_of_config:specific_version</extends>
  <!-- You can declare *where* the output should be generated; default is the cwd -->
  <outpath>generated/folder</outpath>
  <options>
    <!-- Options are treated as key-value pairs -->
    <option name="repo:repo_option_name">value</option>
    <!-- An option set is the only one allowing multiple values -->
    <option name="repo:module:module_option_name">set, options, may, contain, commas</option>
  </options>
  <modules>
    <!-- You only need to declare the modules you are actively using.
         The dependencies are automatically resolved by lbuild. -->
    <module>repo:module</module>
    <module>repo:other_module:submodule</module>
  </modules>
</library>
```

On startup, *lbuild* searches upwards from the current working directory for one
or more `lbuild.xml` files which, if found, are used as the base configuration
inherited by all other configurations. This is very useful when several projects
all require the same repositories and you don't want to specify each repository
path for each project.

```xml
<library>
  <repositories>
    <repository><path>path/to/common/repo.lb</path></repository>
  </repositories>
  <modules>
    <module>repo:module-required-by-all</module>
  </modules>
</library>
```

In the simplest case your project just `<extends>` this base config.

```xml
<library>
  <extends>repo:config-name</extends>
</library>
```


### Files

*lbuild* properly imports the declared repository and modules files, so you can
use everything that Python has to offer.
In addition to `import`ing your required modules, *lbuild* provides these
global functions and classes for use in all files:

- `localpath(path)`: remaps paths relative to the currently executing file.
  All paths are already interpreted relative to this file, but you can use this
  to be explicit.
- `repopath(path)`: remaps paths relative to the repository file. You should use
  this to reference paths that are not related to your module.
- `listify(obj)`: turns obj into a list, maps `None` to empty list.
- `listrify(obj)`: turns obj into a list of strings, maps `None` to empty list.
- `uniquify(obj)`: turns obj into a unique list, maps `None` to empty list.
- `FileReader(path)`: reads the contents of a file and turns it into a string.
- `{*}Option(...)`: classes for describing options, [see Options](#options).
- `{*}Query(...)`: classes for sharing code and data, [see Queries](#queries).
- `{*}Collector(...)`: classes for describing metadata sinks, [see Collectors](#collectors).
- `Alias(...)`: links to other nodes, [see Aliases](#aliases).
- `Configuration(...)`: links to a configuration inside the repository.
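
The exact implementations ship with *lbuild* itself, but the semantics of the
list helpers can be sketched as follows (an illustrative reimplementation, not
lbuild's actual code):

```python
def listify(obj):
    # None -> [], single object -> [obj], iterable -> list(obj)
    if obj is None:
        return []
    if isinstance(obj, (list, tuple, set)):
        return list(obj)
    return [obj]

def listrify(obj):
    # like listify, but every element is converted to a string
    return [str(o) for o in listify(obj)]

def uniquify(obj):
    # like listify, but duplicates are removed (insertion order preserved)
    return list(dict.fromkeys(listify(obj)))
```

`listify` is handy whenever a module API should accept either a single value or
a list of values.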


### Repositories

*lbuild* calls these three functions for any repository file:

- `init(repo)`: provides name, documentation and other global functionality.
- `prepare(repo, options)`: adds all module files for this repository.
- `build(env)` (*optional*): *only* called if at least one module within the
  repository is built. It is meant for actions that must be performed for *any*
  module, like generating a global header file, or adding to the include path.

```python
# You can use everything Python has to offer
import antigravity

def init(repo):
    # You must give your repository a name, and it must be unique within the
    # scope of your project as it is used for namespacing all modules
    repo.name = "name"
    # You can set a repository description here, either as an inline string
    repo.description = "Repository Description"
    # or as a multi-line string
    repo.description = """
Multiline description.

Use whatever markup you want, lbuild treats it all as text.
"""
    # or read it from a separate file altogether
    repo.description = FileReader("module.md")

    # lbuild displays the descriptions as-is, without any modification, however,
    # you can set a custom format handler to change this for your repo.
    # NOTE: Custom format handlers are applied to all modules and options.
    def format_description(node, description):
        # in modm there's unit test metadata in HTML comments, let's remove them
        description = re.sub(r"\n?<!--.*?-->\n?", "", description, flags=re.S)
        # forward this to the default formatter
        return node.format_description(node, description)
    repo.format_description = format_description

    # You can also format the short descriptions for the discover views
    def format_short_description(node, description):
        # Remove the leading # from the Markdown headers
        return node.format_short_description(node, description.replace("#", ""))
    repo.format_short_description = format_short_description

    # Add ignore patterns for all repository modules
    # ignore patterns follow fnmatch rules
    repo.add_ignore_patterns("*/*.lb", "*/board.xml")

    # Add Jinja2 filters for all repository modules
    # NOTE: the filter is namespaced with the repository! {{ "A" | repo.number }} -> 65
    repo.add_filter("repo.number", lambda char: ord(char))

    # Add an alias for an internal configuration
    # NOTE: the configuration is namespaced with the repository! <extends>repo:config</extends>
    repo.add_configuration(Configuration(name="config",
                                         description="Special Config",
                                         path="path/to/config.xml"))
    # You can also add configuration versions
    repo.add_configuration(Configuration(name="config2",
                                         description="Versioned Config",
                                         path={"v1": "path/to/config_v1.xml",
                                               "v2": "path/to/config_v2.xml"}))

    # See Options for more option types
    repo.add_option(StringOption(name="option", default="value"))


def prepare(repo, options):
    # Access repository options via the `options` resolver
    if options["repo:option"] == "value":
        # Adds module files directly, or via globbing, all paths relative to this file
        repo.add_modules("folder/module.lb", repo.glob("*/*/module.lb"))
    # Searches recursively starting at basepath, adding any file that
    # fnmatch(`modulefile`), while ignoring fnmatch(`ignore`) patterns
    repo.add_modules_recursive(basepath=".", modulefile="*.lb", ignore="*/ignore/patterns/*")


# The build step is optional
def build(env):
    # Add the generated src/ folder to the header search path collector
    env.collect("::include_path", "src/")
    # See module.build(env) for complete feature description.
```


### Modules

*lbuild* calls these five functions for any module file:

- `init(module)`: provides module name, parent and documentation.
- `prepare(module, options)`: enables modules, adds options and submodules by
  taking the repository options into consideration.
- `validate(env)` (*optional*): validate your inputs before building anything.
- `build(env)`: generate your library and add metadata to build log.
- `post_build(env, buildlog)` (*optional*): access the build log after the build
  step completed.

Module files are provided with these additional global classes:

- `Module`: Base class for generated modules.
- `ValidateException`: Exception to be raised when the `validate(env)` step fails.

Note that in contrast to a repository, modules must return a boolean from the
`prepare(module, options)` function, which indicates that the module is available
for the repository option configuration. This allows for modules to "share" a
name, but have completely different implementations.

The `validate(env)` step is used to validate the input for the build step,
allowing for computations that can fail to raise a `ValidateException("reason")`.
*lbuild* will collect these exceptions for all modules and display them
together before aborting the build. This step is performed before each build,
and you cannot generate any files in this step, only read the repository's state.
You can manually call this step via the `lbuild validate` command.

The `build(env)` step is where the actual file generation happens. Here you can
copy files and folders, and generate them from Jinja2 templates with the
substitutions of your choice and the configuration of the modules. Each file
operation is appended to a global build log, to which you can also explicitly
add metadata.

The `post_build(env)` step is meant for modules that need to generate
files which receive information from all built modules. The typical use case
here is generating scripts for build systems, which need to know what files
were generated along with every module's metadata.

```python
def init(module):
    # give your module a hierarchical name, the repository name is implicit
    module.name = "repo:name"
    module.name = ":name"      # same as this
    # You can set a module description here
    module.description = "Description"
    module.description = """Multiline"""
    module.description = FileReader("module.md")
    # modules can have their own formatters, works the same as for repositories
    module.format_description = custom_format_description
    module.format_short_description = custom_format_short_description
    # Add Jinja2 filters for this module and all submodules
    # NOTE: the filter is namespaced with the repository! {{ 65 | repo.character }} -> "A"
    module.add_filter("repo.character", lambda number: chr(number))


def prepare(module, options):
    # Access repository options via the `options` resolver
    if options["repo:option"] == "value":
        # Returning False from this step disables this module
        return False

    # modules can depend on other modules
    module.depends("repo:module1", ":module2", ":module3:submodule", ...)

    # You can add more submodules in files
    module.add_submodule("folder/submodule.lb")

    # You can generate more modules here. This is useful if you have a lot of
    # very similar modules (like instances of hardware peripherals) that you
    # don't want to create a separate module file for each.
    class Instance(Module):
        def __init__(self, instance):
            self.instance = instance
        def init(self, module):
            module.name = str(self.instance)
        def prepare(self, module, options):
            pass
        def validate(self, env): # optional
            pass
        def build(self, env):
            pass
        def post_build(self, env): # optional
            pass

    # You can statically create and add these submodules
    for index in range(0, 5):
        module.add_submodule(Instance(index))
    # or make the creation dependent on a repository option
    for index in options["repo:instances"]:
        module.add_submodule(Instance(index))

    # See Options for more option types
    module.add_option(StringOption(name="option", default="world"))

    def common_operation(args):
        """
        You can share any function with other modules.
        This is useful to not have to duplicate code across module.lb files.
        """
        return args
    # See Queries for more query types
    module.add_query(Query(name="shared_function", function=common_operation))

    # You can collect information from active modules, to use any post_build step
    # See Collectors for more collector types
    module.add_collector(
        PathCollector(name="include_path", description="Global header search paths"))

    # Make this module available
    return True


# store data computed in validate step for build step.
build_data = None
# The validation step is optional
def validate(env):
    # Perform your input validations here
    # Access all options
    repo_option = env["repo:option"]
    defaulted_option = env.get("repo:module:option", default="hello")
    # Use proper logging instead of print() please
    # env.log.warning(...) and env.log.error(...) also available
    env.log.debug("Repo option: '{}'".format(repo_option))

    # You can query for options
    if env.has_option("repo:module:option") or env.has_module("repo:module"):
        env.log.info("Module option: '{}'".format(env["repo:module:option"]))

    # Call shared functions from other modules with arguments
    shared_function = env.query("repo:module:shared_function")
    result = shared_function("argument")
    # Or just precomputed properties without arguments
    data = env.query("repo:module:shared_property")

    # You may also use incomplete queries, see Name Resolution
    env.has_module(":module") # instead of repo:module
    env.has_option("::option") # repo:module:option
    # And use fnmatch queries
    # matches any module starting with `mod` and option starting with `name`.
    env.has_option(":mod*:name*")
    env.has_query("::shared_*")
    env.has_collector("::collector")

    # Raise a ValidateException if something is wrong
    if defaulted_option + repo_option != "hello world":
        raise ValidateException("Options are invalid because ...")

    # If you do heavy computations here for validation, you can store the
    # data in a global variable and reuse this for the build step
    global build_data
    build_data = defaulted_option * 2


# The build step can do everything the validation step can
# But now you can finally generate files
def build(env):
    # Set the output base path, this is relative to the lbuild invocation path
    env.outbasepath = "repo/module"

    # Copy single files
    env.copy("file.hpp")
    # Copy single files while renaming them
    env.copy("file.hpp", "cool_filename.hpp")
    # Relative paths are preserved!!!
    env.copy("../file.hpp") # copies to repo/file.hpp
    env.copy("../file.hpp", dest="file.hpp") # copies to repo/module/file.hpp

    # You can also copy entire folders
    env.copy("folder/", dest="renamed/")
    # and ignore specific RELATIVE files/folders
    env.copy("folder/", ignore=env.ignore_files("*.txt", "this_specific_file.hpp"))
    # or ignore specific ABSOLUTE paths
    env.copy("folder/", ignore=env.ignore_paths("*/folder/*.txt"))

    # You can also copy files out of a .zip or .tar archive
    env.extract("archive.zip") # everything inside the archive
    env.extract("archive.zip", dest="renamed/") # extract into folder
    # You can extract only parts of the archive, like a single file
    env.extract("archive.zip", src="filename.hpp", dest="renamed.hpp")
    # or a single folder somewhere in the archive
    env.extract("archive.zip", src="folder/subfolder", dest="renamed/folder")
    # of course, you can ignore files and folders inside the archive too
    env.extract("archive.zip", src="folder", dest="renamed", ignore=env.ignore_files("*.txt"))

    # Set the global Jinja2 substitutions dictionary
    env.substitutions = {
        "hello": "world",
        "instances": map(str, env["repo:instances"]),
        "build_data": build_data, # from validation step
    }
    # and generate a file from a template
    env.template("template.hpp.in")
    # any `.in` postfix is automatically removed, unless you rename it
    for instance in env["repo:instances"]:
        env.template("template.hpp.in", "template_{}.hpp".format(instance))
    # You can explicitly add Jinja2 substitutions and filters
    env.template("template.hpp.in",
                 substitutions={"more": "subs"},
                 filters={"stringify": lambda i: str(i)})
    # Note: these filters are NOT namespaced with the repository name!

    # submodules are built first, so you can access their generated files
    headers = env.get_generated_local_files(lambda file: file.endswith(".hpp"))
    # and use this information for a new template.
    env.template("module_header.hpp.in", substitutions={"headers": headers})

    # Add values to a collector, all these are type checked
    env.collect("::include_path", "repo/must_be_valid_path/", "repo/folder2/")


# The post build step can do everything the build step can,
# but you can't add to the metadata anymore:
# - env.collect() unavailable
# You have access to the entire buildlog up to this point
def post_build(env):
    # The absolute path to the lbuild output directory
    outpath = env.buildlog.outpath

    # All modules that were built
    modules = env.buildlog.modules
    # All file generation operations that were done
    operations = env.buildlog.operations
    # All operations per module
    operations = env.buildlog.operations_per_module("repo:module")

    # iterate over all operations directly
    for operation in env.buildlog:
        # Get the module name that generated this file
        env.log.info("Module({}) generated the '{}' file"
                     .format(operation.module, operation.filename))
        # You can also get the filename relative to a subfolder in outpath
        env.relative_output(operation.filename, "subfolder/")
        # or as an absolute path
        env.real_output(operation.filename, "subfolder/")

    # get all include paths from all active modules
    include_paths = env.collector_values("::include_path")
```

### Options

*lbuild* options are mappings from strings to Python objects.
Each option must have a unique name within their parent repository or module.
If you do not provide a default value, the option is marked as **REQUIRED** and
the project cannot be built without it.

```python
def prepare(module, options):
    # Add option to module
    option = Option(...)
    module.add_option(option)

def build(env):
    # Check if options exist
    exists = env.has_option(":module:option")
    # Access option value or use default if option doesn't exist
    value = env.get(":module:option", default="value")
    # Access the option value; this raises an exception if the option doesn't exist
    value = env[":module:option"]
```

If your option requires a unique set of input values, you can tell *lbuild* to 
wrap the option into a set using `module.add_set_option()`:

```python
def prepare(module, options):
    # Add an option, but allow a set of unique values as input and output
    module.add_set_option(option)

def build(env):
    # a unique set of option values is returned here
    for value in env[":module:option"]:
        print(value)
```

Option sets are declared as comma-separated strings, so that inheriting
configurations or option values passed via the CLI can overwrite these sets.
A `StringOption` cannot be wrapped into a set for this reason; however, it's
easy to split your string value in Python exactly how you want.

```xml
<!-- All comma separated values are validated by the option -->
<option name=":module:set-option">value, 1, obj</option>
```
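
Since a `StringOption` value arrives as-is, splitting it yourself is a
one-liner; for example, a small helper you could pass as the option's
`transform` handler (the helper itself is illustrative, not part of lbuild):

```python
def split_commas(string):
    # turn "a, b, c" into ["a", "b", "c"], dropping empty entries
    return [part.strip() for part in string.split(",") if part.strip()]

# e.g. StringOption(name="values", default="a, b", transform=split_commas)
```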

If you want to preserve duplicates, for example to count the number of inputs,
use a list option via `module.add_list_option()`:

```python
def prepare(module, options):
    # Add an option, but allow a list of values as input and output
    module.add_list_option(option)

def build(env):
    # a list of option values is returned here
    value_count = env[":module:option"].count("value")
```

Options can have a dependency handler which is called when the project
configuration is merged into the module options. It will give you the chosen
input value and you can return a number of module dependencies.

```python
def add_option_dependencies(value):
    if special_condition(value):
        # return single dependency
        return "repo:module"
    if other_special_condition(value):
        # return multiple dependencies
        return [":module1", ":module2"]
    # No additional dependencies
    return None
```


#### StringOption

This is the most generic option, accepting any string as input.
You may, however, provide your own validator, which may raise a `ValueError`
if the input string does not match your expectations.
You may also pass a transformation function to convert the option value.
The string is passed unmodified from the configuration to the module and the
dependency handler.

```python
def validate_string(string):
    if "please" not in string:
        raise ValueError("Input does not contain the magic word!")

def transform_string(string):
    return string.lower()

option = StringOption(name="option-name",
                      description="inline", # or FileReader("file.md")
                      default="default string",
                      validate=validate_string,
                      transform=transform_string,
                      dependencies=add_option_dependencies)
```


#### PathOption

This option operates on strings, but additionally validates them to be
syntactically valid paths, so the filesystem accepts these strings
as valid arguments to path operations. This option does not check if the path
exists, or if it can be created, just if the string is properly formatted.

Since an empty string is not a valid path, but it can be useful to allow an
empty string as an input value to encode a special case (like a "disable" value),
you may set `empty_ok=True` to tell the path validation to ignore empty strings.

By default, the path input is not modified and must be correctly interpreted in
the context of the module that uses it (usually relocated to the output path).
However, if you want to input an existing path you should set `absolute=True`,
so that *lbuild* can relocate the *relative path* declared in the config files
to an absolute path, which is independent of the CWD.
This is particularly useful if you declare paths in config files that are not
located at the project root, like options inherited from multiple `lbuild.xml`.

```python
option = PathOption(name="option-name",
                    description="path",
                    default="path/to/folder/or/file",
                    empty_ok=False, # is an empty path considered valid?
                    absolute=False, # is the path relative to the config file?
                    validate=validate_path,
                    dependencies=add_option_dependencies)
```


#### BooleanOption

This option maps strings from `true`, `yes`, `1`, `enable` to `bool(True)` and
`false`, `no`, `0`, `disable` to `bool(False)`. You can extend this list with a
custom transform handler. The dependency handler is passed this `bool` value.

```python
def transform_boolean(string):
    if string == 'y':
        return True
    if string == 'n':
        return False
    return string  # hand over to built-in conversion

option = BooleanOption(name="option-name",
                       description="boolean",
                       default=True,
                       transform=transform_boolean,
                       dependencies=add_option_dependencies)
```


#### NumericOption

This option allows a number from [-Inf, +Inf]. You can limit this to the
range [minimum, maximum].
The values can be specified directly as numeric value or as a string, which is
interpreted using the `eval()` function, so that you can describe values as more
intuitive formulas when necessary.
You can also suffix numbers with the SI multipliers `K`, `M`, `G`, `T`, `Ki`,
`Mi`, `Gi`, and `Ti` to simplify formulas even further.
Note that you should use strings to specify precise floating point values such
as "1/3". The validation and dependency handlers are passed a numeric value.

```python
option = NumericOption(name="option-name",
                       description="numeric",
                       minimum=0,
                       maximum="5Mi*2",
                       default="1K",
                       validate=validate_number,
                       dependencies=add_option_dependencies)
```
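The SI suffixes are expanded before evaluation. As an illustration only (a minimal sketch with a hypothetical `parse_numeric` helper, not lbuild's actual implementation), such suffixed expressions could be handled like this:

```python
import re

# Decimal and binary SI multipliers as documented above.
SI = {"K": 10**3, "M": 10**6, "G": 10**9, "T": 10**12,
      "Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def parse_numeric(value):
    """Hypothetical helper: evaluate a numeric option string,
    expanding SI multiplier suffixes first."""
    if not isinstance(value, str):
        return value
    # Rewrite e.g. "5Mi" as "(5*1048576)"; longer suffixes match first.
    def expand(match):
        return "({}*{})".format(match.group(1), SI[match.group(2)])
    expression = re.sub(r"(\d+\.?\d*)(Ki|Mi|Gi|Ti|K|M|G|T)", expand, value)
    # The docs state that string values are interpreted via eval().
    return eval(expression, {"__builtins__": {}}, {})
```

With this sketch, `parse_numeric("5Mi*2")` yields `10485760`, and `parse_numeric("1/3")` performs a precise float division, matching the behavior described above.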


#### EnumerationOption

This option maps a string to any generic Python object.
You can provide a list, set, tuple or range; the only limitation is that
the objects must be convertible to a string for unique identification.
If this is not possible, you can provide a dictionary with a manual mapping
from string to object. The dependency handler is passed the string value.

```python
option = EnumerationOption(name="option-name",
                           description="enumeration",
                           # must be implicitly convertible to string!
                           enumeration=["value", 1, obj],
                           # or use a dictionary explicitly
                           enumeration={"value": "value", "1": 1, "obj": obj},
                           default="1",
                           dependencies=add_option_dependencies)
```


### Queries

It is sometimes necessary to share code and data between *lbuild* modules,
which can be difficult when they are split across files and repositories.
Queries allow you to share functions and computed properties with other modules
using the global name resolution system.

```python
def prepare(module, options):
    # Add queries to module
    query = Query(...)
    module.add_query(query)

def build(env):
    exists = env.has_query(":module:query")
    # Access query value or use default if query doesn't exist
    data = env.query(":module:query", default="value")
```

*Note that queries must be stateless (i.e. pure functions), since module build
order is not guaranteed. You must enforce this property yourself.*

You can discover all the available queries in your repository using
`lbuild discover --developer`.


#### Query

This wraps any callable object into a query. By default the name is taken from
the object's name; however, you may override it.
Note that when using a lambda function, you must provide a name.
The description is taken from the object's docstring.

```python
def shared_function(args):
    """
    Describe what this query does.

    :param args: what does it need?
    :returns: what does it return?
    """
    return args

query = Query(function=shared_function)
query = Query(name="different_name",
              function=shared_function)
```


#### EnvironmentQuery

This query's result is computed only once on demand and then cached.

The data must be returned from a factory function that gets passed the
environment of the first module to access this query.
The return value is then cached for all further accesses.
This allows you to lazily compute your shared properties only once and only if
accessed by any module.

```python
def factory(env):
    """
    Describe what this query is about, but don't document the `env` argument.

    :returns: an immutable object
    """
    # You can read the build environment, but cannot modify it here
    value = env["repo:module:option"]
    # This return data is cached, so this function is only called once.
    return {"key": value}

query = EnvironmentQuery(name="name",
                         factory=factory)
```


### Collectors

The post-build step has access to the build log, which contains the list of
modules that were built and the files they generated.
However, modules often need to pass additional data to the post-build steps,
beyond what can be recovered from the build log alone.

*lbuild* allows each module to declare what metadata it wants using a collector,
which is given a name, description and optional limitations depending on type.
In the build step, each module may add values to this collector, which the
post-build steps then can access.

```python
def prepare(module, options):
    # Add a collector to module
    collector = Collector(...)
    module.add_collector(collector)

def build(env):
    exists = env.has_collector(":module:collector")
    # Add values to this collector
    env.collect(":module:collector", "value1", "value2", "value3")

def post_build(env):
    # Get all unique values from all operations
    unique_values = env.collector_values(":module:collector")
    # Get all values from all operations, even duplicates!
    all_values = env.collector_values(":module:collector", unique=False)
```

Note that the ordering of values is preserved only relative to the order in
which they were added within a module, and only when accessing them
non-uniquely! In the above example, the relative order of `value1`, `value2`
and `value3` is therefore only guaranteed for `unique=False` access.

When you add values to a collector, the current operation is recorded,
consisting of the current module; however, you may also explicitly set this to
a set of file operations:

```python
def build(env):
    operation = env.copy("file.hpp")
    # Add values to this collector for the file operation
    env.collect(":module:collector", "values", operations=operation)

    # The return value from a file operation is actually a set of operations
    operations = env.copy("folder1/")
    # So you can extend this set for multiple file operations
    operations |= env.copy("folder2/")
    # And then filter this set of operations
    operations = filter(lambda op: op.filename.endswith(".txt"), operations)
    # Only add this metadata to .txt files
    env.collect(":module:collector", "txt-file-values", operations=operations)

    # A file operation object has these properties:
    operation.module # full module name, this is always available
    operation.repository # repository name, always available
    operation.has_filename # Some operations are specific to files
    operation.filename # The generated filename relative to outpath

def post_build(env):
    # You can use these operation properties to filter the collector values
    txt_filter = lambda op: op.repository == "repo" and op.filename.endswith(".txt")
    unique_txt_values = env.collector_values(":module:collector", filterfunc=txt_filter)
    # May contain duplicate values!
    all_txt_values = env.collector_values(":module:collector", filterfunc=txt_filter, unique=False)
```

If you have very special requirements for the ordering of values (for example
when collecting compile flags), consider iterating over the collector's items
manually, and possibly de-duplicating and reordering the values yourself.

```python
def post_build(env):
    # Get the collector, may return None if collector does not exist!
    collector = env.collector(":module:collector")
    if collector is not None:
        for operation, values in collector.items():
            # values is a list and may contain duplicates
            print(operation.module, values)
            if operation.has_filename: # not all operations have filenames!
                print(operation.filename)
```

Note that collector values that were added by a module without explicit
operations do not have filenames, only module names!

Collectors are implemented using the same type-safe mechanisms as
[Options](#Options), the only differences are the lack of dependency handlers
and default values, since you can add default values in the modules build step.

You may add collector values via the project configuration. However, since
these collector values cannot be overwritten by inheriting configurations, use
this with care.

```xml
<library>
  <collectors>
    <collect name="repo:collector_name">value</collect>
    <collect name="repo:collector_name">value2</collect>
  </collectors>
</library>
```

You can discover all the available collectors in your repository using
`lbuild discover --developer`.


#### CallableCollector

This collector allows you to collect callable objects that the post-build step
can execute. This can be useful for providing specializations to the
post-build module without it needing to know how they work.

```python
collector = CallableCollector(name="collector-name",
                              description="callable")
```


#### StringCollector

See [StringOption](#StringOption) for documentation.

```python
collector = StringCollector(name="collector-name",
                            description="string",
                            validate=validate_function)
```


#### PathCollector

See [PathOption](#PathOption) for documentation.

```python
collector = PathCollector(name="collector-name",
                          description="path",
                          empty_ok=False,
                          absolute=False)
```


#### BooleanCollector

See [BooleanOption](#BooleanOption) for documentation.

```python
collector = BooleanCollector(name="collector-name",
                             description="boolean")
```


#### NumericCollector

See [NumericOption](#NumericOption) for documentation.

```python
collector = NumericCollector(name="collector-name",
                             description="numeric",
                             minimum=0,
                             maximum=100)
```


#### EnumerationCollector

See [EnumerationOption](#EnumerationOption) for documentation.

```python
collector = EnumerationCollector(name="collector-name",
                                 description="enumeration",
                                 enumeration=enumeration)
```


### Aliases

*lbuild* aliases are mappings from one lbuild node to another. They are useful
for gracefully dealing with renaming or moving nodes in your lbuild module tree.
Aliases print a warning showing the alias description when accessed. Each
alias must have a unique name within its parent repository or module.

Aliases can be used for any type of node that you want forwarded. You can also
add aliases that do not have a destination and will raise an exception with the
alias description. This allows you to remove lbuild nodes while providing
details for a workaround.

```python
def prepare(module, options):
    # The option was renamed within this module
    module.add_option(Option(name="option"))
    # Forward the old option to the new option
    module.add_alias(Alias(name="option-alias",
                           destination="option",
                           description="Renamed for clarity."))
    # Instead of silently failing, you can provide a detailed description
    # about why the node was removed and what the workaround is.
    module.add_alias(Alias(name="removed-alias",
                           description="Removed. Workaround: ..."))
    # You can alias any type to any other node.
    module.add_alias(Alias(name="submodule-alias",
                           destination=":other-module:submodule",
                           description="Moved into other module."))

def build(env):
    # Shows a warning (once) that the option has been renamed
    exists = env.has_option(":module:option-alias")
    # Accesses :module:option instead
    value = env[":module:option-alias"]
    # This will raise an exception with the alias description
    value = env[":module:removed-alias"]
    # Checks for :other-module:submodule instead
    exists = env.has_module(":module:submodule-alias")
```


### Jinja2 Configuration

*lbuild* uses the [Jinja2 template engine][jinja2] with the following global
configuration:

- Line statements start with `%% statement`.
- Line comments start with `%# comment`.
- Undefined variables throw an exception instead of being ignored.
- Global extensions: `jinja2.ext.do`.
- Global substitutions are:
  + `time`: `strftime("%d %b %Y, %H:%M:%S")`
  + `options`: an option resolver in the context of the current module.
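For reference, an equivalent Jinja2 environment can be configured like this (a sketch of the documented settings, not lbuild's internal setup):

```python
from jinja2 import Environment, StrictUndefined

env = Environment(
    line_statement_prefix="%%",    # line statements: %% statement
    line_comment_prefix="%#",      # line comments: %# comment
    undefined=StrictUndefined,     # undefined variables raise an error
    extensions=["jinja2.ext.do"],  # the documented global extension
)

template = env.from_string(
    "%# this line comment is removed\n"
    "%% for i in range(count)\n"
    "line {{ i }}\n"
    "%% endfor\n")
print(template.render(count=2))
```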


### Name Resolution

*lbuild* manages repositories, modules and options in a tree structure and
serializes their identification into unique strings, using `:` as the
hierarchy delimiter.
Any identifier provided via the command line, configuration, repository or
module files uses the same resolver, which allows using *partially-qualified*
identifiers. In addition, globbing for multiple identifiers using fnmatch
semantics is supported.

The following rules for resolving identifiers apply:

1. A fully-qualified identifier specifies all parts: `repo:module:option`.
2. A partially-qualified identifier adds fnmatch wildcards: `*:m.dule:opt*`.
3. `*` wildcards for entire hierarchy levels can be omitted: `::option`.
4. A special wildcard is `:**`, which globs for everything below the current
   hierarchy level: `repo:**` selects everything in `repo`, `repo:module:**`
   everything in `repo:module`, etc.
5. Wildcards are resolved in reverse hierarchical order. Therefore, `::option`
   may be unique within the context of `:module`, but not within the entire
   project.
6. For accessing direct children, you may specify their name without any
   delimiters: `option` within the context of `:module` will resolve to
   `:module:option`.
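Rules 2-4 can be approximated with Python's `fnmatch` module. The following resolver is a simplified sketch to illustrate the matching semantics, not lbuild's actual implementation:

```python
from fnmatch import fnmatchcase

def resolve(query, identifiers):
    """Match a possibly partially-qualified identifier against
    fully-qualified ones (illustrative sketch only)."""
    # Rule 3: empty hierarchy parts act as "*" wildcards.
    pattern = ":".join(part if part else "*" for part in query.split(":"))
    # Rule 4: ":**" globs for everything below the given hierarchy level.
    if pattern.endswith(":**"):
        pattern = pattern[:-2] + "*"
    return [i for i in identifiers if fnmatchcase(i, pattern)]

nodes = ["repo:module:option", "repo:module:other", "repo:module2:option"]
resolve("::option", nodes)        # both "option" nodes match
resolve("repo:module:**", nodes)  # everything below repo:module
```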

Partial identifiers were introduced to reduce verbosity and aid refactoring;
it is therefore recommended to:

1. Omit the repository name when accessing modules and options within the same
   repository.
2. Access a module's options directly by their name.


### Execution order

*lbuild* executes in this order:

1. `repository:init()`
2. Create repository options
3. `repository:prepare(repo-options)`
4. Find all modules in repositories
5. `module:init()`
6. `module:prepare(repo-options)`
7. Create module options
8. Resolve module dependencies
9. `module:validate(env)`: submodules-first, *optional*
10. `module:build(env)`: submodules-first
11. `repository:build(env)`: *optional*
12. `module:post_build(env)`: submodules-first, *optional*
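Tying these steps together, a module file implementing every hook has this shape (a sketch using the lbuild globals described above; `validate` and `post_build` are optional):

```python
def init(module):
    # Step 5: name the module
    module.name = ":example"

def prepare(module, options):
    # Step 6: declare options, dependencies and submodules,
    # then report whether this module is available
    module.add_option(StringOption(name="option", default="value"))
    return True

def validate(env):
    # Step 9 (optional): raise ValidateException("reason") on bad input
    pass

def build(env):
    # Step 10: generate files and collect metadata
    env.copy("static.hpp")

def post_build(env):
    # Step 12 (optional): runs after all modules were built
    pass
```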


[modm]: https://modm.io/how-modm-works
[taproot]: https://github.com/uw-advanced-robotics/taproot
[outpost]: https://github.com/DLR-RY/outpost-core
[jinja2]: http://jinja.pocoo.org
[python]: https://www.python.org
[pypi]: https://pypi.org/project/lbuild
[salkinium]: https://github.com/salkinium

            

Raw data

            {
    "_id": null,
    "home_page": "https://github.com/modm-io/lbuild",
    "name": "lbuild",
    "maintainer": "",
    "docs_url": null,
    "requires_python": ">=3.8.0",
    "maintainer_email": "",
    "keywords": "library builder generator",
    "author": "Fabian Greif, Niklas Hauser",
    "author_email": "fabian.greif@rwth-aachen.de, niklas@salkinium.com",
    "download_url": "https://files.pythonhosted.org/packages/1b/83/9e88bcf431a3a76913c210091e08fec230007fed604937aa684b5d6c4f96/lbuild-1.21.8.tar.gz",
    "platform": null,
    "description": "# lbuild: generic, modular code generation in Python 3\n\nThe Library Builder (pronounced *lbuild*) is a BSD licensed [Python 3 tool][python]\nfor describing repositories containing modules which can copy or generate a set\nof files based on the user provided data and options.\n\n*lbuild* allows splitting up complex code generation projects into smaller\nmodules with configurable options, and provides for their transparent\ndiscovery, documentation and dependency management.\nEach module is written in Python 3 and declares its options and how to generate\nits content via the [Jinja2 templating engine][jinja2] or a file/folder copy.\n\nYou can [install *lbuild* via PyPi][pypi]: `pip install lbuild`\n\nProjects using *lbuild*:\n\n- [modm generates a HAL for thousands of embedded devices][modm] using *lbuild*\n  and a data-driven code generation pipeline.\n- [Taproot: a friendly control library and framework for RoboMaster robots][taproot]\n  uses *lbuild*.\n- [OUTPOST - Open modUlar sofTware PlatfOrm for SpacecrafT][outpost] uses *lbuild*\n  to assemble an execution platform targeted at embedded systems running mission\n  critical software.\n\nThe dedicated maintainer of *lbuild* is [@salkinium][salkinium].\n\n\n## Overview\n\nConsider this repository:\n\n```\n $ lbuild discover\nParser(lbuild)\n\u2570\u2500\u2500 Repository(repo @ ../repo)\n    \u251c\u2500\u2500 Option(option) = value in [value, special]\n    \u251c\u2500\u2500 Module(repo:module)\n    \u2502   \u251c\u2500\u2500 Option(option) = yes in [yes, no]\n    \u2502   \u251c\u2500\u2500 Module(repo:module:submodule)\n    \u2502   \u2502   \u2570\u2500\u2500 Option(option) = REQUIRED in [1, 2, 3, 4, 5]\n    \u2502   \u2570\u2500\u2500 Module(repo:module:submodule2)\n    \u2570\u2500\u2500 Module(modm:module2)\n```\n\n*lbuild* is called by the user with a configuration file which contains the\nrepositories to scan, the modules to include and the options to configure\nthem 
with:\n\n```xml\n<library>\n  <repositories>\n    <repository><path>../repo/repo.lb</path></repository>\n  </repositories>\n  <options>\n    <option name=\"repo:option\">special</option>\n    <option name=\"repo:module:option\">3</option>\n  </options>\n  <modules>\n    <module>repo:module</module>\n  </modules>\n</library>\n```\n\nThe `repo.lb` file is compiled by *lbuild* and the two functions `init`,\n`prepare` are called:\n\n```python\ndef init(repo):\n    repo.name = \"repo\"\n    repo.add_option(EnumerationOption(name=\"option\",\n                                      enumeration=[\"value\", \"special\"],\n                                      default=\"value\"))\n\ndef prepare(repo, options):\n    repo.find_modules_recursive(\"src\")\n```\n\nThis gives the repository a name and declares a string option. The prepare step\nadds all module files in the `src/` folder.\n\nEach `module.lb` file is then compiled by *lbuild*, and the three functions\n`init`, `prepare` and `build` are called:\n\n```python\ndef init(module):\n    module.name = \":module\"\n\ndef prepare(module, options):\n    if options[\"repo:option\"] == \"special\":\n        module.add_option(EnumerationOption(name=\"option\", enumeration=[1, 2, 3, 4, 5]))\n        return True\n    return False\n\ndef build(env):\n    env.outbasepath = \"repo/module\"\n    env.copy(\"static.hpp\")\n    for number in range(env[\"repo:module:option\"]):\n        env.template(\"template.cpp.in\", \"template_{}.cpp\".format(number + 1))\n```\n\nThe init step sets the module's name and its parent name. The prepare step\nthen adds a `EnumerationOption` and makes the module available, if the repository option\nis set to `\"special\"`. 
Finally in the build step, a number of files are generated\nbased on the option's content.\n\nThe files are generated at the call-site of `lbuild build` which would then\nlook something like this:\n\n```\n $ ls\nmain.cpp        project.xml\n $ lbuild build\n $ tree\n.\n\u251c\u2500\u2500 main.cpp\n\u251c\u2500\u2500 repo\n\u2502   \u251c\u2500\u2500 module\n\u2502   \u2502   \u251c\u2500\u2500 static.hpp\n\u2502   \u2502   \u251c\u2500\u2500 template_1.cpp\n\u2502   \u2502   \u251c\u2500\u2500 template_2.cpp\n\u2502   \u2502   \u2514\u2500\u2500 template_3.cpp\n```\n\n\n## Documentation\n\nThe above example shows a minimal feature set, but *lbuild* has a few more\ntricks up its sleeves. Let's have a look at the API in more detail with examples\nfrom [the modm repository][modm].\n\n\n### Command Line Interface\n\nBefore you can build a project you need to provide a configuration.\n*lbuild* aims to make discovery easy from the command line:\n\n```\n $ lbuild --repository ../modm/repo.lb discover\nParser(lbuild)\n\u2570\u2500\u2500 Repository(modm @ ../modm)   modm: a barebone embedded library generator\n    \u2570\u2500\u2500 Option(target) = REQUIRED in [at90can128, at90can32, at90can64, ...\n```\n\nThis gives you an overview of the repositories and their options. In this case\nthe `modm:target` repository option is required, so let's check that out:\n\n```\n $ lbuild -r ../modm/repo.lb discover-options\nmodm:target = REQUIRED in [at90can128, at90can32, at90can64, at90pwm1, at90pwm161, at90pwm2,\n                           ... 
a really long list ...\n                           stm32l4s9vit, stm32l4s9zij, stm32l4s9zit, stm32l4s9ziy]\n\n  Meta-HAL target device\n```\n\nYou can then choose this repository option and discover the available modules\nfor this specific repository option:\n\n```\n $ lbuild -r ../modm/repo.lb --option modm:target=stm32f407vgt discover\nParser(lbuild)\n\u2570\u2500\u2500 Repository(modm @ ../modm)   modm: a barebone embedded library generator\n    \u251c\u2500\u2500 Option(target) = stm32f407vgt in [at90can128, at90can32, at90can64, ...]\n    \u251c\u2500\u2500 Configuration(modm:disco-f407vg)\n    \u251c\u2500\u2500 Module(modm:board)   Board Support Packages\n    \u2502   \u2570\u2500\u2500 Module(modm:board:disco-f469ni)   STM32F469IDISCOVERY\n    \u251c\u2500\u2500 Module(modm:build)   Build System Generators\n    \u2502   \u251c\u2500\u2500 PathOption(build.path) = build/parent-folder in [String]\n    \u2502   \u251c\u2500\u2500 Option(project.name) = parent-folder in [String]\n    \u2502   \u2570\u2500\u2500 Module(modm:build:scons)  SCons Build Script Generator\n    \u2502       \u251c\u2500\u2500 Option(info.build) = no in [yes, no]\n    \u2502       \u2570\u2500\u2500 Option(info.git) = Disabled in [Disabled, Info, Info+Status]\n    \u251c\u2500\u2500 Module(modm:platform)   Platform HAL\n    \u2502   \u251c\u2500\u2500 Module(modm:platform:can)   Controller Area Network (CAN)\n    \u2502   \u2502   \u2570\u2500\u2500 Module(modm:platform:can:1)   Instance 1\n    \u2502   \u2502       \u251c\u2500\u2500 Option(buffer.rx) = 32 in [1 .. 32 .. 65534]\n    \u2502   \u2502       \u2570\u2500\u2500 Option(buffer.tx) = 32 in [1 .. 32 .. 65534]\n    \u2502   \u251c\u2500\u2500 Module(modm:platform:core)   ARM Cortex-M Core\n    \u2502   \u2502   \u251c\u2500\u2500 Option(allocator) = newlib in [block, newlib, tlsf]\n    \u2502   \u2502   \u251c\u2500\u2500 Option(main_stack_size) = 3072 in [256 .. 3072 .. 
65536]\n    \u2502   \u2502   \u2570\u2500\u2500 Option(vector_table_location) = rom in [ram, rom]\n```\n\nYou can now discover all module options in more detail:\n\n```\n $ lbuild -r ../modm/repo.lb -D modm:target=stm32f407vgt discover-options\nmodm:target = stm32f407vgt in [at90can128, at90can32, at90can64, ...]\n\n  Meta-HAL target device\n\nmodm:build:build.path = build/parent-folder in [String]\n\n  Path to the build folder\n\nmodm:build:project.name = parent-folder in [String]\n\n  Project name for executable\n```\n\nOr check out specific module and option descriptions:\n\n```\n $ lbuild -r ../modm/repo.lb -D modm:target=stm32f407vgt discover -n :build\n>> modm:build\n\n# Build System Generators\n\nThis parent module defines a common set of functionality that is independent of\nthe specific build system generator implementation.\n\n>>>> modm:build:project.name  [StringOption]\n\n# Project Name\n\nThe project name defaults to the folder name you're calling lbuild from.\n\nValue: parent-folder\nInputs: [String]\n\n>>>> modm:build:build.path  [StringOption]\n\n# Build Path\n\nThe build path is defaulted to `build/{modm:build:project.name}`.\n\nValue: build/parent-folder\nInputs: [String]\n```\n\nThe complete lbuild command line interface is available with `lbuild -h`.\n\n\n### Configuration\n\nEven though *lbuild* can be configured sorely via the command line, it is\nstrongly recommended to create a configuration file (default is `project.xml`)\nwhich *lbuild* will search for in the current working directory.\n\n```xml\n<library>\n  <repositories>\n    <!-- Declare all your repository locations relative to this file here -->\n    <repository><path>path/to/repo.lb</path></repository>\n    <!-- You can also use environment variables in all nodes -->\n    <repository><path>${PROJECTHOME}/repo2.lb</path></repository>\n    <!-- You can also search for repository files -->\n    <glob>ext/**/repo.lb</glob>\n  </repositories>\n  <!-- You can also inherit from another 
configfile. The options you specify\n       here will be overwritten. -->\n  <extends>path/to/config.xml</extends>\n  <!-- A repository may provide aliases for configurations, so that you can\n       use a string as well, instead of a path. This saves you from knowing\n       exactly where the configuration file is stored in the repo.\n       See also `repo.add_configuration(...)`. -->\n  <extends>repo:name_of_config</extends>\n  <!-- A configuration alias may also be versioned -->\n  <extends>repo:name_of_config:specific_version</extends>\n  <!-- You can declare the *where* the output should be generated, default is cwd -->\n  <outpath>generated/folder</outpath>\n  <options>\n    <!-- Options are treated as key-value pairs -->\n    <option name=\"repo:repo_option_name\">value</option>\n    <!-- An option set is the only one allowing multiple values -->\n    <option name=\"repo:module:module_option_name\">set, options, may, contain, commas</option>\n  </options>\n  <modules>\n    <!-- You only need to declare the modules you are actively using.\n         The dependencies are automatically resolved by lbuild. -->\n    <module>repo:module</module>\n    <module>repo:other_module:submodule</module>\n  </modules>\n</library>\n```\n\nOn startup, *lbuild* will search the current working directory upwards for one or more\n`lbuild.xml` files, which if found, are used as the base configuration, inherited\nby all other configurations. 
This is very useful when several projects all\nrequire the same repositories, and you don't want to specify each repository\npath for each project.\n\n```xml\n<library>\n  <repositories>\n    <repository><path>path/to/common/repo.lb</path></repository>\n  </repositories>\n  <modules>\n    <module>repo:module-required-by-all</module>\n  </modules>\n</library>\n```\n\nIn the simplest case your project just `<extends>` this base config.\n\n```xml\n<library>\n  <extends>repo:config-name</extends>\n</library>\n```\n\n\n### Files\n\n*lbuild* properly imports the declared repository and modules files, so you can\nuse everything that Python has to offer.\nIn addition to `import`ing your required modules, *lbuild* provides these\nglobal functions and classes for use in all files:\n\n- `localpath(path)`: remaps paths relative to the currently executing file.\n  All paths are already interpreted relative to this file, but you can use this\n  to be explicit.\n- `repopath(path)`: remaps paths relative to the repository file. 
You should use\n  this to reference paths that are not related to your module.\n- `listify(obj)`: turns obj into a list, maps `None` to empty list.\n- `listrify(obj)`: turns obj into a list of strings, maps `None` to empty list.\n- `uniquify(obj)`: turns obj into a unique list, maps `None` to empty list.\n- `FileReader(path)`: reads the contents of a file and turns it into a string.\n- `{*}Option(...)`: classes for describing options, [see Options](#Options).\n- `{*}Query(...)`: classes for sharing code and data, [see Queries](#Queries).\n- `{*}Collector(...)`: classes for describing metadata sinks, [see Collectors](#Collectors).\n- `Alias(...)`: links to other nodes, [see Aliases](#Aliases).\n- `Configuration(...)`: links to a configuration inside the repository.\n\n\n### Repositories\n\n*lbuild* calls these three functions for any repository file:\n\n- `init(repo)`: provides name, documentation and other global functionality.\n- `prepare(repo, options)`: adds all module files for this repository.\n- `build(env)` (*optional*): *only* called if at least one module within the\n  repository is built. 
It is meant for actions that must be performed for *any*\n  module, like generating a global header file, or adding to the include path.\n\n```python\n# You can use everything Python has to offer\nimport antigravity\n\ndef init(repo):\n    # You must give your repository a name, and it must be unique within the\n    # scope of your project as it is used for namespacing all modules\n    repo.name = \"name\"\n    # You can set a repository description here, either as an inline string\n    repo.description = \"Repository Description\"\n    # or as a multi-line string\n    repo.description = \"\"\"\nMultiline description.\n\nUse whatever markup you want, lbuild treats it all as text.\n\"\"\"\n    # or read it from a separate file altogether\n    repo.description = FileReader(\"module.md\")\n\n    # lbuild displays the descriptions as-is, without any modification, however,\n    # you can set a custom format handler to change this for your repo.\n    # NOTE: Custom format handlers are applied to all modules and options.\n    def format_description(node, description):\n        # in modm there's unit test metadata in HTML comments, let's remove them\n        description = re.sub(r\"\\n?<!--.*?-->\\n?\", \"\", description, flags=re.S)\n        # forward this to the default formatter\n        return node.format_description(node, description)\n    repo.format_description = format_description\n\n    # You can also format the short descriptions for the discover views\n    def format_short_description(node, description):\n        # Remove the leading # from the Markdown headers\n        return node.format_short_description(node, description.replace(\"#\", \"\"))\n    repo.format_short_description = format_short_description\n\n    # Add ignore patterns for all repository modules\n    # ignore patterns follow fnmatch rules\n    repo.add_ignore_patterns(\"*/*.lb\", \"*/board.xml\")\n\n    # Add Jinja2 filters for all repository modules\n    # NOTE: the filter is namespaced with the 
repository! {{ \"A\" | repo.number }} -> 65\n    repo.add_filter(\"repo.number\", lambda char: ord(char))\n\n    # Add an alias for a internal configuration\n    # NOTE: the configuration is namespaced with the repository! <extends>repo:config</extends>\n    repo.add_configuration(Configuration(name=\"config\",\n                                         description=\"Special Config\",\n                                         path=\"path/to/config.xml\")\n    # You can also add configuration versions\n    repo.add_configuration(Configuration(name=\"config2\",\n                                         description=\"Versioned Config\",\n                                         path={\"v1\": \"path/to/config_v1.xml\",\n                                               \"v2\": \"path/to/config_v2.xml\"})\n\n    # See Options for more option types\n    repo.add_option(StringOption(name=\"option\", default=\"value\"))\n\n\ndef prepare(repo, options):\n    # Access repository options via the `options` resolver\n    if options[\"repo:option\"] == \"value\":\n        # Adds module files directly, or via globbing, all paths relative to this file\n        repo.add_modules(\"folder/module.lb\", repo.glob(\"*/*/module.lb\"))\n    # Searches recursively starting at basepath, adding any file that\n    # fnmatch(`modulefile`), while ignoring fnmatch(`ignore`) patterns\n    repo.add_modules_recursive(basepath=\".\", modulefile=\"*.lb\", ignore=\"*/ignore/patterns/*\")\n\n\n# The build step is optional\ndef build(env):\n    # Add the generated src/ folder to the header search path collector\n    env.collect(\"::include_path\", \"src/\")\n    # See module.build(env) for complete feature description.\n```\n\n\n### Modules\n\n*lbuild* calls these five functions for any module file:\n\n- `init(module)`: provides module name, parent and documentation.\n- `prepare(module, options)`: enables modules, adds options and submodules by\n  taking the repository options into consideration.\n- 
`validate(env)` (*optional*): validate your inputs before building anything.\n- `build(env)`: generate your library and add metadata to build log.\n- `post_build(env, buildlog)` (*optional*): access the build log after the build\n  step completed.\n\nModule files are provided with these additional global classes:\n\n- `Module`: Base class for generated modules.\n- `ValidateException`: Exception to be raised when the `validate(env)` step fails.\n\nNote that in contrast to a repository, modules must return a boolean from the\n`prepare(module, options)` function, which indicates that the module is available\nfor the repository option configuration. This allows for modules to \"share\" a\nname, but have completely different implementations.\n\nThe `validate(env)` step is used to validate the input for the build step,\nallowing for computations that can fail to raise a `ValidateException(\"reason\")`.\n*lbuild* will collect these exceptions for all modules and display them\ntogether before aborting the build. This step is performed before each build,\nand you cannot generate any files in this step, only read the repository's state.\nYou can manually call this step via the `lbuild validate` command.\n\nThe `build(env)` step is where the actual file generation happens. Here you can\ncopy and generate files and folders from Jinja2 templates with the substitutions\nof you choice and the configuration of the modules. Each file operation is\nappended to a global build log, which you can also explicitly add metadata to.\n\nThe `post_build(env)` step is meant for modules that need to generate\nfiles which receive information from all built modules. 
The typical use case
here is generating scripts for build systems, which need to know what
files were generated and every module's metadata.

```python
def init(module):
    # give your module a hierarchical name, the repository name is implicit
    module.name = "repo:name"
    module.name = ":name"      # same as this
    # You can set a module description here
    module.description = "Description"
    module.description = """Multiline"""
    module.description = FileReader("module.md")
    # modules can have their own formatters; they work the same as for repositories
    module.format_description = custom_format_description
    module.format_short_description = custom_format_short_description
    # Add Jinja2 filters for this module and all submodules
    # NOTE: the filter is namespaced with the repository! {{ 65 | repo.character }} -> "A"
    module.add_filter("repo.character", lambda number: chr(number))


def prepare(module, options):
    # Access repository options via the `options` resolver
    if options["repo:option"] == "value":
        # Returning False from this step disables this module
        return False

    # modules can depend on other modules
    module.depends("repo:module1", ":module2", ":module3:submodule", ...)

    # You can add more submodules in files
    module.add_submodule("folder/submodule.lb")

    # You can generate more modules here. This is useful if you have a lot of
    # very similar modules (like instances of hardware peripherals) and you
    # don't want to create a separate module file for each of them.
    class Instance(Module):
        def __init__(self, instance):
            self.instance = instance
        def init(self, module):
            module.name = str(self.instance)
        def prepare(self, module, options):
            pass
        def validate(self, env): # optional
            pass
        def build(self, env):
            pass
        def post_build(self, env): # optional
            pass

    # You can statically create and add these submodules
    for index in range(0, 5):
        module.add_submodule(Instance(index))
    # or make the creation dependent on a repository option
    for index in options["repo:instances"]:
        module.add_submodule(Instance(index))

    # See Options for more option types
    module.add_option(StringOption(name="option", default="world"))

    def common_operation(args):
        """
        You can share any function with other modules.
        This is useful to avoid duplicating code across module.lb files.
        """
        return args
    # See Queries for more query types
    module.add_query(Query(name="shared_function", function=common_operation))

    # You can collect information from active modules for use in any post_build step
    # See Collectors for more collector types
    module.add_collector(
        PathCollector(name="include_path", description="Global header search paths"))

    # Make this module available
    return True


# store data computed in the validate step for the build step
build_data = None
# The validation step is optional
def validate(env):
    # Perform your input validations here
    # Access all options
    repo_option = env["repo:option"]
    defaulted_option = env.get("repo:module:option", default="hello")
    # Use proper logging instead of print(), please
    # env.log.warning(...) and env.log.error(...) are also available
    env.log.debug("Repo option: '{}'".format(repo_option))

    # You can query for options
    if env.has_option("repo:module:option") or env.has_module("repo:module"):
        env.log.info("Module option: '{}'".format(env["repo:module:option"]))

    # Call shared functions from other modules with arguments
    shared_function = env.query("repo:module:shared_function")
    result = shared_function("argument")
    # Or just precomputed properties without arguments
    data = env.query("repo:module:shared_property")

    # You may also use incomplete queries; see Name Resolution
    env.has_module(":module") # instead of repo:module
    env.has_option("::option") # repo:module:option
    # And use fnmatch queries
    # matches any module starting with `mod` and option starting with `name`.
    env.has_option(":mod*:name*")
    env.has_query("::shared_*")
    env.has_collector("::collector")

    # Raise a ValidateException if something is wrong
    if defaulted_option + repo_option != "hello world":
        raise ValidateException("Options are invalid because ...")

    # If you do heavy computations here for validation, you can store the
    # data in a global variable and reuse it in the build step
    global build_data
    build_data = defaulted_option * 2


# The build step can do everything the validation step can,
# but now you can finally generate files
def build(env):
    # Set the output base path; this is relative to the lbuild invocation path
    env.outbasepath = "repo/module"

    # Copy single files
    env.copy("file.hpp")
    # Copy single files while renaming them
    env.copy("file.hpp", "cool_filename.hpp")
    # Relative paths are preserved!
    env.copy("../file.hpp") # copies to repo/file.hpp
    env.copy("../file.hpp", dest="file.hpp") # copies to repo/module/file.hpp

    # You can also copy entire folders
    env.copy("folder/", dest="renamed/")
    # and ignore specific RELATIVE files/folders
    env.copy("folder/", ignore=env.ignore_files("*.txt", "this_specific_file.hpp"))
    # or ignore specific ABSOLUTE paths
    env.copy("folder/", ignore=env.ignore_paths("*/folder/*.txt"))

    # You can also copy files out of a .zip or .tar archive
    env.extract("archive.zip") # everything inside the archive
    env.extract("archive.zip", dest="renamed/") # extract into folder
    # You can extract only parts of the archive, like a single file
    env.extract("archive.zip", src="filename.hpp", dest="renamed.hpp")
    # or a single folder somewhere in the archive
    env.extract("archive.zip", src="folder/subfolder", dest="renamed/folder")
    # of course, you can ignore files and folders inside the archive too
    env.extract("archive.zip", src="folder", dest="renamed", ignore=env.ignore_files("*.txt"))

    # Set the global Jinja2 substitutions dictionary
    env.substitutions = {
        "hello": "world",
        "instances": map(str, env["repo:instances"]),
        "build_data": build_data, # from the validation step
    }
    # and generate a file from a template
    env.template("template.hpp.in")
    # any `.in` postfix is automatically removed, unless you rename it
    for instance in env["repo:instances"]:
        env.template("template.hpp.in", "template_{}.hpp".format(instance))
    # You can explicitly add Jinja2 substitutions and filters
    env.template("template.hpp.in",
                 substitutions={"more": "subs"},
                 filters={"stringify": lambda i: str(i)})
    # Note: these filters are NOT namespaced with the repository name!

    # submodules are built first, so you can access the generated files
    headers = env.get_generated_local_files(lambda file: file.endswith(".hpp"))
    # and use this information for a new template.
    env.template("module_header.hpp.in", substitutions={"headers": headers})

    # Add values to a collector; all of these are type checked
    env.collect("::include_path", "repo/must_be_valid_path/", "repo/folder2/")


# The post-build step can do everything the build step can,
# but you can't add to the metadata anymore:
# - env.collect() unavailable
# You have access to the entire buildlog up to this point
def post_build(env):
    # The absolute path to the lbuild output directory
    outpath = env.buildlog.outpath

    # All modules that were built
    modules = env.buildlog.modules
    # All file generation operations that were done
    operations = env.buildlog.operations
    # All operations per module
    operations = env.buildlog.operations_per_module("repo:module")

    # iterate over all operations directly
    for operation in env.buildlog:
        # Get the module name that generated this file
        env.log.info("Module({}) generated the '{}' file"
                     .format(operation.module, operation.filename))
        # You can also get the filename relative to a subfolder in outpath
        env.relative_output(operation.filename, "subfolder/")
        # or as an absolute path
        env.real_output(operation.filename, "subfolder/")

    # get all include paths from all active modules
    include_paths = env.collector_values("::include_path")
```

### Options

*lbuild* options are mappings from strings to Python objects.
Each option must have a unique name within its parent repository or module.
If you do not provide a default value, the option is marked as **REQUIRED** and
the project cannot be built without it.

```python
def prepare(module, options):
    # Add the option to the module
    option = Option(...)
    module.add_option(option)

def build(env):
    # Check if an option exists
    exists = env.has_option(":module:option")
    # Access the option value, or use a default if the option doesn't exist
    value = env.get(":module:option", default="value")
    # Access the option value; this may raise an exception if the option doesn't exist
    value = env[":module:option"]
```

If your option requires a unique set of input values, you can tell *lbuild* to
wrap the option into a set using `module.add_set_option()`:

```python
def prepare(module, options):
    # Add an option, but allow a set of unique values as input and output
    module.add_set_option(option)

def build(env):
    # a unique set of option values is returned here
    for value in env[":module:option"]:
        print(value)
```

Option sets are declared as comma-separated strings, so that inheriting
configurations or option values passed via the CLI can overwrite these sets.
A `StringOption` cannot be wrapped into a set for this reason; however, it is
easy to split your string value in Python exactly how you want.

```xml
<!-- All comma-separated values are validated by the option -->
<option name=":module:set-option">value, 1, obj</option>
```

If you want to preserve duplicates, for example to count the number of inputs,
use a list option via `module.add_list_option()`:

```python
def prepare(module, options):
    # Add an option, but allow a list of values as input and output
    module.add_list_option(option)

def build(env):
    # a list of option values is returned here
    value_count = env[":module:option"].count("value")
```

Options can have a dependency handler which is called when the project
configuration is merged into the module options.
It is given the chosen
input value and can return a number of module dependencies.

```python
def add_option_dependencies(value):
    if special_condition(value):
        # return a single dependency
        return "repo:module"
    if other_special_condition(value):
        # return multiple dependencies
        return [":module1", ":module2"]
    # No additional dependencies
    return None
```


#### StringOption

This is the most generic option, allowing any string as input.
You may, however, provide your own validator that raises a `ValueError`
if the input string does not match your expectations.
You may also pass a transformation function to convert the option value.
The string is passed unmodified from the configuration to the module and the
dependency handler.

```python
def validate_string(string):
    if "please" not in string:
        raise ValueError("Input does not contain the magic word!")

def transform_string(string):
    return string.lower()

option = StringOption(name="option-name",
                      description="inline", # or FileReader("file.md")
                      default="default string",
                      validate=validate_string,
                      transform=transform_string,
                      dependencies=add_option_dependencies)
```


#### PathOption

This option operates on strings, but additionally validates them to be
syntactically valid paths, so that the filesystem accepts these strings
as valid arguments to path operations.
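As with `StringOption`, you can add your own syntactic checks on top via the `validate` parameter (the `PathOption` example below passes a `validate_path` callback without defining it). A minimal sketch of such a validator, where both the name and the rule it enforces are illustrative assumptions rather than part of lbuild's API:

```python
def validate_path(path):
    # Hypothetical project rule: only accept forward-slash paths.
    # As with StringOption validators, raise ValueError to reject the input.
    if "\\" in path:
        raise ValueError("Use forward slashes in paths!")
```

Returning normally accepts the value; raising `ValueError` rejects it.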
This option does not check whether the path
exists or whether it can be created, just whether the string is properly formatted.

An empty string is not a valid path, but it can be useful to allow an
empty string as an input value to encode a special case (like a "disable" value).
For that you may set `empty_ok=True`, which tells the path validation to accept empty strings.

By default, the path input is not modified and must be correctly interpreted in
the context of the module that uses it (usually relocated to the output path).
However, if you want to input an existing path, you should set `absolute=True`,
so that *lbuild* can relocate the *relative path* declared in the config files
to an absolute path, which is independent of the CWD.
This is particularly useful if you declare paths in config files that are not
located at the project root, like options inherited from multiple `lbuild.xml`.

```python
option = PathOption(name="option-name",
                    description="path",
                    default="path/to/folder/or/file",
                    empty_ok=False, # is an empty path considered valid?
                    absolute=False, # is the path relative to the config file?
                    validate=validate_path,
                    dependencies=add_option_dependencies)
```


#### BooleanOption

This option maps the strings `true`, `yes`, `1`, `enable` to `bool(True)` and
`false`, `no`, `0`, `disable` to `bool(False)`. You can extend this list with a
custom transform handler.
The dependency handler is passed this `bool` value.

```python
def transform_boolean(string):
    if string == 'y': return True
    if string == 'n': return False
    return string # hand over to the built-in conversion

option = BooleanOption(name="option-name",
                       description="boolean",
                       default=True,
                       transform=transform_boolean,
                       dependencies=add_option_dependencies)
```


#### NumericOption

This option allows a number from [-Inf, +Inf]. You can limit this to the
range [minimum, maximum].
Values can be specified directly as numbers or as strings, which are
interpreted using the `eval()` function, so that you can describe values as more
intuitive formulas when necessary.
You can also suffix numbers with the SI multipliers `K`, `M`, `G`, `T`, `Ki`,
`Mi`, `Gi`, and `Ti` to simplify formulas even further.
Note that you should use strings to specify precise floating point values such
as "1/3". The validation and dependency handlers are passed a numeric value.

```python
option = NumericOption(name="option-name",
                       description="numeric",
                       minimum=0,
                       maximum="5Mi*2",
                       default="1K",
                       validate=validate_number,
                       dependencies=add_option_dependencies)
```


#### EnumerationOption

This option maps a string to any generic Python object.
You can provide a list, set, tuple or range; the only limitation is that
the objects must be convertible to a string for unique identification.
If this is not possible, you can provide a dictionary with a manual mapping
from string to object.
The dependency handler is passed the string value.

```python
option = EnumerationOption(name="option-name",
                           description="enumeration",
                           # must be implicitly convertible to a string!
                           enumeration=["value", 1, obj],
                           # or use a dictionary explicitly
                           enumeration={"value": "value", "1": 1, "obj": obj},
                           default="1",
                           dependencies=add_option_dependencies)
```


### Queries

It is sometimes necessary to share code and data between *lbuild* modules,
which can be difficult when they are split across files and repositories.
Queries allow you to share functions and computed properties with other modules
using the global name resolution system.

```python
def prepare(module, options):
    # Add queries to the module
    query = Query(...)
    module.add_query(query)

def build(env):
    exists = env.has_query(":module:query")
    # Access the query value, or use a default if the query doesn't exist
    data = env.query(":module:query", default="value")
```

*Note that queries must be stateless (i.e. pure functions), since module build
order is not guaranteed. You must enforce this property yourself.*

You can discover all the available queries in your repository using
`lbuild discover --developer`.


#### Query

This wraps any callable object into a query.
By default the name is taken from
the object's name; however, you may override it.
Note that when using a lambda function, you must provide a name.
The description is taken from the object's docstring.

```python
def shared_function(args):
    """
    Describe what this query does.

    :param args: what does it need?
    :returns: what does it return?
    """
    return args

query = Query(function=shared_function)
query = Query(name="different_name",
              function=shared_function)
# a lambda has no name, so you must provide one explicitly
query = Query(name="lambda_name", function=lambda args: args)
```


#### EnvironmentQuery

This query's result is computed only once on demand and then cached.

The data must be returned from a factory function that gets passed the
environment of the first module to access this query.
The return value is then cached for all further accesses.
This allows you to lazily compute your shared properties only once, and only if
they are accessed by any module.

```python
def factory(env):
    """
    Describe what this query is about, but don't document the `env` argument.

    :returns: an immutable object
    """
    # You can read the build environment, but cannot modify it here
    value = env["repo:module:option"]
    # This return data is cached, so this function is only called once.
    return {"key": value}

query = EnvironmentQuery(name="name",
                         factory=factory)
```


### Collectors

The post-build step has access to the build log containing the list of modules
that were built and what files they generated.
However, these modules also need to pass additional data to the post-build steps,
so that this information can be computed locally.

*lbuild* allows each module to declare what metadata it wants using a collector,
which is given a name, description and optional limitations depending on its type.
In the build step, each module may add values to this collector, which the
post-build steps can then access.

```python
def prepare(module, options):
    # Add a collector to the module
    collector = Collector(...)
    module.add_collector(collector)

def build(env):
    exists = env.has_collector(":module:collector")
    # Add values to this collector
    env.collect(":module:collector", "value1", "value2", "value3")

def post_build(env):
    # Get all unique values from all operations
    unique_values = env.collector_values(":module:collector")
    # get all values from all operations, even duplicates!
    all_values = env.collector_values(":module:collector", unique=False)
```

Note that the ordering of values is preserved only relative to the order in which
they were added within a module, and only when accessing them non-uniquely!
The above example will preserve the order of `value1`, `value2` and `value3`
only if the values are accessed non-uniquely, and only relative to each other.

When you add values to a collector, the current operation is recorded, consisting
of the current module; but you may also explicitly set this to a set of
file operations:

```python
def build(env):
    operation = env.copy("file.hpp")
    # Add values to this collector for the file operation
    env.collect(":module:collector", "values", operations=operation)

    # The return value of a file operation is actually a set of operations
    operations = env.copy("folder1/")
    # So you can extend this set for multiple file operations
    operations |= env.copy("folder2/")
    # And then filter this set of operations
    operations = filter(lambda op: op.filename.endswith(".txt"), operations)
    # Only add this metadata to .txt files
    env.collect(":module:collector", "txt-file-values", operations=operations)

    # A file operation object has these properties:
    operation.module # full module name, this is always available
    operation.repository # repository name, always available
    operation.has_filename # Some operations are specific to files
    operation.filename # The generated filename relative to outpath

def post_build(env):
    # You can use these operation properties to filter the collector values
    txt_filter = lambda op: op.repository == "repo" and op.filename.endswith(".txt")
    unique_txt_values = env.collector_values(":module:collector", filterfunc=txt_filter)
    # May contain duplicate values!
    all_txt_values = env.collector_values(":module:collector", filterfunc=txt_filter, unique=False)
```

If you have very special requirements for the ordering of values (for example
when collecting compile flags), consider iterating over the collector's items
manually, and possibly de-duplicating and reordering the values yourself.

```python
def post_build(env):
    # Get the collector; may return None if the collector does not exist!
    collector = env.collector(":module:collector")
    if collector is not None:
        for operation, values in collector.items():
            # values is a list and may contain duplicates
            print(operation.module, values)
            if operation.has_filename: # not all operations have filenames!
                print(operation.filename)
```

Note that collector values that were added by a module without explicit
operations only have module names, not filenames!

Collectors are implemented using the same type-safe mechanisms as
[Options](#Options); the only differences are the lack of dependency handlers
and default values, since you can add default values in the module's build step.

You may add collector values via the project configuration.
However, since these
collector values cannot be overwritten by inheriting configurations, use this with care.

```xml
<library>
  <collectors>
    <collect name="repo:collector_name">value</collect>
    <collect name="repo:collector_name">value2</collect>
  </collectors>
</library>
```

You can discover all the available collectors in your repository using
`lbuild discover --developer`.


#### CallableCollector

This collector allows you to collect callable objects that the post-build step
can execute. This can be useful for providing specializations to the post-build
module without it needing to know how they work.

```python
collector = CallableCollector(name="collector-name",
                              description="callable")
```


#### StringCollector

See [StringOption](#StringOption) for documentation.

```python
collector = StringCollector(name="collector-name",
                            description="string",
                            validate=validate_function)
```


#### PathCollector

See [PathOption](#PathOption) for documentation.

```python
collector = PathCollector(name="collector-name",
                          description="path",
                          empty_ok=False,
                          absolute=False)
```


#### BooleanCollector

See [BooleanOption](#BooleanOption) for documentation.

```python
collector = BooleanCollector(name="collector-name",
                             description="boolean")
```


#### NumericCollector

See [NumericOption](#NumericOption) for documentation.

```python
collector = NumericCollector(name="collector-name",
                             description="numeric",
                             minimum=0,
                             maximum=100)
```


#### EnumerationCollector

See [EnumerationOption](#EnumerationOption) for documentation.

```python
collector = EnumerationCollector(name="collector-name",
                                 description="enumeration",
                                 enumeration=enumeration)
```


### Aliases

*lbuild* aliases are mappings from one lbuild node to another. They are useful
for gracefully dealing with renaming or moving nodes in your lbuild module tree.
Aliases will print a warning showing the alias description when accessed. Each
alias must have a unique name within its parent repository or module.

Aliases can be used for any type of node that you want forwarded. You can also
add aliases that do not have a destination and will raise an exception with the
alias description. This allows you to remove lbuild nodes while providing
details for a workaround.

```python
def prepare(module, options):
    # Move an option into this module
    module.add_option(Option(name="option"))
    # Forward the old option to the new option
    module.add_alias(Alias(name="option-alias",
                           destination="option",
                           description="Renamed for clarity."))
    # Instead of silently failing, you can provide a detailed description
    # about why the node was removed and what the workaround is.
    module.add_alias(Alias(name="removed-alias",
                           description="Removed. Workaround: ..."))
    # You can alias any type to any other node.
    module.add_alias(Alias(name="submodule-alias",
                           destination=":other-module:submodule",
                           description="Removed. Workaround: ..."))

def build(env):
    # Will show a warning (once) that the alias has been moved
    exists = env.has_option(":module:option-alias")
    # Accesses :module:option instead
    value = env[":module:option-alias"]
    # This will raise an exception with the alias description
    value = env[":module:removed-alias"]
    # will check for :other-module:submodule instead
    exists = env.has_module(":module:submodule-alias")
```


### Jinja2 Configuration

*lbuild* uses the [Jinja2 template engine][jinja2] with the following global
configuration:

- Line statements start with `%% statement`.
- Line comments start with `%# comment`.
- Undefined variables throw an exception instead of being ignored.
- Global extensions: `jinja2.ext.do`.
- Global substitutions are:
  + `time`: `strftime("%d %b %Y, %H:%M:%S")`
  + `options`: an option resolver in the context of the current module.


### Name Resolution

*lbuild* manages repositories, modules and options in a tree structure and
serializes their identification into unique strings using `:` as hierarchy delimiter.
Any identifier provided via the command line, configuration, repository or
module files uses the same resolver, which allows using *partially-qualified*
identifiers. In addition, globbing for multiple identifiers using fnmatch
semantics is supported.

The following rules for resolving identifiers apply:

1. A fully-qualified identifier specifies all parts: `repo:module:option`.
2. A partially-qualified identifier adds fnmatch wildcards: `*:m.dule:opt*`.
3. `*` wildcards for entire hierarchies can be omitted: `::option`.
4. A special wildcard is `:**`, which globs for everything below the current
   hierarchy level: `repo:**` selects all in `repo`, `repo:module:**` all in
   `repo:module`, etc.
5. Wildcards are resolved in reverse hierarchical order.
   Therefore, `::option`
   may be unique within the context of `:module`, but not within the entire
   project.
6. For accessing direct children, you may specify their name without any
   delimiters: `option` within the context of `:module` will resolve to
   `:module:option`.

Partial identifiers were introduced to reduce verbosity and aid refactoring;
it is therefore recommended to:

1. Omit the repository name when accessing modules and options within the same
   repository.
2. Access a module's options by their name directly.


### Execution order

*lbuild* executes in this order:

1. `repository:init()`
2. Create repository options
3. `repository:prepare(repo-options)`
4. Find all modules in repositories
5. `module:init()`
6. `module:prepare(repo-options)`
7. Create module options
8. Resolve module dependencies
9. `module:validate(env)`: submodules-first, *optional*
10. `module:build(env)`: submodules-first
11. `repo:build(env)`: *optional*
12. `module:post_build(env)`: submodules-first, *optional*


[modm]: https://modm.io/how-modm-works
[taproot]: https://github.com/uw-advanced-robotics/taproot
[outpost]: https://github.com/DLR-RY/outpost-core
[jinja2]: http://jinja.pocoo.org
[python]: https://www.python.org
[pypi]: https://pypi.org/project/lbuild
[salkinium]: https://github.com/salkinium
[travis]: https://travis-ci.org/modm-io/lbuild
[travis-svg]: https://travis-ci.org/modm-io/lbuild.svg?branch=develop
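The submodules-first ordering used in steps 9–12 above can be illustrated with a small, self-contained Python sketch. The `Node` class and driver loop here are plain stand-ins for illustration only, not lbuild's actual implementation:

```python
class Node:
    """A hypothetical stand-in for an lbuild module with submodules."""
    def __init__(self, name, submodules=()):
        self.name = name
        self.submodules = list(submodules)

def run_submodules_first(node, step, trace):
    # Submodules are always processed before their parent module.
    for sub in node.submodules:
        run_submodules_first(sub, step, trace)
    trace.append("{}:{}".format(node.name, step))

top = Node("repo:module", [Node("repo:module:sub")])
trace = []
for step in ("validate", "build", "post_build"):
    run_submodules_first(top, step, trace)
print(trace)
# -> ['repo:module:sub:validate', 'repo:module:validate',
#     'repo:module:sub:build', 'repo:module:build',
#     'repo:module:sub:post_build', 'repo:module:post_build']
```

Note that each step runs over the whole tree before the next step starts, matching the ordering above: all validation completes before any file is generated.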