qlever

- Name: qlever
- Version: 0.5.0
- Summary: Script for using the QLever SPARQL engine.
- Author email: Hannah Bast <bast@cs.uni-freiburg.de>
- GitHub: https://github.com/ad-freiburg/qlever
- Upload time: 2024-04-14 12:45:06
- Requires Python: >=3.8
- License: Apache-2.0
- Keywords: sparql, rdf, knowledge graphs, triple store
# QLever

QLever is a very fast SPARQL engine, much faster than most existing engines. It
can handle graphs with more than a hundred billion triples on a single machine
with moderate resources. See https://qlever.cs.uni-freiburg.de for more
information and for many public SPARQL endpoints that use QLever.

This project provides a Python script that controls everything QLever does, in
particular creating SPARQL endpoints for arbitrary RDF datasets. It is designed
to be easy to use and self-explanatory. In particular, the tool provides
context-sensitive autocompletion for all its commands and options. If you use a
container system (like Docker or Podman), you don't even have to download any
QLever code; the script will download the required image for you.

NOTE: A major update on 24.03.2024 changed some of the Qleverfile variables and
command-line options (all for the better, of course). If you encounter any
problems, please open an issue at
https://github.com/ad-freiburg/qlever-control/issues.

# Installation

Simply run `pip install qlever` and make sure that the directory where `pip`
installs the package is in your `PATH`. Typically, `pip` warns you when that is
not the case and tells you what to do.

# Usage

Create an empty directory with a name corresponding to the dataset you want to
work with; for the following example, take `olympics`. Go to that directory and
run the following commands. After the first call, `qlever` will tell you how to
activate autocompletion for all its commands and options (it's very easy, but
`pip` cannot do that automatically).

```
qlever setup-config olympics   # Get Qleverfile (config file) for this dataset
qlever get-data                # Download the dataset
qlever index                   # Build index data structures for this dataset
qlever start                   # Start a QLever server using that index
qlever example-queries         # Launch some example queries
qlever ui                      # Launch the QLever UI
```

This will create a SPARQL endpoint for the [120 Years of
Olympics](https://github.com/wallscope/olympics-rdf) dataset. It is a great
dataset for getting started because it is small, but not trivial (around 2
million triples), and the downloading and indexing should only take a few
seconds.
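
For orientation, a Qleverfile is a small config file with sections for getting
the data, building the index, and running the server. The sketch below is purely
illustrative — the section and variable names are assumptions, not the verbatim
preconfigured file; `qlever setup-config olympics` fetches the real one.

```
# Illustrative Qleverfile sketch (names are assumptions, not verbatim)
[data]
NAME         = olympics          # name of the dataset / index

[index]
INPUT_FILES  = olympics.nt       # RDF input file(s) to index

[server]
PORT         = 7019              # port the SPARQL endpoint listens on

[runtime]
SYSTEM       = docker            # run QLever natively or in a container
```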

Each command also shows you the command line it uses. That way you can learn, on
the side, how QLever works internally. If you just want to see the command line
for a particular command, without executing it, append `--show` like this:

```
qlever index --show
```

There are many more commands and options, see `qlever --help` for general help,
`qlever <command> --help` for help on a specific command, or just the
autocompletion.

# For developers

The (Python) code for the script is in the `*.py` files in `src/qlever`. The
preconfigured Qleverfiles are in `src/qlever/Qleverfiles`.

If you want to make changes to the script, or add new commands, proceed as follows:

```
git clone https://github.com/ad-freiburg/qlever-control
cd qlever-control
pip install -e .
```

Then you can use `qlever` just as if you had installed it via `pip install
qlever`. Note that you do not have to rerun `pip install -e .` when you modify
any of the `*.py` files, not even when you add new commands in
`src/qlever/commands`: the executable created by `pip` simply refers to the
files in your working copy.

If you have bug fixes or new useful features or commands, please open a pull
request. If you have questions or suggestions, please open an issue.
