onprem

Name: onprem
Version: 0.17.1
Home page: https://github.com/amaiya/onprem
Summary: A tool for running on-premises large language models on non-public data
Upload time: 2025-07-30 01:04:24
Author: Arun S. Maiya
Requires Python: >=3.9
License: Apache Software License 2.0
Keywords: nbdev, jupyter, notebook, python
# OnPrem.LLM


<!-- WARNING: THIS FILE WAS AUTOGENERATED! DO NOT EDIT! -->

> A privacy-conscious toolkit for document intelligence: local by
> default, cloud-capable

**[OnPrem.LLM](https://github.com/amaiya/onprem)** (or "OnPrem" for
short) is a Python-based toolkit for applying large language models
(LLMs) to sensitive, non-public data in offline or restricted
environments. Inspired largely by the
[privateGPT](https://github.com/imartinez/privateGPT) project,
**OnPrem.LLM** is designed for fully local execution, but also supports
integration with a wide range of cloud LLM providers (e.g., OpenAI,
Anthropic).

The full documentation is [here](https://amaiya.github.io/onprem/).

<!--A Google Colab demo of installing and using **OnPrem.LLM** is [here](https://colab.research.google.com/drive/1LVeacsQ9dmE1BVzwR3eTLukpeRIMmUqi?usp=sharing).
-->

**Quick Start**

``` python
# install
!pip install onprem[chroma]
from onprem import LLM, utils

# local LLM with Ollama as backend
!ollama pull llama3.2
llm = LLM('ollama/llama3.2')

# basic prompting
result = llm.prompt('Give me a short one sentence definition of an LLM.')

# RAG
utils.download('https://www.arxiv.org/pdf/2505.07672', '/tmp/my_documents/paper.pdf')
llm.ingest('/tmp/my_documents')
result = llm.ask('What is OnPrem.LLM?')

# switch to cloud LLM using Anthropic as backend
llm = LLM("anthropic/claude-3-7-sonnet-latest")

# structured outputs
from pydantic import BaseModel, Field
class MeasuredQuantity(BaseModel):
    value: str = Field(description="numerical value")
    unit: str = Field(description="unit of measurement")
structured_output = llm.pydantic_prompt('He was going 35 mph.', pydantic_model=MeasuredQuantity)
print(structured_output.value) # 35
print(structured_output.unit)  # mph
```

Many LLM backends are supported (e.g.,
[llama_cpp](https://github.com/abetlen/llama-cpp-python),
[transformers](https://github.com/huggingface/transformers),
[Ollama](https://ollama.com/),
[vLLM](https://github.com/vllm-project/vllm),
[OpenAI](https://platform.openai.com/docs/models),
[Anthropic](https://docs.anthropic.com/en/docs/about-claude/models/overview),
etc.).

------------------------------------------------------------------------

<center>
<p align="center">
<img src="https://raw.githubusercontent.com/amaiya/onprem/refs/heads/master/images/onprem.png" border="0" alt="onprem.llm" width="200"/>
</p>
</center>
<center>
<p align="center">

**[Install](https://amaiya.github.io/onprem/#install) \|
[Usage](https://amaiya.github.io/onprem/#how-to-use) \|
[Examples](https://amaiya.github.io/onprem/#examples) \| [Web
UI](https://amaiya.github.io/onprem/webapp.html) \|
[FAQ](https://amaiya.github.io/onprem/#faq) \| [How to
Cite](https://amaiya.github.io/onprem/#how-to-cite)**

</p>
</center>

*Latest News* 🔥

- \[2025/07\] v0.17.0 released and now allows you to connect directly to
  SharePoint for search and RAG. See the [example notebook on vector
  stores](https://amaiya.github.io/onprem/examples_vectorstore_factory.html#rag-with-sharepoint-documents)
  for more information.
- \[2025/07\] v0.16.0 released and now includes out-of-the-box support
  for **Elasticsearch** as a vector store for RAG and semantic search in
  addition to other vector store backends. See the [example notebook on
  vector
  stores](https://amaiya.github.io/onprem/examples_vectorstore_factory.html)
  for more information.
- \[2025/06\] v0.15.0 released and now includes support for solving
  tasks with **agents**. See the [example notebook on
  agents](https://amaiya.github.io/onprem/examples_agent.html) for more
  information.
- \[2025/05\] v0.14.0 released and now includes a point-and-click
  interface for **Document Analysis**: applying prompts to individual
  passages in uploaded documents. See the [Web UI
  documentation](https://amaiya.github.io/onprem/webapp.html) for more
  information.
- \[2025/04\] v0.13.0 released and now includes streamlined support for
  Ollama and many cloud LLMs via special URLs (e.g.,
  `model_url="ollama://llama3.2"`,
  `model_url="anthropic://claude-3-7-sonnet-latest"`). See the [cheat
  sheet](https://amaiya.github.io/onprem/#how-to-use) for examples.
  (**Note: Please use `onprem>=0.13.1` due to a bug in v0.13.0.**)
- \[2025/04\] v0.12.0 released and now includes a re-vamped and improved
  Web UI with support for interactive chatting, document
  question-answering (RAG), and document search (both keyword searches
  and semantic searches). See the [Web UI
  documentation](https://amaiya.github.io/onprem/webapp.html) for more
  information.

------------------------------------------------------------------------

## Install

Once you have [installed
PyTorch](https://pytorch.org/get-started/locally/), you can install
**OnPrem.LLM** with the following steps:

1.  Install **llama-cpp-python** (*optional* - see below):
    - **CPU:** `pip install llama-cpp-python` ([extra
      steps](https://github.com/amaiya/onprem/blob/master/MSWindows.md)
      required for Microsoft Windows)
    - **GPU**: Follow [instructions
      below](https://amaiya.github.io/onprem/#on-gpu-accelerated-inference).
2.  Install **OnPrem.LLM**: `pip install onprem`

For RAG using the [default dense
vectorstore](https://amaiya.github.io/onprem/#step-1-ingest-the-documents-into-a-vector-database),
please also install chroma packages: `pip install onprem[chroma]`.

**Note:** Installing **llama-cpp-python** is *optional* if any of the
following is true:

- You use Hugging Face Transformers (instead of llama-cpp-python) as the
  LLM backend by supplying the `model_id` parameter when instantiating
  an LLM, as [shown
  here](https://amaiya.github.io/onprem/#using-hugging-face-transformers-instead-of-llama.cpp).
- You are using **OnPrem.LLM** with an LLM being served through an
  [external REST API](#connecting-to-llms-served-through-rest-apis)
  (e.g., Ollama, vLLM, OpenLLM).
- You are using **OnPrem.LLM** with a cloud LLM (more information
  below).

### On GPU-Accelerated Inference With `llama-cpp-python`

When installing **llama-cpp-python** with
`pip install llama-cpp-python`, the LLM will run on your **CPU**. To
generate answers much faster, you can run the LLM on your **GPU** by
building **llama-cpp-python** based on your operating system.

- **Linux**:
  `CMAKE_ARGS="-DGGML_CUDA=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir`
- **Mac**: `CMAKE_ARGS="-DGGML_METAL=on" pip install llama-cpp-python`
- **Windows 11**: Follow the instructions
  [here](https://github.com/amaiya/onprem/blob/master/MSWindows.md#using-the-system-python-in-windows-11s).
- **Windows Subsystem for Linux (WSL2)**: Follow the instructions
  [here](https://github.com/amaiya/onprem/blob/master/MSWindows.md#using-wsl2-with-gpu-acceleration).

For Linux and Windows, you will need [an up-to-date NVIDIA
driver](https://www.nvidia.com/en-us/drivers/) along with the [CUDA
toolkit](https://developer.nvidia.com/cuda-downloads) installed before
running the installation commands above.

After following the instructions above, supply the `n_gpu_layers=-1`
parameter when instantiating an LLM to use your GPU for fast inference:

``` python
llm = LLM(n_gpu_layers=-1, ...)
```

Quantized models with 8B parameters and below can typically run on GPUs
with as little as 6GB of VRAM. If a model does not fit on your GPU
(e.g., you get a "CUDA Error: Out-of-Memory" error), you can offload a
subset of layers to the GPU by experimenting with different values for
the `n_gpu_layers` parameter (e.g., `n_gpu_layers=20`). Setting
`n_gpu_layers=-1`, as shown above, offloads all layers to the GPU.
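
For example, a partial offload looks like this (20 is just a starting
point to tune for your hardware):

``` python
from onprem import LLM

# Offload only the first 20 transformer layers to the GPU; the rest stay on the CPU.
llm = LLM(n_gpu_layers=20)
```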

See [the FAQ](https://amaiya.github.io/onprem/#faq) for extra tips if
you experience issues with
[llama-cpp-python](https://pypi.org/project/llama-cpp-python/)
installation.

## How to Use

### Setup

``` python
from onprem import LLM

llm = LLM(verbose=False) # default model and backend are used
```

#### Cheat Sheet

*Local Models:* A number of different local LLM backends are supported.

- **Llama-cpp**: `llm = LLM(default_model="llama", n_gpu_layers=-1)`

- **Llama-cpp with selected GGUF model via URL**:

  ``` python
   # prompt templates are required for user-supplied GGUF models (see FAQ)
   llm = LLM(model_url='https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_M.gguf', 
             prompt_template= "<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>", n_gpu_layers=-1)
  ```

- **Llama-cpp with selected GGUF model via file path**:

  ``` python
   # prompt templates are required for user-supplied GGUF models (see FAQ)
   llm = LLM(model_url='zephyr-7b-beta.Q4_K_M.gguf', 
             model_download_path='/path/to/folder/to/where/you/downloaded/model',
             prompt_template= "<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>", n_gpu_layers=-1)
  ```

- **Hugging Face Transformers**:
  `llm = LLM(model_id='Qwen/Qwen2.5-0.5B-Instruct', device='cuda')`

- **Ollama**: `llm = LLM(model_url="ollama://llama3.2", api_key='na')`

- **Also Ollama**:
  `llm = LLM(model_url="ollama/llama3.2", api_key='na')`

- **Also Ollama**:
  `llm = LLM(model_url='http://localhost:11434/v1', api_key='na', model='llama3.2')`

- **vLLM**:
  `llm = LLM(model_url='http://localhost:8000/v1', api_key='na', model='Qwen/Qwen2.5-0.5B-Instruct')`

*Cloud Models:* Despite the focus on local LLMs, cloud LLMs are also
supported:

- **Anthropic Claude**:
  `llm = LLM(model_url="anthropic://claude-3-7-sonnet-latest")`
- **Also Anthropic Claude**:
  `llm = LLM(model_url="anthropic/claude-3-7-sonnet-latest")`
- **OpenAI GPT-4o**: `llm = LLM(model_url="openai://gpt-4o")`
- **Also OpenAI GPT-4o**: `llm = LLM(model_url="openai/gpt-4o")`

The instantiations above are described in more detail below.

#### Specifying the Local Model to Use

The default LLM backend is
[llama-cpp-python](https://github.com/abetlen/llama-cpp-python), and the
default model is currently a 7B-parameter model called
**Zephyr-7B-beta**, which is automatically downloaded and used. The two
other default models are `llama` and `mistral`. For instance, if
`default_model='llama'` is supplied, then a **Llama-3.1-8B-Instruct**
model is automatically downloaded and used:

``` python
# Llama 3.1 is downloaded here and the correct prompt template for Llama-3.1 is automatically configured and used
llm = LLM(default_model='llama')
```

*Choosing Your Own Models:* Of course, you can also easily supply the
URL or path to an LLM of your choosing to
[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) (see the
[FAQ](https://amaiya.github.io/onprem/#faq) for an example).

*Supplying Extra Parameters:* Any extra parameters supplied to
[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) are forwarded
directly to
[llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the
default LLM backend.

#### Changing the Default LLM Backend

If `default_engine="transformers"` is supplied to
[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm), Hugging Face
[transformers](https://github.com/huggingface/transformers) is used as
the LLM backend. Extra parameters to
[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) (e.g.,
`device='cuda'`) are forwarded directly to `transformers.pipeline`. If
supplying a `model_id` parameter, the default LLM backend is
automatically changed to Hugging Face
[transformers](https://github.com/huggingface/transformers).

``` python
# Llama-3.1 model quantized using AWQ is downloaded and run with Hugging Face transformers (requires GPU)
llm = LLM(default_model='llama', default_engine='transformers')

# Using a custom model with Hugging Face Transformers
llm = LLM(model_id='Qwen/Qwen2.5-0.5B-Instruct', device_map='cpu')
```

See
[here](https://amaiya.github.io/onprem/#using-hugging-face-transformers-instead-of-llama.cpp)
for more information about using Hugging Face
[transformers](https://github.com/huggingface/transformers) as the LLM
backend.

You can also connect to **Ollama**, local LLM APIs (e.g., vLLM), and
cloud LLMs.

``` python
# connecting to an LLM served by Ollama
llm = LLM(model_url='ollama/llama3.2')

# connecting to an LLM served through vLLM (set API key as needed)
llm = LLM(model_url='http://localhost:8000/v1', api_key='token-abc123', model='Qwen/Qwen2.5-0.5B-Instruct')

# connecting to a cloud-backed LLM (e.g., OpenAI, Anthropic).
llm = LLM(model_url="openai/gpt-4o-mini")  # OpenAI
llm = LLM(model_url="anthropic/claude-3-7-sonnet-20250219") # Anthropic
```

**OnPrem.LLM** supports any provider and model supported by the
[LiteLLM](https://github.com/BerriAI/litellm) package.
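
Since the cloud examples above already use LiteLLM-style
`provider/model` strings, other LiteLLM-supported providers should
follow the same pattern. A hedged sketch (the provider and model name
below are illustrative assumptions, and the provider's API key must be
set, e.g., as an environment variable):

``` python
from onprem import LLM

# Illustrative only: any LiteLLM-supported provider/model string should work here.
llm = LLM(model_url='mistral/mistral-small-latest')
```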

See
[here](https://amaiya.github.io/onprem/#connecting-to-llms-served-through-rest-apis)
for more information on *local* LLM APIs.

More information on using OpenAI models specifically with **OnPrem.LLM**
is [here](https://amaiya.github.io/onprem/examples_openai.html).

#### Supplying Parameters to the LLM Backend

Extra parameters supplied to
[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) and
[`LLM.prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.prompt)
are passed directly to the LLM backend. Parameter names will vary
depending on the backend you chose.

For instance, with the default llama-cpp backend, the default context
window size (`n_ctx`) is set to 3900 and the default output size
(`max_tokens`) is set to 512. Both are configurable parameters to
[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm). Increase if
you have larger prompts or need longer outputs. Other parameters (e.g.,
`api_key`, `device_map`, etc.) can be supplied directly to
[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) and will be
routed to the LLM backend or API (e.g., llama-cpp-python, Hugging Face
transformers, vLLM, OpenAI, etc.). The `max_tokens` parameter can also
be adjusted on-the-fly by supplying it to
[`LLM.prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.prompt).

On the other hand, for Ollama models, context window and output size are
controlled by `num_ctx` and `num_predict`, respectively.

With Hugging Face transformers, setting the context window size is
not needed, but the output size is controlled by the `max_new_tokens`
parameter to
[`LLM.prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.prompt).
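
A short sketch pulling these together (model names and values are
illustrative; each extra argument is simply forwarded to the chosen
backend as described above):

``` python
from onprem import LLM

# llama-cpp backend: n_ctx (context window) and max_tokens (output length)
llm = LLM(default_model='llama', n_ctx=8192, max_tokens=1024)

# Ollama backend: num_ctx and num_predict play the corresponding roles
llm = LLM(model_url='ollama/llama3.2', num_ctx=8192, num_predict=1024)

# Hugging Face transformers backend: output length is set per call via max_new_tokens
llm = LLM(model_id='Qwen/Qwen2.5-0.5B-Instruct')
result = llm.prompt('Briefly define retrieval augmented generation.', max_new_tokens=256)
```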

### Send Prompts to the LLM to Solve Problems

This is an example of few-shot prompting, where we provide an example of
what we want the LLM to do.

``` python
prompt = """Extract the names of people in the supplied sentences.
Separate names with commas and place on a single line.

# Example 1:
Sentence: James Gandolfini and Paul Newman were great actors.
People:
James Gandolfini, Paul Newman

# Example 2:
Sentence:
I like Cillian Murphy's acting. Florence Pugh is great, too.
People:"""

saved_output = llm.prompt(prompt, stop=['\n\n'])
```


    Cillian Murphy, Florence Pugh

**Additional prompt examples are [shown
here](https://amaiya.github.io/onprem/examples.html).**

### Talk to Your Documents

Answers are generated from the content of your documents (i.e.,
[retrieval augmented generation](https://arxiv.org/abs/2005.11401) or
RAG). Here, we will use [GPU
offloading](https://amaiya.github.io/onprem/#speeding-up-inference-using-a-gpu)
to speed up answer generation using the default model. However, the
Zephyr-7B model may perform even better and respond faster; it is the
model used in our **[RAG example
notebook](https://amaiya.github.io/onprem/examples_rag.html)**.

``` python
from onprem import LLM

llm = LLM(n_gpu_layers=-1, store_type='sparse', verbose=False)
```

    llama_new_context_with_model: n_ctx_per_seq (3904) < n_ctx_train (32768) -- the full capacity of the model will not be utilized

#### Step 1: Ingest the Documents into a Vector Database

As of v0.10.0, you have the option of storing documents in either a
dense vector store (i.e., Chroma) or a sparse vector store (i.e., a
built-in keyword search index). Sparse vector stores sacrifice a small
amount of inference speed for significant improvements in ingestion
speed (useful for larger document sets) and also assume answer sources
will include at least one word from the question. To select the store
type, supply either `store_type="dense"` or `store_type="sparse"` when
creating the [`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm).
As you can see above, we use a sparse vector store here.

``` python
llm.ingest("./tests/sample_data")
```

    Creating new vectorstore at /home/amaiya/onprem_data/vectordb/sparse
    Loading documents from ./tests/sample_data
    Split into 354 chunks of text (max. 500 chars each for text; max. 2000 chars for tables)
    Ingestion complete! You can now query your documents using the LLM.ask or LLM.chat methods

    Loading new documents: 100%|██████████| 6/6 [00:09<00:00,  1.51s/it]
    Processing and chunking 43 new documents: 100%|██████████| 1/1 [00:00<00:00, 116.11it/s]
    100%|██████████| 354/354 [00:00<00:00, 2548.70it/s]

#### Step 2: Answer Questions About the Documents

``` python
question = """What is  ktrain?"""
result = llm.ask(question)
```

     ktrain is a low-code machine learning platform. It provides out-of-the-box support for training models on various types of data such as text, vision, graph, and tabular.

The sources used by the model to generate the answer are stored in
`result['source_documents']`:

``` python
print("\nSources:\n")
for i, document in enumerate(result["source_documents"]):
    print(f"\n{i+1}.> " + document.metadata["source"] + ":")
    print(document.page_content)
```


    Sources:


    1.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:
    transferred to, and executed on new data in a production environment.
    ktrain is a Python library for machine learning with the goal of presenting a simple,
    unified interface to easily perform the above steps regardless of the type of data (e.g., text
    vs. images vs. graphs). Moreover, each of the three steps above can be accomplished in
    Β©2022 Arun S. Maiya.
    License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are

    2.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:
    custom models and data formats, as well. Inspired by other low-code (and no-code) open-
    source ML libraries such as fastai (Howard and Gugger, 2020) and ludwig (Molino et al.,
    2019), ktrain is intended to help further democratize machine learning by enabling begin-
    ners and domain experts with minimal programming or data science experience to build
    sophisticated machine learning models with minimal coding. It is also a useful toolbox for

    3.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:
    Apache license, and available on GitHub at: https://github.com/amaiya/ktrain.
    2. Building Models
    Supervised learning tasks in ktrain follow a standard, easy-to-use template.
    STEP 1: Load and Preprocess Data. This step involves loading data from different
    sources and preprocessing it in a way that is expected by the model. In the case of text,
    this may involve language-specific preprocessing (e.g., tokenization). In the case of images,

    4.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:
    AutoKeras (Jin et al., 2019) and AutoGluon (Erickson et al., 2020) lack some key "pre-
    canned" features in ktrain, which has the strongest support for natural language processing
    and graph-based data. Support for additional features is planned for the future.
    5. Conclusion
    This work presented ktrain, a low-code platform for machine learning. ktrain currently in-
    cludes out-of-the-box support for training models on text, vision, graph, and tabular

### Extract Text from Documents

The
[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)
function can extract text from a range of different document formats
(e.g., PDFs, Microsoft PowerPoint, Microsoft Word, etc.). It is
automatically invoked when calling
[`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest).
Extracted text is represented as LangChain `Document` objects, where
`Document.page_content` stores the extracted text and
`Document.metadata` stores any extracted document metadata.

For PDFs, in particular, a number of different options are available
depending on your use case.

**Fast PDF Extraction (default)**

- **Pro:** Fast
- **Con:** Does not infer/retain structure of tables in PDF documents

``` python
from onprem.ingest import load_single_document

docs = load_single_document('tests/sample_data/ktrain_paper/ktrain_paper.pdf')
docs[0].metadata
```

    {'source': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf',
     'file_path': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf',
     'page': 0,
     'total_pages': 9,
     'format': 'PDF 1.4',
     'title': '',
     'author': '',
     'subject': '',
     'keywords': '',
     'creator': 'LaTeX with hyperref',
     'producer': 'dvips + GPL Ghostscript GIT PRERELEASE 9.22',
     'creationDate': "D:20220406214054-04'00'",
     'modDate': "D:20220406214054-04'00'",
     'trapped': ''}

**Automatic OCR of PDFs**

- **Pro:** Automatically extracts text from scanned PDFs
- **Con:** Slow

The
[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)
function will automatically OCR PDFs that require it (i.e., PDFs that
are scanned hard-copies of documents). If a document is OCR’ed during
extraction, the `metadata['ocr']` field will be populated with `True`.

``` python
docs = load_single_document('tests/sample_data/ocr_document/lynn1975.pdf')
docs[0].metadata
```

    {'source': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/4/lynn1975.pdf',
     'ocr': True}

**Markdown Conversion in PDFs**

- **Pro**: Better chunking for QA
- **Con**: Slower than default PDF extraction

The
[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)
function can convert PDFs to Markdown instead of plain text by supplying
the `pdf_markdown=True` as an argument:

``` python
docs = load_single_document('your_pdf_document.pdf', 
                            pdf_markdown=True)
```

Converting to Markdown can facilitate downstream tasks like
question-answering. For instance, when supplying `pdf_markdown=True` to
[`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest),
documents are chunked in a Markdown-aware fashion (e.g., the abstract of
a research paper tends to be kept together into a single chunk instead
of being split up). Note that Markdown will not be extracted if the
document requires OCR.
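
As noted above, the same flag can be supplied to
[`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest)
to get Markdown-aware chunking during ingestion (assuming an `LLM`
instance `llm` as created earlier; the folder path is a placeholder):

``` python
# Markdown-aware chunking during ingestion
llm.ingest('/path/to/your/documents', pdf_markdown=True)
```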

**Inferring Table Structure in PDFs**

- **Pro**: Makes it easier for LLMs to analyze information in tables
- **Con**: Slower than default PDF extraction

When supplying `infer_table_structure=True` to either
[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)
or
[`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest),
tables are inferred and extracted from PDFs using a TableTransformer
model. Tables are represented as **Markdown** (or **HTML** if Markdown
conversion is not possible).

``` python
docs = load_single_document('your_pdf_document.pdf', 
                            infer_table_structure=True)
```

**Parsing Extracted Text Into Sentences or Paragraphs**

For some analyses (e.g., using prompts for information extraction), it
may be useful to parse the text extracted from documents into individual
sentences or paragraphs. This can be accomplished using the
[`segment`](https://amaiya.github.io/onprem/utils.html#segment)
function:

``` python
from onprem.ingest import load_single_document
from onprem.utils import segment
text = load_single_document('tests/sample_data/sotu/state_of_the_union.txt')[0].page_content
```

``` python
segment(text, unit='paragraph')[0]
```

    'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.  Members of Congress and the Cabinet.  Justices of the Supreme Court.  My fellow Americans.'

``` python
segment(text, unit='sentence')[0]
```

    'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.'

### Summarization Pipeline

Summarize your raw documents (e.g., PDFs, MS Word) with an LLM.

#### Map-Reduce Summarization

Summarize each chunk in a document and then generate a single summary
from the individual summaries.

``` python
from onprem import LLM
llm = LLM(n_gpu_layers=-1, verbose=False, mute_stream=True) # disabling viewing of intermediate summarization prompts/inferences
```

``` python
from onprem.pipelines import Summarizer
summ = Summarizer(llm)

resp = summ.summarize('tests/sample_data/ktrain_paper/ktrain_paper.pdf', max_chunks_to_use=5) # omit max_chunks_to_use parameter to consider entire document
print(resp['output_text'])
```

     Ktrain is an open-source machine learning library that offers a unified interface for various machine learning tasks. The library supports both supervised and non-supervised machine learning, and includes methods for training models, evaluating models, making predictions on new data, and providing explanations for model decisions. Additionally, the library integrates with various explainable AI libraries such as shap, eli5 with lime, and others to provide more interpretable models.

#### Concept-Focused Summarization

Summarize a large document with respect to a particular concept of
interest.

``` python
from onprem import LLM
from onprem.pipelines import Summarizer
```

``` python
llm = LLM(default_model='zephyr', n_gpu_layers=-1, verbose=False, temperature=0)
summ = Summarizer(llm)
summary, sources = summ.summarize_by_concept('tests/sample_data/ktrain_paper/ktrain_paper.pdf', concept_description="question answering")
```


    The context provided describes the implementation of an open-domain question-answering system using ktrain, a low-code library for augmented machine learning. The system follows three main steps: indexing documents into a search engine, locating documents containing words in the question, and extracting candidate answers from those documents using a BERT model pretrained on the SQuAD dataset. Confidence scores are used to sort and prune candidate answers before returning results. The entire workflow can be implemented with only three lines of code using ktrain's SimpleQA module. This system allows for the submission of natural language questions and receives exact answers, as demonstrated in the provided example. Overall, the context highlights the ease and accessibility of building sophisticated machine learning models, including open-domain question-answering systems, through ktrain's low-code interface.

### Information Extraction Pipeline

Extract information from raw documents (e.g., PDFs, MS Word documents)
with an LLM.

``` python
from onprem import LLM
from onprem.pipelines import Extractor
# Notice that we're using a cloud-based, off-premises model here! See "OpenAI" section below.
llm = LLM(model_url='openai://gpt-3.5-turbo', verbose=False, mute_stream=True, temperature=0) 
extractor = Extractor(llm)
prompt = """Extract the names of research institutions (e.g., universities, research labs, corporations, etc.) 
from the following sentence delimited by three backticks. If there are no organizations, return NA.  
If there are multiple organizations, separate them with commas.
```{text}```
"""
df = extractor.apply(prompt, fpath='tests/sample_data/ktrain_paper/ktrain_paper.pdf', pdf_pages=[1], stop=['\n'])
df.loc[df['Extractions'] != 'NA'].Extractions[0]
```

    /home/amaiya/projects/ghub/onprem/onprem/core.py:159: UserWarning: The model you supplied is gpt-3.5-turbo, an external service (i.e., not on-premises). Use with caution, as your data and prompts will be sent externally.
      warnings.warn(f'The model you supplied is {self.model_name}, an external service (i.e., not on-premises). '+\

    'Institute for Defense Analyses'

### Few-Shot Classification

Make accurate text classification predictions using only a tiny number
of labeled examples.

``` python
# create classifier
from onprem.pipelines import FewShotClassifier
clf = FewShotClassifier(use_smaller=True)

# Fetching data
from sklearn.datasets import fetch_20newsgroups
import pandas as pd
import numpy as np
classes = ["soc.religion.christian", "sci.space"]
newsgroups = fetch_20newsgroups(subset="all", categories=classes)
corpus, group_labels = np.array(newsgroups.data), np.array(newsgroups.target_names)[newsgroups.target]

# Wrangling data into a dataframe and selecting training examples
data = pd.DataFrame({"text": corpus, "label": group_labels})
train_df = data.groupby("label").sample(5)
test_df = data.drop(index=train_df.index)

# X_sample only contains 5 examples of each class!
X_sample, y_sample = train_df['text'].values, train_df['label'].values

# test set
X_test, y_test = test_df['text'].values, test_df['label'].values

# train
clf.train(X_sample,  y_sample, max_steps=20)

# evaluate
print(clf.evaluate(X_test, y_test, print_report=False)['accuracy'])
#output: 0.98

# make predictions
clf.predict(['Elon Musk likes launching satellites.']).tolist()[0]
#output: sci.space
```

**TIP:** You can also easily train a wide range of [traditional text
classification
models](https://amaiya.github.io/onprem/pipelines.classifier.html) using
both Hugging Face transformers and scikit-learn as backends.

### Using Hugging Face Transformers Instead of Llama.cpp

By default, the LLM backend employed by **OnPrem.LLM** is
[llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which
requires models in [GGUF format](https://huggingface.co/docs/hub/gguf).
As of v0.5.0, it is now possible to use [Hugging Face
transformers](https://github.com/huggingface/transformers) as the LLM
backend instead. This is accomplished by using the `model_id` parameter
(instead of supplying a `model_url` argument). In the example below, we
run the
[Llama-3.1-8B](https://huggingface.co/hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4)
model.

``` python
# llama-cpp-python does NOT need to be installed when using model_id parameter
llm = LLM(model_id="hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4", device_map='cuda')
```

This allows you to more easily use any model on the Hugging Face hub in
[SafeTensors format](https://huggingface.co/docs/safetensors/index)
provided it can be loaded with the Hugging Face `transformers.pipeline`.
Note that, when using the `model_id` parameter, the `prompt_template` is
set automatically by `transformers`.

The Llama-3.1 model loaded above was quantized using
[AWQ](https://huggingface.co/docs/transformers/main/en/quantization/awq),
which allows the model to fit onto smaller GPUs (e.g., laptop GPUs with
6GB of VRAM) similar to the default GGUF format. AWQ models will require
the [autoawq](https://pypi.org/project/autoawq/) package to be
installed: `pip install autoawq` (AWQ only supports Linux systems,
including Windows Subsystem for Linux). If you do need to load a model
that is not quantized, you can supply a quantization configuration at
load time (known as "inflight quantization"). In the following example,
we load an unquantized [Zephyr-7B-beta
model](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) that will be
quantized during loading to fit on GPUs with as little as 6GB of VRAM:

``` python
from transformers import BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)
llm = LLM(model_id="HuggingFaceH4/zephyr-7b-beta", device_map='cuda', 
          model_kwargs={"quantization_config":quantization_config})
```

When supplying a `quantization_config`, the
[bitsandbytes](https://huggingface.co/docs/bitsandbytes/main/en/installation)
library is used; it is a lightweight Python wrapper around CUDA custom
functions, in particular 8-bit optimizers, matrix multiplication
(LLM.int8()), and 8- & 4-bit quantization functions. There are ongoing
efforts by the
bitsandbytes team to support multiple backends in addition to CUDA. If
you receive errors related to bitsandbytes, please refer to the
[bitsandbytes
documentation](https://huggingface.co/docs/bitsandbytes/main/en/installation).

### Connecting to LLMs Served Through REST APIs

**OnPrem.LLM** can be used with LLMs being served through any
OpenAI-compatible REST API. This means you can easily use **OnPrem.LLM**
with tools like [vLLM](https://github.com/vllm-project/vllm),
[OpenLLM](https://github.com/bentoml/OpenLLM),
[Ollama](https://ollama.com/blog/openai-compatibility), and the
[llama.cpp
server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).

#### vLLM Example

For instance, using [vLLM](https://github.com/vllm-project/vllm), you
can serve an LLM as follows (replace the `--model` argument with model
you want to use):

``` sh
python -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-0.5B-Instruct --dtype auto --api-key token-abc123
```

You can then connect OnPrem.LLM to the LLM by supplying the URL of the
server you just started:

``` python
from onprem import LLM
llm = LLM(model_url='http://localhost:8000/v1', api_key='token-abc123', model='Qwen/Qwen2.5-0.5B-Instruct') 
# Note: The API key can either be supplied directly or stored in the OPENAI_API_KEY environment variable.
#       If the server does not require an API key, `api_key` should still be supplied with a dummy value like 'na'.
#       The model argument must exactly match what was supplied when starting the vLLM server.
```

That’s it! Solve problems with **OnPrem.LLM** as you normally would
(e.g., RAG question-answering, summarization, few-shot prompting, code
generation, etc.).

#### Ollama Example

After [downloading and installing Ollama](https://ollama.com/) and
pulling a model (e.g., `ollama pull llama3.2`), you can use it in
OnPrem.LLM as follows:

``` python
from onprem import LLM
llm = LLM(model_url='http://localhost:11434/v1', api_key='NA', model='llama3.2')
output = llm.prompt('What is the capital of France?')

# OUTPUT:
# The capital of France is Paris.
```

If using OnPrem.LLM with Ollama or vLLM, then `llama-cpp-python` does
**not** need to be installed.
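
#### llama.cpp Server Example

The [llama.cpp
server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md)
mentioned above exposes the same OpenAI-compatible interface, so the
connection pattern is identical. A minimal sketch (the port and `model`
value are assumptions; see the llama.cpp server documentation for how
to start the server on a GGUF model of your choice):

``` python
from onprem import LLM

# Assumes a llama.cpp server is already running locally and exposing its
# OpenAI-compatible /v1 endpoint on port 8080 (adjust to your setup).
llm = LLM(model_url='http://localhost:8080/v1', api_key='na', model='local-model')
output = llm.prompt('What is the capital of France?')
```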

### Using OpenAI Models with OnPrem.LLM

Even when using on-premises language models, it can sometimes be useful
to have easy access to non-local, cloud-based models (e.g., OpenAI) for
testing, producing baselines for comparison, and generating synthetic
examples for fine-tuning. For these reasons, in spite of the name,
**OnPrem.LLM** now includes support for OpenAI chat models:

``` python
from onprem import LLM
llm = LLM(model_url='openai://gpt-4o', temperature=0)
```

    /home/amaiya/projects/ghub/onprem/onprem/core.py:196: UserWarning: The model you supplied is gpt-4o, an external service (i.e., not on-premises). Use with caution, as your data and prompts will be sent externally.
      warnings.warn(f'The model you supplied is {self.model_name}, an external service (i.e., not on-premises). '+\

This OpenAI [`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm)
instance can now be used for most features in OnPrem.LLM (e.g., RAG,
information extraction, summarization, etc.). Here we simply use it for
general prompting:

``` python
saved_result = llm.prompt('List three cute  names for a cat and explain why each is cute.')
```

    Certainly! Here are three cute names for a cat, along with explanations for why each is adorable:

    1. **Whiskers**: This name is cute because it highlights one of the most distinctive and charming features of a catβ€”their whiskers. It's playful and endearing, evoking the image of a curious cat twitching its whiskers as it explores its surroundings.

    2. **Mittens**: This name is cute because it conjures up the image of a cat with little white paws that look like they are wearing mittens. It's a cozy and affectionate name that suggests warmth and cuddliness, much like a pair of soft mittens.

    3. **Pumpkin**: This name is cute because it brings to mind the warm, orange hues of a pumpkin, which can be reminiscent of certain cat fur colors. It's also associated with the fall season, which is often linked to comfort and coziness. Plus, the name "Pumpkin" has a sweet and affectionate ring to it, making it perfect for a beloved pet.

**Using Vision Capabilities in GPT-4o**

``` python
image_url = "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
saved_result = llm.prompt('Describe the weather in this image.', image_path_or_url=image_url)
```

    The weather in the image appears to be clear and sunny. The sky is mostly blue with some scattered clouds, suggesting a pleasant day with good visibility. The sunlight is bright, illuminating the green grass and landscape.

**Using OpenAI-Style Message Dictionaries**

``` python
messages = [
    {'content': [{'text': 'describe the weather in this image', 
                  'type': 'text'},
                 {'image_url': {'url': image_url},
                  'type': 'image_url'}],
     'role': 'user'}]
saved_result = llm.prompt(messages)
```

    The weather in the image appears to be clear and sunny. The sky is mostly blue with some scattered clouds, suggesting a pleasant day with good visibility. The sunlight is bright, casting clear shadows and illuminating the green landscape.

**Azure OpenAI**

For Azure OpenAI models, use the following URL format:

``` python
llm = LLM(model_url='azure://<deployment_name>', ...) 
# <deployment_name> is the Azure deployment name and additional Azure-specific parameters 
# can be supplied as extra arguments to LLM (or set as environment variables)
```

### Structured and Guided Outputs

The
[`LLM.pydantic_prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.pydantic_prompt)
method allows you to specify the desired structure of the LLM’s output
as a Pydantic model.

``` python
from pydantic import BaseModel, Field

class Joke(BaseModel):
    setup: str = Field(description="question to set up a joke")
    punchline: str = Field(description="answer to resolve the joke")

from onprem import LLM
llm = LLM(default_model='llama', verbose=False)
structured_output = llm.pydantic_prompt('Tell me a joke.', pydantic_model=Joke)
```

    llama_new_context_with_model: n_ctx_per_seq (3904) < n_ctx_train (131072) -- the full capacity of the model will not be utilized

    {
      "setup": "Why couldn't the bicycle stand alone?",
      "punchline": "Because it was two-tired!"
    }

The output is a Pydantic object instead of a string:

``` python
structured_output
```

    Joke(setup="Why couldn't the bicycle stand alone?", punchline='Because it was two-tired!')

``` python
print(structured_output.setup)
print()
print(structured_output.punchline)
```

    Why couldn't the bicycle stand alone?

    Because it was two-tired!

You can also use **OnPrem.LLM** with the
[Guidance](https://github.com/guidance-ai/guidance) package to guide the
LLM to generate outputs based on your conditions and constraints. We’ll
show a couple of examples here, but see [our documentation on guided
prompts](https://amaiya.github.io/onprem/examples_guided_prompts.html)
for more information.

``` python
from onprem import LLM

llm = LLM(n_gpu_layers=-1, verbose=False)
from onprem.pipelines.guider import Guider
guider = Guider(llm)
```

With the Guider, you can use regular expressions to control LLM
generation:

``` python
prompt = f"""Question: Luke has ten balls. He gives three to his brother. How many balls does he have left?
Answer: """ + gen(name='answer', regex='\d+')

guider.prompt(prompt, echo=False)
```

    {'answer': '7'}

``` python
prompt = '19, 18,' + gen(name='output', max_tokens=50, stop_regex=r'[^\d]7[^\d]')
guider.prompt(prompt)
```


    {'output': ' 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,'}

See [the
documentation](https://amaiya.github.io/onprem/examples_guided_prompts.html)
for more examples of how to use
[Guidance](https://github.com/guidance-ai/guidance) with **OnPrem.LLM**.

## Solving Tasks With Agents

``` python
from onprem import LLM
from onprem.pipelines import Agent
llm = LLM('openai/gpt-4o-mini', mute_stream=True) 
agent = Agent(llm)
agent.add_webview_tool()
answer = agent.run("What is the highest level of education of the person listed on this page: https://arun.maiya.net?")
# ANSWER: Ph.D. in Computer Science
```

See the **[example notebook on
agents](https://amaiya.github.io/onprem/examples_agent.html)** for more
information.

## Built-In Web App

**OnPrem.LLM** includes a built-in Web app to access the LLM. To start
it, run the following command after installation:

``` shell
onprem --port 8000
```

Then, enter `localhost:8000` (or `<domain_name>:8000` if running on a
remote server) in a Web browser to access the application:

<img src="https://raw.githubusercontent.com/amaiya/onprem/master/images/onprem_welcome.png" border="1" alt="screenshot" width="775"/>

For more information, [see the corresponding
documentation](https://amaiya.github.io/onprem/webapp.html).

## Examples

The [documentation](https://amaiya.github.io/onprem/) includes many
examples, including:

- [Prompts for
  Problem-Solving](https://amaiya.github.io/onprem/examples.html)
- [RAG Example](https://amaiya.github.io/onprem/examples_rag.html)
- [Code Generation](https://amaiya.github.io/onprem/examples_code.html)
- [Semantic
  Similarity](https://amaiya.github.io/onprem/examples_semantic.html)
- [Document
  Summarization](https://amaiya.github.io/onprem/examples_summarization.html)
- [Information
  Extraction](https://amaiya.github.io/onprem/examples_information_extraction.html)
- [Text
  Classification](https://amaiya.github.io/onprem/examples_classification.html)
- [Agent-Based Task
  Execution](https://amaiya.github.io/onprem/examples_agent.html)
- [Auto-Coding Survey
  Responses](https://amaiya.github.io/onprem/examples_qualitative_survey_analysis.html)
- [Legal and Regulatory
  Analysis](https://amaiya.github.io/onprem/examples_legal_analysis.html)

## FAQ

1.  **How do I use other models with OnPrem.LLM?**

    > You can supply any model of your choice using the `model_url` and
    > `model_id` parameters to `LLM` (see cheat sheet above).

    > Here, we will go into detail on how to supply a custom GGUF model
    > using the llama.cpp backend.

    > You can find llama.cpp-supported models with `GGUF` in the file
    > name on
    > [huggingface.co](https://huggingface.co/models?sort=trending&search=gguf).

    > Make sure you are pointing to the URL of the actual GGUF model
    > file, which is the "download" link on the model’s page. An example
    > for **Mistral-7B** is shown below:

    > <img src="https://raw.githubusercontent.com/amaiya/onprem/master/images/model_download_link.png" border="1" alt="screenshot" width="775"/>

    > When using the llama.cpp backend, GGUF models have specific prompt
    > formats that need to be supplied to `LLM`. For instance, the prompt
    > template required for **Zephyr-7B**, as described on the [model’s
    > page](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF), is:
    >
    > `<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>`
    >
    > So, to use the **Zephyr-7B** model, you must supply the
    > `prompt_template` argument to the `LLM` constructor (or specify it
    > in the `webapp.yml` configuration for the Web app).
    >
    > ``` python
    > # how to use Zephyr-7B with OnPrem.LLM
    > llm = LLM(model_url='https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_M.gguf',
    >           prompt_template = "<|system|>\n</s>\n<|user|>\n{prompt}</s>\n<|assistant|>",
    >           n_gpu_layers=33)
    > llm.prompt("List three cute names for a cat.")
    > ```

    > Prompt templates are **not** required for any other LLM backend
    > (e.g., when using Ollama as backend or when using `model_id`
    > parameter for transformers models). Prompt templates are also not
    > required if using any of the default models.

2.  **When installing `onprem`, I’m getting β€œbuild” errors related to
    `llama-cpp-python` (or `chroma-hnswlib`) on Windows/Mac/Linux?**

    > See [this LangChain documentation on
    > LLama.cpp](https://python.langchain.com/docs/integrations/llms/llamacpp)
    > for help on installing the `llama-cpp-python` package for your
    > system. Additional tips for different operating systems are shown
    > below:

    > For **Linux** systems like Ubuntu, try this:
    > `sudo apt-get install build-essential g++ clang`. Other tips are
    > [here](https://github.com/oobabooga/text-generation-webui/issues/1534).

    > For **Windows** systems, please try following [these
    > instructions](https://github.com/amaiya/onprem/blob/master/MSWindows.md).
    > We recommend you use [Windows Subsystem for Linux
    > (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install)
    > instead of using Microsoft Windows directly. If you do need to use
    > Microsoft Windows directly, be sure to install the [Microsoft C++
    > Build
    > Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)
    > and make sure the **Desktop development with C++** workload is selected.

    > For **Macs**, try following [these
    > tips](https://github.com/imartinez/privateGPT/issues/445#issuecomment-1563333950).

    > There are also various other tips for each of the above OSes in
    > [this privateGPT repo
    > thread](https://github.com/imartinez/privateGPT/issues/445). Of
    > course, you can also [easily
    > use](https://colab.research.google.com/drive/1LVeacsQ9dmE1BVzwR3eTLukpeRIMmUqi?usp=sharing)
    > **OnPrem.LLM** on Google Colab.

    > Finally, if you still can’t overcome issues with building
    > `llama-cpp-python`, you can try [installing the pre-built wheel
    > file](https://abetlen.github.io/llama-cpp-python/whl/cpu/llama-cpp-python/)
    > for your system:

    > **Example:**
    > `pip install llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu`
    >
    > **Tip:** There are [pre-built wheel files for
    > `chroma-hnswlib`](https://pypi.org/project/chroma-hnswlib/#files),
    > as well. If running `pip install onprem` fails on building
    > `chroma-hnswlib`, it may be because a pre-built wheel doesn’t yet
    > exist for the version of Python you’re using (in which case you
    > can try downgrading Python).

3.  **I’m behind a corporate firewall and am receiving an SSL error when
    trying to download the model?**

    > Try this:
    >
    > ``` python
    > from onprem import LLM
    > LLM.download_model(url, ssl_verify=False)
    > ```

    > You can download the embedding model (used by `LLM.ingest` and
    > `LLM.ask`) as follows:
    >
    > ``` sh
    > wget --no-check-certificate https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/v0.2/all-MiniLM-L6-v2.zip
    > ```

    > Supply the unzipped folder name as the `embedding_model_name`
    > argument to `LLM`.

    > If you’re getting SSL errors when even running `pip install`, try
    > this:
    >
    > ``` sh
    > pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org pip_system_certs
    > ```

4.  **How do I use this on a machine with no internet access?**

    > Use the `LLM.download_model` method to download the model files to
    > `<your_home_directory>/onprem_data` and transfer them to the same
    > location on the air-gapped machine.
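
    > A minimal sketch (the GGUF URL is a placeholder; see FAQ 1 above for
    > how to find the actual download link for your chosen model):
    >
    > ``` python
    > from onprem import LLM
    > LLM.download_model('https://huggingface.co/<repo>/resolve/main/<model>.gguf')
    > ```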

    > For the `ingest` and `ask` methods, you will need to also download
    > and transfer the embedding model files:
    >
    > ``` python
    > from sentence_transformers import SentenceTransformer
    > model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')
    > model.save('/some/folder')
    > ```

    > Copy the `some/folder` folder to the air-gapped machine and supply
    > the path to `LLM` via the `embedding_model_name` parameter.

5.  **My model is not loading when I call `llm = LLM(...)`?**

    > This can happen if the model file is corrupt (in which case you
    > should delete from `<home directory>/onprem_data` and
    > re-download). It can also happen if the version of
    > `llama-cpp-python` needs to be upgraded to the latest.
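
    > In the latter case, upgrading usually resolves it (a minimal sketch,
    > assuming a standard pip-based install):
    >
    > ``` sh
    > pip install --upgrade llama-cpp-python
    > ```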

6.  **I’m getting an `Illegal instruction (core dumped)` error when
    instantiating a `langchain.llms.Llamacpp` or `onprem.LLM` object?**

    > Your CPU may not support instructions that `cmake` is using for
    > one reason or another (e.g., [due to Hyper-V in VirtualBox
    > settings](https://stackoverflow.com/questions/65780506/how-to-enable-avx-avx2-in-virtualbox-6-1-16-with-ubuntu-20-04-64bit)).
    > You can try turning them off when building and installing
    > `llama-cpp-python`:

    > ``` sh
    > # example
    > CMAKE_ARGS="-DGGML_CUDA=ON -DGGML_AVX2=OFF -DGGML_AVX=OFF -DGGML_F16C=OFF -DGGML_FMA=OFF" FORCE_CMAKE=1 pip install --force-reinstall llama-cpp-python --no-cache-dir
    > ```

7.  **How can I speed up
    [`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest)?**

    > By default, a GPU, if available, will be used to compute
    > embeddings, so ensure PyTorch is installed with GPU support. You
    > can explicitly control the device used for computing embeddings
    > with the `embedding_model_kwargs` argument.
    >
    > ``` python
    > from onprem import LLM
    > llm  = LLM(embedding_model_kwargs={'device':'cuda'})
    > ```

    > You can also supply `store_type="sparse"` to `LLM` to use a sparse
    > vector store, which sacrifices a small amount of inference speed
    > (`LLM.ask`) for significant speed ups during ingestion
    > (`LLM.ingest`).
    >
    > ``` python
    > from onprem import LLM
    > llm  = LLM(store_type="sparse")
    > ```
    >
    > Note, however, that, unlike dense vector stores, sparse vector
    > stores assume answer sources will contain at least one word in
    > common with the question.

<!--
8. **What are ways in which OnPrem.LLM has been used?**
    > Examples include:
    > - extracting key performance parameters and other performance attributes from engineering documents
    > - auto-coding responses to government requests for information (RFIs)
    > - analyzing the Federal Acquisition Regulations (FAR)
    > - understanding where and how Executive Order 14028 on cybersecurity aligns with the National Cybersecurity Strategy
    > - generating a summary of ways to improve a course from thousands of reviews
    > - extracting specific information of interest from resumes for talent acquisition.
-->

## How to Cite

Please cite the [following paper](https://arxiv.org/abs/2505.07672) when
using **OnPrem.LLM**:

    @article{maiya2025onprem,
          title={OnPrem.LLM: A Privacy-Conscious Document Intelligence Toolkit}, 
          author={Arun S. Maiya},
          year={2025},
          eprint={2505.07672},
          archivePrefix={arXiv},
          primaryClass={cs.CL},
          url={https://arxiv.org/abs/2505.07672}, 
    }

            

LLM(model_url=\"anthropic/claude-3-7-sonnet-latest\")`\n- **OpenAI GPT-4o**: `llm = LLM(model_url=\"openai://gpt-4o\")`\n- **Also OpenAI GPT-4o**: `llm = LLM(model_url=\"openai/gpt-4o\")`\n\nThe instantiations above are described in more detail below.\n\n#### Specifying the Local Model to Use\n\nThe default LLM backend is\n[llama-cpp-python](https://github.com/abetlen/llama-cpp-python), and the\ndefault model is currently a 7B-parameter model called\n**Zephyr-7B-beta**, which is automatically downloaded and used. The two\nother default models are `llama` and `mistral`. For instance, if\n`default_model='llama'` is supplied, then a **Llama-3.1-8B-Instsruct**\nmodel is automatically downloaded and used:\n\n``` python\n# Llama 3.1 is downloaded here and the correct prompt template for Llama-3.1 is automatically configured and used\nllm = LLM(default_model='llama')\n```\n\n*Choosing Your Own Models:* Of course, you can also easily supply the\nURL or path to an LLM of your choosing to\n[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) (see the\n[FAQ](https://amaiya.github.io/onprem/#faq) for an example).\n\n*Supplying Extra Parameters:* Any extra parameters supplied to\n[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) are forwarded\ndirectly to\n[llama-cpp-python](https://github.com/abetlen/llama-cpp-python), the\ndefault LLM backend.\n\n#### Changing the Default LLM Backend\n\nIf `default_engine=\"transformers\"` is supplied to\n[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm), Hugging Face\n[transformers](https://github.com/huggingface/transformers) is used as\nthe LLM backend. Extra parameters to\n[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) (e.g.,\n\u2018device=\u2019cuda\u2019`) are forwarded diretly to`transformers.pipeline`. 
If supplying a`model_id\\`\nparameter, the default LLM backend is automatically changed to Hugging\nFace [transformers](https://github.com/huggingface/transformers).\n\n``` python\n# LLama-3.1 model quantized using AWQ is downloaded and run with Hugging Face transformers (requires GPU)\nllm = LLM(default_model='llama', default_engine='transformers')\n\n# Using a custom model with Hugging Face Transformers\nllm = LLM(model_id='Qwen/Qwen2.5-0.5B-Instruct', device_map='cpu')\n```\n\nSee\n[here](https://amaiya.github.io/onprem/#using-hugging-face-transformers-instead-of-llama.cpp)\nfor more information about using Hugging Face\n[transformers](https://github.com/huggingface/transformers) as the LLM\nbackend.\n\nYou can also connect to **Ollama**, local LLM APIs (e.g., vLLM), and\ncloud LLMs.\n\n``` python\n# connecting to an LLM served by Ollama\nlm = LLM(model_url='ollama/llama3.2')\n\n# connecting to an LLM served through vLLM (set API key as needed)\nllm = LLM(model_url='http://localhost:8000/v1', api_key='token-abc123', model='Qwen/Qwen2.5-0.5B-Instruct')`\n\n# connecting to a cloud-backed LLM (e.g., OpenAI, Anthropic).\nllm = LLM(model_url=\"openai/gpt-4o-mini\")  # OpenAI\nllm = LLM(model_url=\"anthropic/claude-3-7-sonnet-20250219\") # Anthropic\n```\n\n**OnPrem.LLM** suppports any provider and model supported by the\n[LiteLLM](https://github.com/BerriAI/litellm) package.\n\nSee\n[here](https://amaiya.github.io/onprem/#connecting-to-llms-served-through-rest-apis)\nfor more information on *local* LLM APIs.\n\nMore information on using OpenAI models specifically with **OnPrem.LLM**\nis [here](https://amaiya.github.io/onprem/examples_openai.html).\n\n#### Supplying Parameters to the LLM Backend\n\nExtra parameters supplied to\n[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) and\n[`LLM.prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.prompt)\nare passed directly to the LLM backend. Parameter names will vary\ndepending on the backend you chose.\n\nFor instance, with the default llama-cpp backend, the default context\nwindow size (`n_ctx`) is set to 3900 and the default output size\n(`max_tokens`) is set 512. Both are configurable parameters to\n[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm). Increase if\nyou have larger prompts or need longer outputs. Other parameters (e.g.,\n`api_key`, `device_map`, etc.) can be supplied directly to\n[`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm) and will be\nrouted to the LLM backend or API (e.g., llama-cpp-python, Hugging Face\ntransformers, vLLM, OpenAI, etc.). 
The `max_tokens` parameter can also\nbe adjusted on-the-fly by supplying it to\n[`LLM.prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.prompt).\n\nOn the other hand, for Ollama models, context window and output size are\ncontrolled by `num_ctx` and `num_predict`, respectively.\n\nWith the Hugging Face transformers, setting the context window size is\nnot needed, but the output size is controlled by the `max_new_tokens`\nparameter to\n[`LLM.prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.prompt).\n\n### Send Prompts to the LLM to Solve Problems\n\nThis is an example of few-shot prompting, where we provide an example of\nwhat we want the LLM to do.\n\n``` python\nprompt = \"\"\"Extract the names of people in the supplied sentences.\nSeparate names with commas and place on a single line.\n\n# Example 1:\nSentence: James Gandolfini and Paul Newman were great actors.\nPeople:\nJames Gandolfini, Paul Newman\n\n# Example 2:\nSentence:\nI like Cillian Murphy's acting. Florence Pugh is great, too.\nPeople:\"\"\"\n\nsaved_output = llm.prompt(prompt, stop=['\\n\\n'])\n```\n\n\n    Cillian Murphy, Florence Pugh\n\n**Additional prompt examples are [shown\nhere](https://amaiya.github.io/onprem/examples.html).**\n\n### Talk to Your Documents\n\nAnswers are generated from the content of your documents (i.e.,\n[retrieval augmented generation](https://arxiv.org/abs/2005.11401) or\nRAG). Here, we will use [GPU\noffloading](https://amaiya.github.io/onprem/#speeding-up-inference-using-a-gpu)\nto speed up answer generation using the default model. However, the\nZephyr-7B model may perform even better, responds faster, and is used in\nour **[RAG example\nnotebook](https://amaiya.github.io/onprem/examples_rag.html)**.\n\n``` python\nfrom onprem import LLM\n\nllm = LLM(n_gpu_layers=-1, store_type='sparse', verbose=False)\n```\n\n    llama_new_context_with_model: n_ctx_per_seq (3904) < n_ctx_train (32768) -- the full capacity of the model will not be utilized\n\n#### Step 1: Ingest the Documents into a Vector Database\n\nAs of v0.10.0, you have the option of storing documents in either a\ndense vector store (i.e., Chroma) or a sparse vector store (i.e., a\nbuilt-in keyword search index). Sparse vector stores sacrifice a small\namount of inference speed for significant improvements in ingestion\nspeed (useful for larger document sets) and also assume answer sources\nwill include at least one word from the question. To select the store\ntype, supply either `store_type=\"dense\"` or `store_type=\"sparse\"` when\ncreating the [`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm).\nAs you can see above, we use a sparse vector store here.\n\n``` python\nllm.ingest(\"./tests/sample_data\")\n```\n\n    Creating new vectorstore at /home/amaiya/onprem_data/vectordb/sparse\n    Loading documents from ./tests/sample_data\n    Split into 354 chunks of text (max. 500 chars each for text; max. 2000 chars for tables)\n    Ingestion complete! 
You can now query your documents using the LLM.ask or LLM.chat methods\n\n    Loading new documents: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6/6 [00:09<00:00,  1.51s/it]\n    Processing and chunking 43 new documents: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1/1 [00:00<00:00, 116.11it/s]\n    100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 354/354 [00:00<00:00, 2548.70it/s]\n\n#### Step 2: Answer Questions About the Documents\n\n``` python\nquestion = \"\"\"What is  ktrain?\"\"\"\nresult = llm.ask(question)\n```\n\n     ktrain is a low-code machine learning platform. It provides out-of-the-box support for training models on various types of data such as text, vision, graph, and tabular.\n\nThe sources used by the model to generate the answer are stored in\n`result['source_documents']`:\n\n``` python\nprint(\"\\nSources:\\n\")\nfor i, document in enumerate(result[\"source_documents\"]):\n    print(f\"\\n{i+1}.> \" + document.metadata[\"source\"] + \":\")\n    print(document.page_content)\n```\n\n\n    Sources:\n\n\n    1.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:\n    transferred to, and executed on new data in a production environment.\n    ktrain is a Python library for machine learning with the goal of presenting a simple,\n    uni\ufb01ed interface to easily perform the above steps regardless of the type of data (e.g., text\n    vs. images vs. graphs). Moreover, each of the three steps above can be accomplished in\n    \u00a92022 Arun S. Maiya.\n    License: CC-BY 4.0, see https://creativecommons.org/licenses/by/4.0/. Attribution requirements are\n\n    2.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:\n    custom models and data formats, as well. Inspired by other low-code (and no-code) open-\n    source ML libraries such as fastai (Howard and Gugger, 2020) and ludwig (Molino et al.,\n    2019), ktrain is intended to help further democratize machine learning by enabling begin-\n    ners and domain experts with minimal programming or data science experience to build\n    sophisticated machine learning models with minimal coding. It is also a useful toolbox for\n\n    3.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:\n    Apache license, and available on GitHub at: https://github.com/amaiya/ktrain.\n    2. Building Models\n    Supervised learning tasks in ktrain follow a standard, easy-to-use template.\n    STEP 1: Load and Preprocess Data. 
This step involves loading data from di\ufb00erent\n    sources and preprocessing it in a way that is expected by the model. In the case of text,\n    this may involve language-speci\ufb01c preprocessing (e.g., tokenization). In the case of images,\n\n    4.> /home/amaiya/projects/ghub/onprem/nbs/tests/sample_data/ktrain_paper/ktrain_paper.pdf:\n    AutoKeras (Jin et al., 2019) and AutoGluon (Erickson et al., 2020) lack some key \u201cpre-\n    canned\u201d features in ktrain, which has the strongest support for natural language processing\n    and graph-based data. Support for additional features is planned for the future.\n    5. Conclusion\n    This work presented ktrain, a low-code platform for machine learning. ktrain currently in-\n    cludes out-of-the-box support for training models on text, vision, graph, and tabular\n\n### Extract Text from Documents\n\nThe\n[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)\nfunction can extract text from a range of different document formats\n(e.g., PDFs, Microsoft PowerPoint, Microsoft Word, etc.). It is\nautomatically invoked when calling\n[`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest).\nExtracted text is represented as LangChain `Document` objects, where\n`Document.page_content` stores the extracted text and\n`Document.metadata` stores any extracted document metadata.\n\nFor PDFs, in particular, a number of different options are available\ndepending on your use case.\n\n**Fast PDF Extraction (default)**\n\n- **Pro:** Fast\n- **Con:** Does not infer/retain structure of tables in PDF documents\n\n``` python\nfrom onprem.ingest import load_single_document\n\ndocs = load_single_document('tests/sample_data/ktrain_paper/ktrain_paper.pdf')\ndocs[0].metadata\n```\n\n    {'source': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf',\n     'file_path': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/1/ktrain_paper.pdf',\n     'page': 0,\n     'total_pages': 9,\n     'format': 'PDF 1.4',\n     'title': '',\n     'author': '',\n     'subject': '',\n     'keywords': '',\n     'creator': 'LaTeX with hyperref',\n     'producer': 'dvips + GPL Ghostscript GIT PRERELEASE 9.22',\n     'creationDate': \"D:20220406214054-04'00'\",\n     'modDate': \"D:20220406214054-04'00'\",\n     'trapped': ''}\n\n**Automatic OCR of PDFs**\n\n- **Pro:** Automatically extracts text from scanned PDFs\n- **Con:** Slow\n\nThe\n[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)\nfunction will automatically OCR PDFs that require it (i.e., PDFs that\nare scanned hard-copies of documents). If a document is OCR\u2019ed during\nextraction, the `metadata['ocr']` field will be populated with `True`.\n\n``` python\ndocs = load_single_document('tests/sample_data/ocr_document/lynn1975.pdf')\ndocs[0].metadata\n```\n\n    {'source': '/home/amaiya/projects/ghub/onprem/nbs/sample_data/4/lynn1975.pdf',\n     'ocr': True}\n\n**Markdown Conversion in PDFs**\n\n- **Pro**: Better chunking for QA\n- **Con**: Slower than default PDF extraction\n\nThe\n[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)\nfunction can convert PDFs to Markdown instead of plain text by supplying\nthe `pdf_markdown=True` as an argument:\n\n``` python\ndocs = load_single_document('your_pdf_document.pdf', \n                            pdf_markdown=True)\n```\n\nConverting to Markdown can facilitate downstream tasks like\nquestion-answering. 
For instance, when supplying `pdf_markdown=True` to\n[`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest),\ndocuments are chunked in a Markdown-aware fashion (e.g., the abstract of\na research paper tends to be kept together into a single chunk instead\nof being split up). Note that Markdown will not be extracted if the\ndocument requires OCR.\n\n**Inferring Table Structure in PDFs**\n\n- **Pro**: Makes it easier for LLMs to analyze information in tables\n- **Con**: Slower than default PDF extraction\n\nWhen supplying `infer_table_structure=True` to either\n[`load_single_document`](https://amaiya.github.io/onprem/ingest.base.html#load_single_document)\nor\n[`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest),\ntables are inferred and extracted from PDFs using a TableTransformer\nmodel. Tables are represented as **Markdown** (or **HTML** if Markdown\nconversion is not possible).\n\n``` python\ndocs = load_single_document('your_pdf_document.pdf', \n                            infer_table_structure=True)\n```\n\n**Parsing Extracted Text Into Sentences or Paragraphs**\n\nFor some analyses (e.g., using prompts for information extraction), it\nmay be useful to parse the text extracted from documents into individual\nsentences or paragraphs. This can be accomplished using the\n[`segment`](https://amaiya.github.io/onprem/utils.html#segment)\nfunction:\n\n``` python\nfrom onprem.ingest import load_single_document\nfrom onprem.utils import segment\ntext = load_single_document('tests/sample_data/sotu/state_of_the_union.txt')[0].page_content\n```\n\n``` python\nsegment(text, unit='paragraph')[0]\n```\n\n    'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.  Members of Congress and the Cabinet.  Justices of the Supreme Court.  My fellow Americans.'\n\n``` python\nsegment(text, unit='sentence')[0]\n```\n\n    'Madam Speaker, Madam Vice President, our First Lady and Second Gentleman.'\n\n### Summarization Pipeline\n\nSummarize your raw documents (e.g., PDFs, MS Word) with an LLM.\n\n#### Map-Reduce Summarization\n\nSummarize each chunk in a document and then generate a single summary\nfrom the individual summaries.\n\n``` python\nfrom onprem import LLM\nllm = LLM(n_gpu_layers=-1, verbose=False, mute_stream=True) # disabling viewing of intermediate summarization prompts/inferences\n```\n\n``` python\nfrom onprem.pipelines import Summarizer\nsumm = Summarizer(llm)\n\nresp = summ.summarize('tests/sample_data/ktrain_paper/ktrain_paper.pdf', max_chunks_to_use=5) # omit max_chunks_to_use parameter to consider entire document\nprint(resp['output_text'])\n```\n\n     Ktrain is an open-source machine learning library that offers a unified interface for various machine learning tasks. The library supports both supervised and non-supervised machine learning, and includes methods for training models, evaluating models, making predictions on new data, and providing explanations for model decisions. 
Additionally, the library integrates with various explainable AI libraries such as shap, eli5 with lime, and others to provide more interpretable models.\n\n#### Concept-Focused Summarization\n\nSummarize a large document with respect to a particular concept of\ninterest.\n\n``` python\nfrom onprem import LLM\nfrom onprem.pipelines import Summarizer\n```\n\n``` python\nllm = LLM(default_model='zephyr', n_gpu_layers=-1, verbose=False, temperature=0)\nsumm = Summarizer(llm)\nsummary, sources = summ.summarize_by_concept('tests/sample_data/ktrain_paper/ktrain_paper.pdf', concept_description=\"question answering\")\n```\n\n\n    The context provided describes the implementation of an open-domain question-answering system using ktrain, a low-code library for augmented machine learning. The system follows three main steps: indexing documents into a search engine, locating documents containing words in the question, and extracting candidate answers from those documents using a BERT model pretrained on the SQuAD dataset. Confidence scores are used to sort and prune candidate answers before returning results. The entire workflow can be implemented with only three lines of code using ktrain's SimpleQA module. This system allows for the submission of natural language questions and receives exact answers, as demonstrated in the provided example. Overall, the context highlights the ease and accessibility of building sophisticated machine learning models, including open-domain question-answering systems, through ktrain's low-code interface.\n\n### Information Extraction Pipeline\n\nExtract information from raw documents (e.g., PDFs, MS Word documents)\nwith an LLM.\n\n``` python\nfrom onprem import LLM\nfrom onprem.pipelines import Extractor\n# Notice that we're using a cloud-based, off-premises model here! See \"OpenAI\" section below.\nllm = LLM(model_url='openai://gpt-3.5-turbo', verbose=False, mute_stream=True, temperature=0) \nextractor = Extractor(llm)\nprompt = \"\"\"Extract the names of research institutions (e.g., universities, research labs, corporations, etc.) \nfrom the following sentence delimited by three backticks. If there are no organizations, return NA.  \nIf there are multiple organizations, separate them with commas.\n```{text}```\n\"\"\"\ndf = extractor.apply(prompt, fpath='tests/sample_data/ktrain_paper/ktrain_paper.pdf', pdf_pages=[1], stop=['\\n'])\ndf.loc[df['Extractions'] != 'NA'].Extractions[0]\n```\n\n    /home/amaiya/projects/ghub/onprem/onprem/core.py:159: UserWarning: The model you supplied is gpt-3.5-turbo, an external service (i.e., not on-premises). Use with caution, as your data and prompts will be sent externally.\n      warnings.warn(f'The model you supplied is {self.model_name}, an external service (i.e., not on-premises). 
'+\\\n\n    'Institute for Defense Analyses'\n\n### Few-Shot Classification\n\nMake accurate text classification predictions using only a tiny number\nof labeled examples.\n\n``` python\n# create classifier\nfrom onprem.pipelines import FewShotClassifier\nclf = FewShotClassifier(use_smaller=True)\n\n# Fetching data\nfrom sklearn.datasets import fetch_20newsgroups\nimport pandas as pd\nimport numpy as np\nclasses = [\"soc.religion.christian\", \"sci.space\"]\nnewsgroups = fetch_20newsgroups(subset=\"all\", categories=classes)\ncorpus, group_labels = np.array(newsgroups.data), np.array(newsgroups.target_names)[newsgroups.target]\n\n# Wrangling data into a dataframe and selecting training examples\ndata = pd.DataFrame({\"text\": corpus, \"label\": group_labels})\ntrain_df = data.groupby(\"label\").sample(5)\ntest_df = data.drop(index=train_df.index)\n\n# X_sample only contains 5 examples of each class!\nX_sample, y_sample = train_df['text'].values, train_df['label'].values\n\n# test set\nX_test, y_test = test_df['text'].values, test_df['label'].values\n\n# train\nclf.train(X_sample,  y_sample, max_steps=20)\n\n# evaluate\nprint(clf.evaluate(X_test, y_test, print_report=False)['accuracy'])\n#output: 0.98\n\n# make predictions\nclf.predict(['Elon Musk likes launching satellites.']).tolist()[0]\n#output: sci.space\n```\n\n**TIP:** You can also easily train a wide range of [traditional text\nclassification\nmodels](https://amaiya.github.io/onprem/pipelines.classifier.html) using\nboth Hugging Face transformers and scikit-learn as backends.\n\n### Using Hugging Face Transformers Instead of Llama.cpp\n\nBy default, the LLM backend employed by **OnPrem.LLM** is\n[llama-cpp-python](https://github.com/abetlen/llama-cpp-python), which\nrequires models in [GGUF format](https://huggingface.co/docs/hub/gguf).\nAs of v0.5.0, it is now possible to use [Hugging Face\ntransformers](https://github.com/huggingface/transformers) as the LLM\nbackend instead. This is accomplished by using the `model_id` parameter\n(instead of supplying a `model_url` argument). In the example below, we\nrun the\n[Llama-3.1-8B](https://huggingface.co/hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4)\nmodel.\n\n``` python\n# llama-cpp-python does NOT need to be installed when using model_id parameter\nllm = LLM(model_id=\"hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4\", device_map='cuda')\n```\n\nThis allows you to more easily use any model on the Hugging Face hub in\n[SafeTensors format](https://huggingface.co/docs/safetensors/index)\nprovided it can be loaded with the Hugging Face `transformers.pipeline`.\nNote that, when using the `model_id` parameter, the `prompt_template` is\nset automatically by `transformers`.\n\nThe Llama-3.1 model loaded above was quantized using\n[AWQ](https://huggingface.co/docs/transformers/main/en/quantization/awq),\nwhich allows the model to fit onto smaller GPUs (e.g., laptop GPUs with\n6GB of VRAM) similar to the default GGUF format. AWQ models will require\nthe [autoawq](https://pypi.org/project/autoawq/) package to be\ninstalled: `pip install autoawq` (AWQ only supports Linux system,\nincluding Windows Subsystem for Linux). If you do need to load a model\nthat is not quantized, you can supply a quantization configuration at\nload time (known as \u201cinflight quantization\u201d). 
In the following example,\nwe load an unquantized [Zephyr-7B-beta\nmodel](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) that will be\nquantized during loading to fit on GPUs with as little as 6GB of VRAM:\n\n``` python\nfrom transformers import BitsAndBytesConfig\nquantization_config = BitsAndBytesConfig(\n    load_in_4bit=True,\n    bnb_4bit_quant_type=\"nf4\",\n    bnb_4bit_compute_dtype=\"float16\",\n    bnb_4bit_use_double_quant=True,\n)\nllm = LLM(model_id=\"HuggingFaceH4/zephyr-7b-beta\", device_map='cuda', \n          model_kwargs={\"quantization_config\":quantization_config})\n```\n\nWhen supplying a `quantization_config`, the\n[bitsandbytes](https://huggingface.co/docs/bitsandbytes/main/en/installation)\nlibrary, a lightweight Python wrapper around CUDA custom functions, in\nparticular 8-bit optimizers, matrix multiplication (LLM.int8()), and 8 &\n4-bit quantization functions, is used. There are ongoing efforts by the\nbitsandbytes team to support multiple backends in addition to CUDA. If\nyou receive errors related to bitsandbytes, please refer to the\n[bitsandbytes\ndocumentation](https://huggingface.co/docs/bitsandbytes/main/en/installation).\n\n### Connecting to LLMs Served Through REST APIs\n\n**OnPrem.LLM** can be used with LLMs being served through any\nOpenAI-compatible REST API. This means you can easily use **OnPrem.LLM**\nwith tools like [vLLM](https://github.com/vllm-project/vllm),\n[OpenLLM](https://github.com/bentoml/OpenLLM),\n[Ollama](https://ollama.com/blog/openai-compatibility), and the\n[llama.cpp\nserver](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md).\n\n#### vLLM Example\n\nFor instance, using [vLLM](https://github.com/vllm-project/vllm), you\ncan serve an LLM as follows (replace the `--model` argument with model\nyou want to use):\n\n``` sh\npython -m vllm.entrypoints.openai.api_server --model Qwen/Qwen2.5-0.5B-Instruct --dtype auto --api-key token-abc123\n```\n\nYou can then connect OnPrem.LLM to the LLM by supplying the URL of the\nserver you just started:\n\n``` python\nfrom onprem import LLM\nllm = LLM(model_url='http://localhost:8000/v1', api_key='token-abc123', model='Qwen/Qwen2.5-0.5B-Instruct') \n# Note: The API key can either be supplied directly or stored in the OPENAI_API_KEY environment variable.\n#       If the server does not require an API key, `api_key` should still be supplied with a dummy value like 'na'.\n#       The model argument must exactly match what was supplied when starting the vLLM server.\n```\n\nThat\u2019s it! Solve problems with **OnPrem.LLM** as you normally would\n(e.g., RAG question-answering, summarization, few-shot prompting, code\ngeneration, etc.).\n\n#### Ollama Example\n\nAfter [downloading and installing Ollama](https://ollama.com/) and\npulling a model (eg., `ollama pull llama3.2`), you can use it in\nOnPrem.LLM as follows:\n\n``` python\nfrom onprem import LLM\nllm = LLM(model_url='http://localhost:11434/v1', api_key='NA', model='llama3.2')\noutput = llm.prompt('What is the capital of France?')\n\n# OUTPUT:\n# The capital of France is Paris.\n```\n\nIf using OnPrem.LLM with Ollama or vLLM, then `llama-cpp-python` does\n**not** need to be installed.\n\n### Using OpenAI Models with OnPrem.LLM\n\nEven when using on-premises language models, it can sometimes be useful\nto have easy access to non-local, cloud-based models (e.g., OpenAI) for\ntesting, producing baselines for comparison, and generating synthetic\nexamples for fine-tuning. 
For these reasons, in spite of the name,\n**OnPrem.LLM** now includes support for OpenAI chat models:\n\n``` python\nfrom onprem import LLM\nllm = LLM(model_url='openai://gpt-4o', temperature=0)\n```\n\n    /home/amaiya/projects/ghub/onprem/onprem/core.py:196: UserWarning: The model you supplied is gpt-4o, an external service (i.e., not on-premises). Use with caution, as your data and prompts will be sent externally.\n      warnings.warn(f'The model you supplied is {self.model_name}, an external service (i.e., not on-premises). '+\\\n\nThis OpenAI [`LLM`](https://amaiya.github.io/onprem/llm.base.html#llm)\ninstance can now be used for most features in OnPrem.LLM (e.g., RAG,\ninformation extraction, summarization, etc.). Here we simply use it for\ngeneral prompting:\n\n``` python\nsaved_result = llm.prompt('List three cute  names for a cat and explain why each is cute.')\n```\n\n    Certainly! Here are three cute names for a cat, along with explanations for why each is adorable:\n\n    1. **Whiskers**: This name is cute because it highlights one of the most distinctive and charming features of a cat\u2014their whiskers. It's playful and endearing, evoking the image of a curious cat twitching its whiskers as it explores its surroundings.\n\n    2. **Mittens**: This name is cute because it conjures up the image of a cat with little white paws that look like they are wearing mittens. It's a cozy and affectionate name that suggests warmth and cuddliness, much like a pair of soft mittens.\n\n    3. **Pumpkin**: This name is cute because it brings to mind the warm, orange hues of a pumpkin, which can be reminiscent of certain cat fur colors. It's also associated with the fall season, which is often linked to comfort and coziness. Plus, the name \"Pumpkin\" has a sweet and affectionate ring to it, making it perfect for a beloved pet.\n\n**Using Vision Capabilities in GPT-4o**\n\n``` python\nimage_url = \"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg\"\nsaved_result = llm.prompt('Describe the weather in this image.', image_path_or_url=image_url)\n```\n\n    The weather in the image appears to be clear and sunny. The sky is mostly blue with some scattered clouds, suggesting a pleasant day with good visibility. The sunlight is bright, illuminating the green grass and landscape.\n\n**Using OpenAI-Style Message Dictionaries**\n\n``` python\nmessages = [\n    {'content': [{'text': 'describe the weather in this image', \n                  'type': 'text'},\n                 {'image_url': {'url': image_url},\n                  'type': 'image_url'}],\n     'role': 'user'}]\nsaved_result = llm.prompt(messages)\n```\n\n    The weather in the image appears to be clear and sunny. The sky is mostly blue with some scattered clouds, suggesting a pleasant day with good visibility. The sunlight is bright, casting clear shadows and illuminating the green landscape.\n\n**Azure OpenAI**\n\nFor Azure OpenAI models, use the following URL format:\n\n``` python\nllm = LLM(model_url='azure://<deployment_name>', ...) 
\n# <deployment_name> is the Azure deployment name and additional Azure-specific parameters \n# can be supplied as extra arguments to LLM (or set as environment variables)\n```\n\n### Structured and Guided Outputs\n\nThe\n[`LLM.pydantic_prompt`](https://amaiya.github.io/onprem/llm.base.html#llm.pydantic_prompt)\nmethod allows you to specify the desired structure of the LLM\u2019s output\nas a Pydantic model.\n\n``` python\nfrom pydantic import BaseModel, Field\n\nclass Joke(BaseModel):\n    setup: str = Field(description=\"question to set up a joke\")\n    punchline: str = Field(description=\"answer to resolve the joke\")\n\nfrom onprem import LLM\nllm = LLM(default_model='llama', verbose=False)\nstructured_output = llm.pydantic_prompt('Tell me a joke.', pydantic_model=Joke)\n```\n\n    llama_new_context_with_model: n_ctx_per_seq (3904) < n_ctx_train (131072) -- the full capacity of the model will not be utilized\n\n    {\n      \"setup\": \"Why couldn't the bicycle stand alone?\",\n      \"punchline\": \"Because it was two-tired!\"\n    }\n\nThe output is a Pydantic object instead of a string:\n\n``` python\nstructured_output\n```\n\n    Joke(setup=\"Why couldn't the bicycle stand alone?\", punchline='Because it was two-tired!')\n\n``` python\nprint(structured_output.setup)\nprint()\nprint(structured_output.punchline)\n```\n\n    Why couldn't the bicycle stand alone?\n\n    Because it was two-tired!\n\nYou can also use **OnPrem.LLM** with the\n[Guidance](https://github.com/guidance-ai/guidance) package to guide the\nLLM to generate outputs based on your conditions and constraints. We\u2019ll\nshow a couple of examples here, but see [our documentation on guided\nprompts](https://amaiya.github.io/onprem/examples_guided_prompts.html)\nfor more information.\n\n``` python\nfrom onprem import LLM\n\nllm = LLM(n_gpu_layers=-1, verbose=False)\nfrom onprem.pipelines.guider import Guider\nguider = Guider(llm)\n```\n\nWith the Guider, you can use use Regular Expressions to control LLM\ngeneration:\n\n``` python\nprompt = f\"\"\"Question: Luke has ten balls. He gives three to his brother. 
How many balls does he have left?\nAnswer: \"\"\" + gen(name='answer', regex='\\d+')\n\nguider.prompt(prompt, echo=False)\n```\n\n    {'answer': '7'}\n\n``` python\nprompt = '19, 18,' + gen(name='output', max_tokens=50, stop_regex='[^\\d]7[^\\d]')\nguider.prompt(prompt)\n```\n\n<pre style='margin: 0px; padding: 0px; padding-left: 8px; margin-left: -8px; border-radius: 0px; border-left: 1px solid rgba(127, 127, 127, 0.2); white-space: pre-wrap; font-family: ColfaxAI, Arial; font-size: 15px; line-height: 23px;'>19, 18<span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>7</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>6</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>5</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>4</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>3</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>2</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 1</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>0</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 9</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'> 8</span><span style='background-color: rgba(0, 165, 0, 0.15); border-radius: 3px;' title='0.0'>,</span></pre>\n\n    {'output': ' 17, 16, 15, 14, 13, 12, 11, 10, 9, 8,'}\n\nSee [the\ndocumentation](https://amaiya.github.io/onprem/examples_guided_prompts.html)\nfor more examples of how to use\n[Guidance](https://github.com/guidance-ai/guidance) with 
**OnPrem.LLM**.\n\n## Solving Tasks With Agents\n\n``` python\nfrom onprem import LLM\nfrom onprem.pipelines import Agent\nllm = LLM('openai/gpt-4o-mini', mute_stream=True) \nagent = Agent(llm)\nagent.add_webview_tool()\nanswer = agent.run(\"What is the highest level of education of the person listed on this page: https://arun.maiya.net?\")\n# ANSWER: Ph.D. in Computer Science\n```\n\nSee the **[example notebook on\nagents](https://amaiya.github.io/onprem/examples_agent.html)** for more\ninformation\n\n## Built-In Web App\n\n**OnPrem.LLM** includes a built-in Web app to access the LLM. To start\nit, run the following command after installation:\n\n``` shell\nonprem --port 8000\n```\n\nThen, enter `localhost:8000` (or `<domain_name>:8000` if running on\nremote server) in a Web browser to access the application:\n\n<img src=\"https://raw.githubusercontent.com/amaiya/onprem/master/images/onprem_welcome.png\" border=\"1\" alt=\"screenshot\" width=\"775\"/>\n\nFor more information, [see the corresponding\ndocumentation](https://amaiya.github.io/onprem/webapp.html).\n\n## Examples\n\nThe [documentation](https://amaiya.github.io/onprem/) includes many\nexamples, including:\n\n- [Prompts for\n  Problem-Solving](https://amaiya.github.io/onprem/examples.html)\n- [RAG Example](https://amaiya.github.io/onprem/examples_rag.html)\n- [Code Generation](https://amaiya.github.io/onprem/examples_code.html)\n- [Semantic\n  Similarity](https://amaiya.github.io/onprem/examples_semantic.html)\n- [Document\n  Summarization](https://amaiya.github.io/onprem/examples_summarization.html)\n- [Information\n  Extraction](https://amaiya.github.io/onprem/examples_information_extraction.html)\n- [Text\n  Classification](https://amaiya.github.io/onprem/examples_classification.html)\n- [Agent-Based Task\n  Execution](https://amaiya.github.io/onprem/examples_agent.html)\n- [Audo-Coding Survey\n  Responses](https://amaiya.github.io/onprem/examples_qualitative_survey_analysis.html)\n- [Legal and Regulatory\n  Analysis](https://amaiya.github.io/onprem/examples_legal_analysis.html)\n\n## FAQ\n\n1.  **How do I use other models with OnPrem.LLM?**\n\n    > You can supply any model of your choice using the `model_url` and\n    > `model_id` parameters to `LLM` (see cheat sheet above).\n\n    > Here, we will go into detail on how to supply a custom GGUF model\n    > using the llma.cpp backend.\n\n    > You can find llama.cpp-supported models with `GGUF` in the file\n    > name on\n    > [huggingface.co](https://huggingface.co/models?sort=trending&search=gguf).\n\n    > Make sure you are pointing to the URL of the actual GGUF model\n    > file, which is the \u201cdownload\u201d link on the model\u2019s page. An example\n    > for **Mistral-7B** is shown below:\n\n    > <img src=\"https://raw.githubusercontent.com/amaiya/onprem/master/images/model_download_link.png\" border=\"1\" alt=\"screenshot\" width=\"775\"/>\n\n    > When using the llama.cpp backend, GGUF models have specific prompt\n    > formats that need to supplied to `LLM`. 
For instance, the prompt\n    > template required for **Zephyr-7B**, as described on the [model\u2019s\n    > page](https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF), is:\n    >\n    > `<|system|>\\n</s>\\n<|user|>\\n{prompt}</s>\\n<|assistant|>`\n    >\n    > So, to use the **Zephyr-7B** model, you must supply the\n    > `prompt_template` argument to the `LLM` constructor (or specify it\n    > in the `webapp.yml` configuration for the Web app).\n    >\n    > ``` python\n    > # how to use Zephyr-7B with OnPrem.LLM\n    > llm = LLM(model_url='https://huggingface.co/TheBloke/zephyr-7B-beta-GGUF/resolve/main/zephyr-7b-beta.Q4_K_M.gguf',\n    >           prompt_template = \"<|system|>\\n</s>\\n<|user|>\\n{prompt}</s>\\n<|assistant|>\",\n    >           n_gpu_layers=33)\n    > llm.prompt(\"List three cute names for a cat.\")\n    > ```\n\n    > Prompt templates are **not** required for any other LLM backend\n    > (e.g., when using Ollama as backend or when using `model_id`\n    > parameter for transformers models). Prompt templates are also not\n    > required if using any of the default models.\n\n2.  **When installing `onprem`, I\u2019m getting \u201cbuild\u201d errors related to\n    `llama-cpp-python` (or `chroma-hnswlib`) on Windows/Mac/Linux?**\n\n    > See [this LangChain documentation on\n    > LLama.cpp](https://python.langchain.com/docs/integrations/llms/llamacpp)\n    > for help on installing the `llama-cpp-python` package for your\n    > system. Additional tips for different operating systems are shown\n    > below:\n\n    > For **Linux** systems like Ubuntu, try this:\n    > `sudo apt-get install build-essential g++ clang`. Other tips are\n    > [here](https://github.com/oobabooga/text-generation-webui/issues/1534).\n\n    > For **Windows** systems, please try following [these\n    > instructions](https://github.com/amaiya/onprem/blob/master/MSWindows.md).\n    > We recommend you use [Windows Subsystem for Linux\n    > (WSL)](https://learn.microsoft.com/en-us/windows/wsl/install)\n    > instead of using Microsoft Windows directly. If you do need to use\n    > Microsoft Window directly, be sure to install the [Microsoft C++\n    > Build\n    > Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/)\n    > and make sure the **Desktop development with C++** is selected.\n\n    > For **Macs**, try following [these\n    > tips](https://github.com/imartinez/privateGPT/issues/445#issuecomment-1563333950).\n\n    > There are also various other tips for each of the above OSes in\n    > [this privateGPT repo\n    > thread](https://github.com/imartinez/privateGPT/issues/445). Of\n    > course, you can also [easily\n    > use](https://colab.research.google.com/drive/1LVeacsQ9dmE1BVzwR3eTLukpeRIMmUqi?usp=sharing)\n    > **OnPrem.LLM** on Google Colab.\n\n    > Finally, if you still can\u2019t overcome issues with building\n    > `llama-cpp-python`, you can try [installing the pre-built wheel\n    > file](https://abetlen.github.io/llama-cpp-python/whl/cpu/llama-cpp-python/)\n    > for your system:\n\n    > **Example:**\n    > `pip install llama-cpp-python==0.2.90 --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu`\n    >\n    > **Tip:** There are [pre-built wheel files for\n    > `chroma-hnswlib`](https://pypi.org/project/chroma-hnswlib/#files),\n    > as well. 
If running `pip install onprem` fails on building\n    > `chroma-hnswlib`, it may be because a pre-built wheel doesn\u2019t yet\n    > exist for the version of Python you\u2019re using (in which case you\n    > can try downgrading Python).\n\n3.  **I\u2019m behind a corporate firewall and am receiving an SSL error when\n    trying to download the model?**\n\n    > Try this:\n    >\n    > ``` python\n    > from onprem import LLM\n    > LLM.download_model(url, ssl_verify=False)\n    > ```\n\n    > You can download the embedding model (used by `LLM.ingest` and\n    > `LLM.ask`) as follows:\n    >\n    > ``` sh\n    > wget --no-check-certificate https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/v0.2/all-MiniLM-L6-v2.zip\n    > ```\n\n    > Supply the unzipped folder name as the `embedding_model_name`\n    > argument to `LLM`.\n\n    > If you\u2019re getting SSL errors when even running `pip install`, try\n    > this:\n    >\n    > ``` sh\n    > pip install \u2013-trusted-host pypi.org \u2013-trusted-host files.pythonhosted.org pip_system_certs\n    > ```\n\n4.  **How do I use this on a machine with no internet access?**\n\n    > Use the `LLM.download_model` method to download the model files to\n    > `<your_home_directory>/onprem_data` and transfer them to the same\n    > location on the air-gapped machine.\n\n    > For the `ingest` and `ask` methods, you will need to also download\n    > and transfer the embedding model files:\n    >\n    > ``` python\n    > from sentence_transformers import SentenceTransformer\n    > model = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2')\n    > model.save('/some/folder')\n    > ```\n\n    > Copy the `some/folder` folder to the air-gapped machine and supply\n    > the path to `LLM` via the `embedding_model_name` parameter.\n\n5.  **My model is not loading when I call `llm = LLM(...)`?**\n\n    > This can happen if the model file is corrupt (in which case you\n    > should delete from `<home directory>/onprem_data` and\n    > re-download). It can also happen if the version of\n    > `llama-cpp-python` needs to be upgraded to the latest.\n\n6.  **I\u2019m getting an `\u201cIllegal instruction (core dumped)` error when\n    instantiating a `langchain.llms.Llamacpp` or `onprem.LLM` object?**\n\n    > Your CPU may not support instructions that `cmake` is using for\n    > one reason or another (e.g., [due to Hyper-V in VirtualBox\n    > settings](https://stackoverflow.com/questions/65780506/how-to-enable-avx-avx2-in-virtualbox-6-1-16-with-ubuntu-20-04-64bit)).\n    > You can try turning them off when building and installing\n    > `llama-cpp-python`:\n\n    > ``` sh\n    > # example\n    > CMAKE_ARGS=\"-DGGML_CUDA=ON -DGGML_AVX2=OFF -DGGML_AVX=OFF -DGGML_F16C=OFF -DGGML_FMA=OFF\" FORCE_CMAKE=1 pip install --force-reinstall llama-cpp-python --no-cache-dir\n    > ```\n\n7.  **How can I speed up\n    [`LLM.ingest`](https://amaiya.github.io/onprem/llm.base.html#llm.ingest)?**\n\n    > By default, a GPU, if available, will be used to compute\n    > embeddings, so ensure PyTorch is installed with GPU support. 
You\n    > can explicitly control the device used for computing embeddings\n    > with the `embedding_model_kwargs` argument.\n    >\n    > ``` python\n    > from onprem import LLM\n    > llm  = LLM(embedding_model_kwargs={'device':'cuda'})\n    > ```\n\n    > You can also supply `store_type=\"sparse\"` to `LLM` to use a sparse\n    > vector store, which sacrifices a small amount of inference speed\n    > (`LLM.ask`) for significant speed ups during ingestion\n    > (`LLM.ingest`).\n    >\n    > ``` python\n    > from onprem import LLM\n    > llm  = LLM(store_type=\"sparse\")\n    > ```\n    >\n    > Note, however, that, unlike dense vector stores, sparse vector\n    > stores assume answer sources will contain at least one word in\n    > common with the question.\n\n<!--\n8. **What are ways in which OnPrem.LLM has been used?**\n    > Examples include:\n    > - extracting key performance parameters and other performance attributes from engineering documents\n    > - auto-coding responses to government requests for information (RFIs)\n    > - analyzing the Federal Aquisition Regulations (FAR)\n    > - understanding where and how Executive Order 14028 on cybersecurity aligns with the National Cybersecurity Strategy\n    > - generating a summary of ways to improve a course from thousdands of reviews\n    > - extracting specific information of interest from resumes for talent acquisition.\n&#10;-->\n\n## How to Cite\n\nPlease cite the [following paper](https://arxiv.org/abs/2505.07672) when\nusing **OnPrem.LLM**:\n\n    @article{maiya2025onprem,\n          title={OnPrem.LLM: A Privacy-Conscious Document Intelligence Toolkit}, \n          author={Arun S. Maiya},\n          year={2025},\n          eprint={2505.07672},\n          archivePrefix={arXiv},\n          primaryClass={cs.CL},\n          url={https://arxiv.org/abs/2505.07672}, \n    }\n",
    "bugtrack_url": null,
    "license": "Apache Software License 2.0",
    "summary": "A tool for running on-premises large language models on non-public data",
    "version": "0.17.1",
    "project_urls": {
        "Homepage": "https://github.com/amaiya/onprem"
    },
    "split_keywords": [
        "nbdev",
        "jupyter",
        "notebook",
        "python"
    ],
    "urls": [
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "e7c326aac87ac7ba73f7e0d97980b78dfdf68ec9364138bc9d52d126c97971f3",
                "md5": "96393553966390fd65a046cb7286f017",
                "sha256": "8255d5f7148612c5324e92130b96d1cf9ad1d2e148e2e762f7ae2d188f143d2a"
            },
            "downloads": -1,
            "filename": "onprem-0.17.1-py3-none-any.whl",
            "has_sig": false,
            "md5_digest": "96393553966390fd65a046cb7286f017",
            "packagetype": "bdist_wheel",
            "python_version": "py3",
            "requires_python": ">=3.9",
            "size": 191075,
            "upload_time": "2025-07-30T01:04:22",
            "upload_time_iso_8601": "2025-07-30T01:04:22.581315Z",
            "url": "https://files.pythonhosted.org/packages/e7/c3/26aac87ac7ba73f7e0d97980b78dfdf68ec9364138bc9d52d126c97971f3/onprem-0.17.1-py3-none-any.whl",
            "yanked": false,
            "yanked_reason": null
        },
        {
            "comment_text": null,
            "digests": {
                "blake2b_256": "6f42b4fcec6455dd8b20e5abdc7d980beb2adb109cfe4a3783f6a6e4b2389bb6",
                "md5": "05db68219816fb2c1ba5230ea84084c3",
                "sha256": "9a97bf6e6f3e9969ef81733d18e98f1cad9b26da819199ca438bead7ddfbf99d"
            },
            "downloads": -1,
            "filename": "onprem-0.17.1.tar.gz",
            "has_sig": false,
            "md5_digest": "05db68219816fb2c1ba5230ea84084c3",
            "packagetype": "sdist",
            "python_version": "source",
            "requires_python": ">=3.9",
            "size": 196373,
            "upload_time": "2025-07-30T01:04:24",
            "upload_time_iso_8601": "2025-07-30T01:04:24.091904Z",
            "url": "https://files.pythonhosted.org/packages/6f/42/b4fcec6455dd8b20e5abdc7d980beb2adb109cfe4a3783f6a6e4b2389bb6/onprem-0.17.1.tar.gz",
            "yanked": false,
            "yanked_reason": null
        }
    ],
    "upload_time": "2025-07-30 01:04:24",
    "github": true,
    "gitlab": false,
    "bitbucket": false,
    "codeberg": false,
    "github_user": "amaiya",
    "github_project": "onprem",
    "travis_ci": false,
    "coveralls": false,
    "github_actions": true,
    "lcname": "onprem"
}
        