bitnet

Name: bitnet
Version: 0.2.5
Home page: https://github.com/kyegomez/bitnet
Summary: bitnet - Pytorch
Upload time: 2024-04-28 04:23:26
Maintainer: None
Docs URL: None
Author: Kye Gomez
Requires Python: <4.0,>=3.10
License: MIT
Keywords: artificial intelligence, deep learning, optimizers, prompt engineering
Requirements: No requirements were recorded.
            [![Multi-Modality](agorabanner.png)](https://discord.gg/qUtxnK2NMf)

# BitNet
![bitnet](/bitnet.png)
PyTorch implementation of the BitLinear layers and model from the paper "BitNet: Scaling 1-bit Transformers for Large Language Models"

[Paper link](https://arxiv.org/pdf/2310.11453.pdf)

BitLinear = tensor -> LayerNorm -> binarize -> absmax quantization -> dequantize

"The implementation of the BitNet architecture is quite simple, requiring only the replacement of linear projections (i.e., nn.Linear in PyTorch) in the Transformer." -- BitNet is easy to implement: just swap out the linear layers for BitLinear modules!

## **NEWS**
- **New Iteration** 🔥 There is an all-new iteration from the paper "[The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits](https://arxiv.org/abs/2402.17764)", and we're implementing it now. Join the Agora discord and contribute! [Join Here](https://discord.gg/hFzevCjG8c)
- **New Optimizations** The first `BitLinear` has been optimized, and we now have a Bit Attention module, `BitMGQA`, that brings BitLinear into the attention mechanism. Multi-grouped query attention is also widely valued for its fast decoding and long-context handling; thanks to Frank for his easy-to-use implementation!
- **BitLinear 1.5 Launch 🔥**: The new BitLinear 1.5 is still in progress 🔥 [Here is the file]() There are still some bugs, for example in the dequantization algorithm, and we still need to replace the multiplication with elementwise addition; if you could help with this, it would be amazing.
- **NOTICE**: A model needs to be fine-tuned or trained from scratch to use BitLinear; simply swapping the linear layers in an already trained model will not work. Fine-tune or train from scratch (see the sketch below).
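As a minimal sketch of what that retraining might look like, the snippet below swaps the linear layers of a toy model with `replace_linears_in_pytorch_model` (shown later in this README) and runs a single training step. The model, random data, and hyperparameters are illustrative assumptions, not a recommended recipe.

```python
import torch
from torch import nn

from bitnet import replace_linears_in_pytorch_model

# Toy classifier (hypothetical); swap its nn.Linear layers for BitLinear, then train.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
replace_linears_in_pytorch_model(model)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random data; real usage would loop over a dataset.
x, y = torch.randn(8, 32), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(loss.item())
```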

## Appreciation
- Dimitry and Nullonix, for analysis, code review, and revision
- Vyom, for providing a 4080 to train on!

## Installation
`pip install bitnet`

## Usage

### `BitLinear`
- Example of the BitLinear layer, the main innovation of the paper.
```python
import torch

from bitnet import BitLinear

# Input
x = torch.randn(10, 1000, 512)

# BitLinear layer
layer = BitLinear(512, 400)

# Output
y = layer(x)

print(y)

```

### `BitLinearNew`
```python
import torch
from bitnet import BitLinearNew

# Create a random tensor of shape (16, 1000, 512)
x = torch.randn(16, 1000, 512)

# Create an instance of the BitLinearNew class with input size 512 and output size 20
layer = BitLinearNew(
    512,
    20,
)

# Perform a forward pass through the BitLinearNew layer with input x
output = layer(x)

# Print the output tensor
print(output)
print(output.shape)
```
----

### `BitNetTransformer`
- Fully implemented Transformer as described in the diagram, with multi-head attention and BitFeedForward blocks
- Can be utilized not just for text but also for images, and possibly even video or audio processing
- Complete with residuals and skip connections for gradient flow

```python
# Import the necessary libraries
import torch
from bitnet import BitNetTransformer

# Create a random tensor of integers
x = torch.randint(0, 20000, (1, 1024))

# Initialize the BitNetTransformer model
bitnet = BitNetTransformer(
    num_tokens=20000,  # Number of unique tokens in the input
    dim=1024,  # Dimension of the input and output embeddings
    depth=6,  # Number of transformer layers
    heads=8,  # Number of attention heads
    ff_mult=4,  # Multiplier for the hidden dimension in the feed-forward network
)

# Pass the tensor through the transformer model
logits = bitnet(x)

# Print the output logits
print(logits)

```


### `BitAttention`
This attention module has been modified to use BitLinear instead of the default linear projections. It also uses multi-grouped query attention instead of regular multi-head attention, for faster decoding and longer context handling.

```python
import torch
from bitnet import BitMGQA

# Create a random tensor of shape (1, 10, 512)
x = torch.randn(1, 10, 512)

# Create an instance of the BitMGQA model with input size 512, 8 attention heads, and 4 layers
gqa = BitMGQA(512, 8, 4)

# Pass the input tensor through the BitMGQA model and get the output and attention weights
out, _ = gqa(x, x, x, need_weights=True)

# Print the output tensor
print(out)
```

### `BitFeedForward`
- Feedforward as shown in the diagram, built from BitLinear layers with a GELU:
- Linear -> GELU -> Linear
- You can add dropout, layer norm, or other layers for a better FFN

```python
import torch
from bitnet import BitFeedForward

# Create a random input tensor of shape (10, 512)
x = torch.randn(10, 512)

# Create an instance of the BitFeedForward class with the following parameters:
# - input_dim: 512
# - hidden_dim: 512
# - num_layers: 4
# - swish: True (use Swish activation function)
# - post_act_ln: True (apply Layer Normalization after each activation)
# - dropout: 0.1 (apply dropout with a probability of 0.1)
ff = BitFeedForward(512, 512, 4, swish=True, post_act_ln=True, dropout=0.1)

# Apply the BitFeedForward network to the input tensor x
y = ff(x)

# Print the shape of the output tensor y
print(y.shape)  # torch.Size([10, 512])
```

## Inference
```python
from bitnet import BitNetInference

bitnet = BitNetInference()
bitnet.load_model("../model_checkpoint.pth")  # Load a downloaded model checkpoint
output_str = bitnet.generate("The dog jumped over the ", 512)
print(output_str)
```

## Huggingface Usage
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

from bitnet import replace_linears_in_hf

# Load a model from Hugging Face's Transformers
model_name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

# Replace Linear layers with BitLinear
replace_linears_in_hf(model)

# Example text to classify
text = "Replace this with your text"
inputs = tokenizer(
    text, return_tensors="pt", padding=True, truncation=True, max_length=512
)

# Perform inference
model.eval()  # Set the model to evaluation mode
with torch.no_grad():
    outputs = model(**inputs)
    predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
    print(predictions)

# Process predictions
predicted_class_id = predictions.argmax().item()
print(f"Predicted class ID: {predicted_class_id}")

# Optionally, map the predicted class ID to a label, if you know the classification labels
# labels = ["Label 1", "Label 2", ...]  # Define your labels corresponding to the model's classes
# print(f"Predicted label: {labels[predicted_class_id]}")
```


## Drop-in Replacement for PyTorch Models
```python
import torch
from torch import nn
from bitnet import replace_linears_in_pytorch_model

# Define a simple model
model = nn.Sequential(
    nn.Linear(10, 20),
    nn.ReLU(),
    nn.Linear(20, 30),
)

print("Before replacement:")
print(model)

# Replace nn.Linear with BitLinear
replace_linears_in_pytorch_model(model)

print("After replacement:")
print(model)

# Now you can use the model for training or inference
# For example, pass a random input through the model
input = torch.randn(1, 10)
output = model(input)
```


### Optimized CUDA Kernel
Build the extension in place, then call the low-bit GEMM kernel from Python:

`python setup.py build_ext --inplace`

```python
import torch
import gemm_lowbit_ext  # This imports the compiled module

# Example usage
a = torch.randn(10, 20, dtype=torch.half, device='cuda')  # Example tensor
b = torch.randn(20, 30, dtype=torch.half, device='cuda')  # Example tensor
c = torch.empty(10, 30, dtype=torch.half, device='cuda')  # Output tensor

w_scale = 1.0  # Example scale factor
x_scale = 1.0  # Example scale factor

# Call the custom CUDA GEMM operation
gemm_lowbit_ext.gemm_lowbit(a, b, c, w_scale, x_scale)

print(c)  # View the result

```


## `BitLora`
Implementation of BitLora!

```python
import torch
from bitnet import BitLora

# Random text tensor
x = torch.randn(1, 12, 200)

# Create an instance of the BitLora model
model = BitLora(in_features=200, out_features=200, rank=4, lora_alpha=1)

# Perform the forward pass
out = model(x)

# Print the shape of the output tensor
print(out.shape)
```


## `BitMamba`
```python
import torch
from bitnet import BitMamba

# Create a tensor of shape (2, 10) with random token IDs between 0 and 100
x = torch.randint(0, 100, (2, 10))

# Create an instance of the BitMamba model with input size 512, hidden size 100, output size 10, and depth 6
model = BitMamba(512, 100, 10, 6, return_tokens=True)

# Pass the input tensor through the model and get the output
output = model(x)

# Print the output tensor
print(output)

# Print the shape of the output tensor
print(output.shape)

```

## `BitMoE`

```python
import torch
from bitnet.bit_moe import BitMoE

# Create input tensor
x = torch.randn(2, 4, 8)

# Create BitMoE model with specified input and output dimensions
model = BitMoE(8, 4, 2)

# Forward pass through the model
output = model(x)

# Print the output
print(output)
```

# License
MIT

# Citation
```bibtex
@misc{2310.11453,
Author = {Hongyu Wang and Shuming Ma and Li Dong and Shaohan Huang and Huaijie Wang and Lingxiao Ma and Fan Yang and Ruiping Wang and Yi Wu and Furu Wei},
Title = {BitNet: Scaling 1-bit Transformers for Large Language Models},
Year = {2023},
Eprint = {arXiv:2310.11453},
}

```


# Todo
- [x] Double check BitLinear implementation and make sure it works exactly as in paper 
- [x] Implement training script for `BitNetTransformer`
- [x] Train on Enwik8; copy and paste code and data from Lucidrains' repos
- [x] Benchmark performance
- [x] Look into Straight Through Estimator for non-differentiable backprop
- [x] Implement BitFeedForward
- [x] Clean up codebase 
- [x] Add unit tests for each module
- [x] Implement the new BitNet b1.58 from the [paper](https://arxiv.org/abs/2402.17764)
- [ ] Implement BitNet b1.58 in CUDA
            
