| Name | byllm |
| Version | 0.4.5 |
| Summary | byLLM provides easy-to-use APIs for different LLM providers to be used with Jaseci's Jaclang programming language. |
| Upload time | 2025-10-20 18:44:19 |
| Maintainer | Jason Mars |
| Author | Jason Mars |
| License | MIT |
| Keywords | llm, jaclang, jaseci, byllm |
<div align="center">
<img src="../docs/docs/assets/byLLM_name_logo.png" height="150">
[About byLLM] | [Get started] | [Usage docs] | [Research Paper]
</div>
[About byLLM]: https://www.jac-lang.org/learn/jac-byllm/with_llm/
[Get started]: https://www.jac-lang.org/learn/jac-byllm/quickstart/
[Usage docs]: https://www.jac-lang.org/learn/jac-byllm/usage/
[Research Paper]: https://arxiv.org/abs/2405.08965
# byLLM : Less Prompting! More Coding!
[PyPI](https://pypi.org/project/byllm/) [Tests](https://github.com/jaseci-labs/jaseci/actions/workflows/test-jaseci.yml)
byLLM is an innovative AI integration framework built for the Jaseci ecosystem, implementing the cutting-edge Meaning Typed Programming (MTP) paradigm. MTP revolutionizes AI integration by embedding prompt engineering directly into code semantics, making AI interactions more natural and maintainable. While primarily designed to complement the Jac programming language, byLLM also provides a powerful Python library interface.
Installation is simple via PyPI:
```bash
pip install byllm
```
## Basic Example
Consider building an application that translates English into other languages using an LLM. It can be built as simply as this:
```python
import from byllm.lib { Model }
glob llm = Model(model_name="gpt-4o");
def translate_to(language: str, phrase: str) -> str by llm();
with entry {
output = translate_to(language="Welsh", phrase="Hello world");
print(output);
}
```
This simple piece of code replaces traditional prompt engineering without introducing additional complexity.
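For contrast, the prompt plumbing that `by llm()` replaces might look like the following plain-Python sketch (the function name and prompt wording are illustrative, not part of byLLM):

```python
# Without byLLM, the same task needs hand-written prompt assembly.
def build_translation_prompt(language: str, phrase: str) -> str:
    """Assemble a one-shot translation prompt by hand."""
    return (
        "You are a translator.\n"
        f"Translate the following phrase into {language}.\n"
        f"Phrase: {phrase}\n"
        "Reply with only the translation."
    )

print(build_translation_prompt("Welsh", "Hello world"))
```

With byLLM, this boilerplate disappears: the function signature itself carries the meaning the LLM needs.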
## Power of Types with LLMs
Consider a program that detects the personality type of a historical figure from their name. This can be built so that the LLM picks from an enum and the output strictly adheres to that type.
```python
import from byllm.lib { Model }
glob llm = Model(model_name="gemini/gemini-2.0-flash");
enum Personality {
INTROVERT, EXTROVERT, AMBIVERT
}
def get_personality(name: str) -> Personality by llm();
with entry {
name = "Albert Einstein";
result = get_personality(name);
print(f"{result} personality detected for {name}");
}
```
> Similarly, custom types can be used as output types which force the LLM to adhere to the specified type and produce a valid result.
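byLLM performs this kind of type coercion internally. As a rough plain-Python illustration of the guarantee (the helper below is hypothetical, not byLLM's API):

```python
from enum import Enum

class Personality(Enum):
    INTROVERT = "INTROVERT"
    EXTROVERT = "EXTROVERT"
    AMBIVERT = "AMBIVERT"

def coerce_to_enum(raw: str) -> Personality:
    # Normalize and validate a raw LLM string against the enum;
    # anything outside the allowed members is rejected.
    try:
        return Personality[raw.strip().upper()]
    except KeyError:
        raise ValueError(f"LLM output {raw!r} is not a valid Personality")

print(coerce_to_enum("introvert"))  # Personality.INTROVERT
```

The point is that the caller never sees a free-form string: the declared return type is enforced, so downstream code can rely on it.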
## Control! Control! Control!
Even though prompt engineering is eliminated entirely, byLLM provides specific ways to enrich code semantics through **docstrings** and **semstrings**.
```python
"""Represents the personal record of a person"""
obj Person {
has name: str;
has dob: str;
has ssn: str;
}
sem Person.name = "Full name of the person";
sem Person.dob = "Date of Birth";
sem Person.ssn = "Last four digits of the Social Security Number of a person";
"""Calculate eligibility for various services based on person's data."""
def check_eligibility(person: Person, service_type: str) -> bool by llm();
```
Docstrings naturally enhance the semantics of their associated code constructs, while the `sem` keyword provides an elegant way to enrich the meaning of class attributes and function arguments. Our research shows these concise semantic strings are more effective than traditional multi-line prompts.
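As a loose plain-Python analogue, the role of `sem` strings can be pictured as per-field metadata attached to a record (this is an illustration of the idea, not how byLLM is implemented):

```python
from dataclasses import dataclass, field, fields

@dataclass
class Person:
    name: str = field(metadata={"sem": "Full name of the person"})
    dob: str = field(metadata={"sem": "Date of Birth"})
    ssn: str = field(metadata={"sem": "Last four digits of the Social Security Number"})

def semantic_summary(cls) -> dict:
    # Collect each field's semantic string, similar in spirit to `sem`.
    return {f.name: f.metadata.get("sem", "") for f in fields(cls)}

print(semantic_summary(Person))
```

In byLLM these short semantic strings travel with the type and are surfaced to the model automatically, instead of being restated in every prompt.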
## How well does byLLM work?
byLLM is built on the underlying principle of Meaning Typed Programming, and we have evaluated it against two comparable AI integration frameworks for Python, DSPy and LMQL. byLLM shows a significant accuracy gain over LMQL and on-par or better accuracy than DSPy, at lower cost and with faster runtime.
<div align="center">
<img src="../docs/docs/assets/correctness_comparison.png" alt="Correctness Comparison" width="600" style="max-width: 100%;">
<br>
<em>Figure: Correctness comparison of byLLM with DSPy and LMQL on benchmark tasks.</em>
</div>
**📚 Full Documentation**: [Jac byLLM Documentation](https://www.jac-lang.org/learn/jac-byllm/with_llm/)
**🎮 Complete Examples**:
- [Fantasy Trading Game](https://www.jac-lang.org/learn/examples/mtp_examples/fantasy_trading_game/) - Interactive RPG with AI-generated characters
- [RPG Level Generator](https://www.jac-lang.org/learn/examples/mtp_examples/rpg_game/) - AI-powered game level creation
- [RAG Chatbot Tutorial](https://www.jac-lang.org/learn/examples/rag_chatbot/Overview/) - Building chatbots with document retrieval
**🔬 Research**: The research behind MTP is available on [arXiv](https://arxiv.org/abs/2405.08965) and has been accepted at OOPSLA 2025.
## Quick Links
- [Getting Started Guide](https://www.jac-lang.org/learn/jac-byllm/quickstart/)
- [Jac Language Documentation](https://www.jac-lang.org/)
- [GitHub Repository](https://github.com/jaseci-labs/jaseci)
## Contributing
We welcome contributions to byLLM! Whether you're fixing bugs, improving documentation, or adding new features, your help is appreciated.
Areas we actively seek contributions:
- 🐛 Bug fixes and improvements
- 📚 Documentation enhancements
- ✨ New examples and tutorials
- 🧪 Test cases and benchmarks
Please see our [Contributing Guide](https://www.jac-lang.org/internals/contrib/) for detailed instructions.
If you find a bug or have a feature request, please [open an issue](https://github.com/jaseci-labs/jaseci/issues/new/choose).
## Community
Join our vibrant community:
- [Discord Server](https://discord.gg/6j3QNdtcN6) - Chat with the team and community
## License
This project is licensed under the MIT License.
### Third-Party Dependencies
byLLM integrates with various LLM providers (OpenAI, Anthropic, Google, etc.) through LiteLLM.
## Cite our research
> Jayanaka L. Dantanarayana, Yiping Kang, Kugesan Sivasothynathan, Christopher Clarke, Baichuan Li, Savini Kashmira, Krisztian Flautner, Lingjia Tang, and Jason Mars. 2025. MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming. Proc. ACM Program. Lang. 9, OOPSLA2, Article 314 (October 2025), 29 pages. https://doi.org/10.1145/3763092
## Jaseci Contributors
<a href="https://github.com/jaseci-labs/jaseci/graphs/contributors">
<img src="https://contrib.rocks/image?repo=jaseci-labs/jaseci" />
</a>