| field | value |
| ---- | ---- |
| Name | `jemma` |
| Version | `0.1.4207` |
| Summary | jemma & her ai agents that build software |
| Author | tolitius |
| Homepage | https://github.com/tolitius/jemma |
| Keywords | ai, agents, llm, code generation |
| Upload time | 2024-05-22 01:36:11 |
# <img src="docs/jemma-logo.png" width="50px"> jemma
> hey, I am Jemma. I convert your thoughts to code
<img src="docs/jemma.gif" width="850">
- [🧬 what I do](#-what-i-do)
  - [am sketchin'](#-am-sketchin)
- [🕹️ can I play?](#%EF%B8%8F-can-i-play)
  - [install me](#install-me)
  - [convert ideas to code](#convert-ideas-to-code)
- [🛠️ how I do it](#%EF%B8%8F-how-i-do-it)
  - [models](#models)
  - [problems](#problems)
  - [development](#development)
- [license](#license)
# 🧬 what I do
I take an idea in the form of:
* a few words, such as "`Bill Pay Service`", "`2048`" or "`Kanban Board`"
* OR a text file with requirements
and I create a web-based prototype 🚀
> _in fact I just created all three 👆 (so you can quickly see what I mean):_
<img width="714" alt="image" src="docs/jemma-builds.png">
after the prototype is built, I take feedback and refactor it.
## 🎨 am sketchin'
I also dabble in converting sketches to web app mockups:
```bash
$ jemma --prompt "Learning Portal" --sketch ~/tmp/sketch.png --build-prototype --claude
```
<img width="814" alt="image" src="https://github.com/tolitius/jemma/assets/136575/e7da9bb4-71ab-4e1e-ab11-c89217b921c3">
_this does require one or two hints of feedback, but I'm getting better_
# 🕹️ can I play?
of course!
## install me
```bash
$ pip install jemma
```
add a "`.env`" file, in the directory I will be called from, with API keys of your choice:
```bash
export ANTHROPIC_API_KEY=sk-ant-api03...
export OPENAI_API_KEY=sk-puk...
export REPLICATE_API_TOKEN=r8_ai...
```
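only the keys for the providers you plan to use need to be set. As an illustration (not jemma's actual internals), a key lookup like this could tell which providers are usable; the helper name and mapping are hypothetical:

```python
import os

# hypothetical helper: map provider names to the env vars from the .env file
# (illustrative only -- jemma's real key handling may differ)
PROVIDER_KEYS = {
    "claude": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
    "replicate": "REPLICATE_API_TOKEN",
}

def available_providers(env=os.environ):
    """Return the providers whose API keys are present in the environment."""
    return [name for name, var in PROVIDER_KEYS.items() if env.get(var)]

# example with a fake environment
print(available_providers({"ANTHROPIC_API_KEY": "sk-ant-api03..."}))  # ['claude']
```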
ready to rock! :metal:
## convert ideas to code
```bash
$ jemma --prompt "Bill Pay Service" --build-prototype --claude
```
I will assemble a team who will build a prototype, open a browser with it, and wait for your feedback:
```bash
Claude 🧠 claude-3-haiku-20240307 ✅

> Project Manager:
Dear Business Owner, in this meeting we'll work on creating requirements based on the 💡 idea

> Business Owner: 📚 creating detailed requirements ...🖋️

> Project Manager:
Dear Engineer, in this meeting let's put our heads together to build a prototype based on the requirements.

> Engineer: 💫 creating a prototype based on the requirements...
> Engineer: crafting css 🎨 (a.k.a. "visual beauty")
> Engineer: cooking javascript 🎮 (a.k.a. "master of interactions")
> Engineer: creating html 🕸️ (a.k.a. "the skeleton of the web")
prototype files created successfully:
- prototype/index.html
- prototype/app.js
- prototype/app.css
opened prototype in the web browser
tell me how to make it better >
```
# 🛠️ how I do it
I rely on my team of project managers, business owners and engineers<br>
yes... "AI Agents"
When I get an idea, a Project Manager meets with a Business Owner to take it in and create a comprehensive set of requirements<br/>
then the Project Manager meets with an Engineer to build the idea based on these new requirements.
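the two-meeting handoff above can be sketched as a simple pipeline. The agent names come from the README; the functions and their return values are illustrative assumptions, not jemma's actual code:

```python
# illustrative sketch of the idea -> requirements -> prototype handoff

def business_owner(idea: str) -> str:
    """Meeting 1: turn a raw idea into a set of requirements."""
    return f"requirements for: {idea}"

def engineer(requirements: str) -> dict:
    """Meeting 2: build prototype files (html/css/js) from the requirements."""
    return {
        "prototype/index.html": f"<!-- built from {requirements} -->",
        "prototype/app.css": "/* visual beauty */",
        "prototype/app.js": "// master of interactions",
    }

def project_manager(idea: str) -> dict:
    """Run both meetings and hand back the prototype files."""
    requirements = business_owner(idea)
    return engineer(requirements)

files = project_manager("Bill Pay Service")
print(sorted(files))
```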
## models
I work best with Claude models, which is why my examples all end in "`--claude`":
```bash
$ jemma --prompt "Trivia Game" --build-prototype --claude
```
by default though I will call Ollama (llama3 model):
```bash
$ jemma --prompt "Trivia Game" --build-prototype
Ollama 🧠 llama3:8b-instruct-fp16 ✅
```
here are the default models I would use:
| model param | default model|
| ---- | ---- |
| `--claude` | `claude-3-haiku-20240307` |
| `--openai` | `gpt-3.5-turbo`|
| `--ollama` | `llama3:8b-instruct-fp16`|
| `--replicate` | `meta/meta-llama-3-70b-instruct`|
| `--copilot` | `gpt-3.5-turbo`|
but you can override all of these with your (local, or not) models:
```bash
$ jemma --prompt "Trivia Game" --build-prototype --claude claude-3-opus-20240229
$ jemma --prompt "Trivia Game" --build-prototype --ollama dolphin-mistral:7b-v2.6-dpo-laser-fp16
$ jemma --prompt "Trivia Game" --build-prototype --openai gpt-4-turbo-preview
$ jemma --prompt "Trivia Game" --build-prototype --replicate meta/llama-2-70b-chat
$ jemma --prompt "Trivia Game" --build-prototype --copilot gpt-4
$ ...
```
> _but, at least for now, the best results are produced with **Claude** based models_
## problems
I am still learning, so some prototypes that I build might result in errors<br/>
this is especially likely with non-Claude-based models
but, we are all learning, _and_ I love feedback:
```bash
tell me how to make it better > I see an error "app.js:138: uncaught TypeError: chordProgressionData.find(...) is undefined"
> Project Manager:
Dear Engineer, we have met with the user and received a valuable feedback. sudo make it better! 🛠️

> Engineer: 💫 refactoring prototype based on the feedback...
> Engineer: ♻️ crafting css 🎨 (a.k.a. "visual beauty")
> Engineer: ♻️ cooking javascript 🎮 (a.k.a. "master of interactions")
> Engineer: ♻️ creating html 🕸️ (a.k.a. "the skeleton of the web")
prototype files created successfully:
- prototype/index.html
- prototype/app.js
- prototype/app.css
opened prototype in the web browser
tell me how to make it better >
```
_you can check for errors in your browser console_
>_if you know "how to HTML", you can help fix the code as well<br/>_
>_it is often something simple: adding a CSS class, updating the "width", etc._
## development
to run from source,<br/>
clone jemma:
```bash
$ git clone git@github.com:tolitius/jemma.git
```
and
```bash
$ cd jemma
$ python huddle.py --prompt "Code Editor" --build-prototype --claude
Claude 🧠 claude-3-haiku-20240307 ✅
...
```
# license
Copyright © 2024 tolitius
Distributed under the Eclipse Public License either version 1.0 or (at
your option) any later version.