Arvixgpt

Name: Arvixgpt
Version: 0.0.0.3
Summary: Search for updated articles on arXiv.org
Upload time: 2023-07-21 06:40:46
Author: Ali Nemati and AI Team
Requires Python: >=3.7
Keywords: python, pandas, numpy, request, PyPDF2
            
# Arvixgpt
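The package is published on PyPI and can be installed with pip:

```bash
pip install Arvixgpt
```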



## Step 1:



Run the Python script Arvixgpt.py:



```bash

python Arvixgpt.py

```



Then respond to the prompt "Please select one or more prefix codes". The prefix determines which field is searched: title, author, abstract, comment, journal reference, and so on.



## Step 2:



```text

Please select one or more prefix codes:

Explanation: prefix

Title: ti

Author: au

Abstract: abs

Comment: co

Journal Reference: jr

Subject Category: cat

Report Number: rn

Id (use id_list instead): id

All of the above: all



Please enter one or more prefix codes (separated by a comma if more than one): ti,au



```
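The prefix codes above are the field prefixes of the arXiv search API. The example output in Step 3 shows `arxiv.Result.Author` objects, so the tool evidently builds on the `arxiv` Python package. A minimal sketch of an equivalent query follows; the query string is a hypothetical example, and the exact internals of Arvixgpt.py are not published:

```python
import arxiv

# Hypothetical query combining the 'ti' (title) and 'au' (author) prefixes,
# mirroring the "ti,au" selection shown above.
search = arxiv.Search(
    query='ti:"large language models" AND au:Naveed',
    max_results=5,
    sort_by=arxiv.SortCriterion.SubmittedDate,  # newest submissions first
)

for result in search.results():
    print(result.title)
```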



## Step 3:



Below is an example of the output, showing the Title, Summary, PDF URL, and Authors fields:

```text

Title:	A Comprehensive Overview of Large Language Models

Summary:

Large Language Models (LLMs) have shown excellent generalization capabilities

that have led to the development of numerous models. These models propose

various new architectures, tweaking existing architectures with refined

training strategies, increasing context length, using high-quality training

data, and increasing training time to outperform baselines. Analyzing new

developments is crucial for identifying changes that enhance training stability

and improve generalization in LLMs. This survey paper comprehensively analyses

the LLMs architectures and their categorization, training strategies, training

datasets, and performance evaluations and discusses future research directions.

Moreover, the paper also discusses the basic building blocks and concepts

behind LLMs, followed by a complete overview of LLMs, including their important

features and functions. Finally, the paper summarizes significant findings from

LLM research and consolidates essential architectural and training strategies

for developing advanced LLMs. Given the continuous advancements in LLMs, we

intend to regularly update this paper by incorporating new sections and

featuring the latest LLM models.



PDF URL:	http://arxiv.org/pdf/2307.06435v1

Authors:	[arxiv.Result.Author('Humza Naveed'), arxiv.Result.Author('Asad Ullah Khan'), arxiv.Result.Author('Shi Qiu'), arxiv.Result.Author('Muhammad Saqib'), arxiv.Result.Author('Saeed Anwar'), arxiv.Result.Author('Muhammad Usman'), arxiv.Result.Author('Nick Barnes'), arxiv.Result.Author('Ajmal Mian')]

```
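
Each field in the output above maps onto an attribute of an `arxiv.Result` object. A sketch of how such a record could be printed (not necessarily how Arvixgpt.py formats it):

```python
import arxiv

search = arxiv.Search(query='ti:"large language models"', max_results=1)

for result in search.results():
    # These attributes are part of the documented arxiv.Result API.
    print(f"Title:\t{result.title}")
    print(f"Summary:\n{result.summary}")
    print(f"PDF URL:\t{result.pdf_url}")
    print(f"Authors:\t{result.authors}")  # list of arxiv.Result.Author objects
```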


            
