Generative pre-trained transformer
Generative pre-trained transformers (GPT) are a type of large language model (LLM)[1][2][3] and a prominent framework for generative artificial intelligence.[4][5] The concept and first such model were introduced in 2018 by the American artificial intelligence organization OpenAI.[6] GPT models are artificial neural networks that are based on the transformer architecture, pre-trained on large data sets of unlabelled text, and able to generate novel human-like content.[2][3] As of 2023, most LLMs have these characteristics[7] and are sometimes referred to broadly as GPTs.[8]

OpenAI has released highly influential GPT foundation models that have been sequentially numbered to comprise its "GPT-n" series.[9] Each has been significantly more capable than the previous one, owing to increased size (number of trainable parameters) and training. The most recent of these, GPT-4, was released in March 2023. Such models have been the basis for more task-specific GPT systems, including models fine-tuned for instruction following, which in turn power the ChatGPT chatbot service.[1]
The term "GPT" is also often used in the names and/or descriptions of such models developed by others. For example, other GPT foundation models include a series of GPT-3 inspired models created by EleutherAI,[10] and recently a series of seven models created by Cerebras.[11] Also, companies in different industries have developed task-specific GPTs in their respective fields, such as Salesforce's "EinsteinGPT" (for CRM)[12] and Bloomberg's "BloombergGPT" (for finance).[13]
History
Generative pre-training (GP) was a long-established concept in machine learning applications,[14][15] but the transformer architecture was not available until 2017, when it was invented by Google.[16] That development led to the emergence of large language models such as BERT in 2018[17] and XLNet in 2019,[18] which were pre-trained transformers (PT) but not designed to be generative (they were "encoder-only").[19] In 2018, OpenAI published its article "Improving Language Understanding by Generative Pre-Training," in which it introduced the first generative pre-trained transformer (GPT) system.[20]
Prior to transformer-based architectures, the best-performing neural NLP (natural language processing) models commonly employed supervised learning from large amounts of manually labeled data. This reliance on supervised learning limited their use on datasets that were not well-annotated, and also made it prohibitively expensive and time-consuming to train extremely large language models.[20]
The semi-supervised approach OpenAI employed to make a large-scale generative system, which it was the first to build with a transformer model, involved two stages: an unsupervised generative "pre-training" stage that sets initial parameters using a language modeling objective, and a supervised discriminative "fine-tuning" stage that adapts these parameters to a target task.[20]
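A minimal sketch of this two-stage recipe is shown below in generic PyTorch-style code; it is not OpenAI's implementation, and the model, classifier head, optimizer, and the return_hidden flag are illustrative assumptions.

```python
# Minimal sketch (not OpenAI's implementation) of the two-stage approach described above:
# unsupervised language-model "pre-training" followed by supervised "fine-tuning".
import torch
import torch.nn.functional as F

def pretrain_step(model, optimizer, token_ids):
    """Language-modeling objective: predict each token from the tokens before it."""
    inputs, targets = token_ids[:, :-1], token_ids[:, 1:]
    logits = model(inputs)  # (batch, seq_len, vocab_size)
    loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(model, classifier_head, optimizer, token_ids, labels):
    """Discriminative fine-tuning: reuse the pre-trained parameters for a labeled task."""
    hidden = model(token_ids, return_hidden=True)  # hypothetical flag: final hidden states
    logits = classifier_head(hidden[:, -1, :])     # classify from the last position's state
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The pre-training stage consumes the bulk of the compute on unlabeled text; fine-tuning then adapts the same parameters using a comparatively small labeled dataset for the target task.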
Foundation models
A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks.[21]
Thus far, the most notable GPT foundation models have been from OpenAI's GPT-n series. The most recent of these is GPT-4, for which OpenAI declined to publish the size or training details (citing "the competitive landscape and the safety implications of large-scale models").[22]
Model | Architecture | Parameter count | Training data | Release date | Training cost |
---|---|---|---|---|---|
GPT-1 | 12-level, 12-headed Transformer decoder (no encoder), followed by a linear-softmax output layer (see the sketch after this table). | 117 million | BookCorpus:[23] 4.5 GB of text, from 7000 unpublished books of various genres. | June 11, 2018[6] | "1 month on 8 GPUs",[6] or 1.7e19 FLOP.[24] |
GPT-2 | GPT-1, but with modified normalization | 1.5 billion | WebText: 40 GB of text, 8 million documents, from 45 million webpages upvoted on Reddit. | February 14, 2019 (initial/limited version) and November 5, 2019 (full version)[25] | "tens of petaflop/s-day",[26] or 1.5e21 FLOP.[24] |
GPT-3 | GPT-2, but with modification to allow larger scaling | 175 billion | 570 GB plaintext, 0.4 trillion tokens. Mostly CommonCrawl, WebText, English Wikipedia, and two books corpora (Books1 and Books2). | May 28, 2020[26] | 3630 petaflop/s-day (Figure 2.2 [26]), or 3.1e23 FLOP.[24] |
GPT-3.5 | Undisclosed | 175 billion | Undisclosed | March 15, 2022 | Undisclosed |
GPT-4 | Also trained with both text prediction and RLHF; accepts both text and images as input. Further details are not public.[22] | ~1 trillion | Undisclosed | March 14, 2023 | Undisclosed. Estimated 2.1e25 FLOP.[24] |
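As a rough illustration of the GPT-1 row above, a decoder-only transformer of that shape can be sketched with standard PyTorch modules. This is not OpenAI's code; the 768-dimensional hidden size and 512-token context follow the GPT-1 paper, while the vocabulary size is an approximate assumption.

```python
# Minimal sketch of a GPT-1-style decoder-only transformer (12 layers, 12 attention
# heads, linear-softmax output) built from standard PyTorch modules.
import torch
import torch.nn as nn

class MiniGPT(nn.Module):
    def __init__(self, vocab_size=40000, d_model=768, n_heads=12, n_layers=12, max_len=512):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        # "Decoder-only" means masked self-attention with no encoder cross-attention,
        # so an encoder layer plus a causal mask expresses it in standard PyTorch.
        block = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(block, num_layers=n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)  # the "linear-softmax" output layer

    def forward(self, token_ids):
        seq_len = token_ids.size(1)
        positions = torch.arange(seq_len, device=token_ids.device)
        x = self.token_emb(token_ids) + self.pos_emb(positions)
        causal = nn.Transformer.generate_square_subsequent_mask(seq_len).to(token_ids.device)
        x = self.blocks(x, mask=causal)  # each position attends only to earlier positions
        return self.lm_head(x)           # softmax is applied by the training loss
```

With these dimensions the parameter count (including embeddings) lands roughly in the range of the 117 million reported in the table.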
Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has recently been made available to developers via an API,[27][28] and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs).[29]
Foundation GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multi-modal LLM that is capable of processing text and image input (though its output is limited to text).[30] Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion[31] and parallel decoding.[32] Such models can serve as visual foundation models (VFMs) for developing downstream systems that can work with images.[33]
Task-specific models
A foundation GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering.[34]
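In its simplest few-shot form, prompt engineering adapts a model without changing any weights: the task is specified entirely in the text the model is given. A minimal sketch follows; the generate() call and model handle are hypothetical placeholders rather than any particular vendor's API.

```python
# Illustrative sketch of few-shot prompt engineering: the foundation model is steered
# toward a task purely through the prompt text, with no weight updates.
def build_classification_prompt(examples, new_review):
    """Build a few-shot prompt: labeled examples followed by the new input to label."""
    lines = ["Classify each movie review as Positive or Negative.", ""]
    for review, label in examples:
        lines.append(f"Review: {review}\nSentiment: {label}\n")
    lines.append(f"Review: {new_review}\nSentiment:")
    return "\n".join(lines)

examples = [
    ("A joyful, beautifully shot film.", "Positive"),
    ("Two hours I will never get back.", "Negative"),
]
prompt = build_classification_prompt(examples, "Surprisingly heartfelt and funny.")
# completion = generate(model, prompt)  # hypothetical call to a GPT-style model
```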
An important example of this is fine-tuning models to follow instructions. In January 2022, OpenAI introduced "InstructGPT", a series of models which were fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models.[35][36] Advantages this had over the bare foundation models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for its API service offerings.[37] Other instruction-tuned models have been released by others, including a fully open version.[38][39]
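At the heart of the RLHF stage is a reward model trained on human preference comparisons; the instruction-following policy is then optimized against that learned reward (the InstructGPT paper used PPO for this step). A minimal sketch of the pairwise reward-model objective follows, with reward_model as a hypothetical scoring function rather than OpenAI's actual code.

```python
# Minimal sketch of the pairwise reward-model objective used in RLHF pipelines such as
# InstructGPT's: responses preferred by human labelers should receive higher scores.
# reward_model is a hypothetical scoring function returning a scalar tensor per input.
import torch.nn.functional as F

def reward_ranking_loss(reward_model, prompt, chosen_response, rejected_response):
    """Pairwise loss: -log sigmoid(r(chosen) - r(rejected))."""
    r_chosen = reward_model(prompt, chosen_response)
    r_rejected = reward_model(prompt, rejected_response)
    return -F.logsigmoid(r_chosen - r_rejected).mean()
```

The policy model is then fine-tuned to maximize this learned reward while remaining close to its behavior before the RLHF step.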
Another related kind of task-specific model is the chatbot, which engages in human-like conversation. In November 2022, OpenAI launched ChatGPT, an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT.[40] OpenAI trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset to produce a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft),[41] and Google's competing chatbot Bard (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM).[42]
Yet another kind of task that a GPT can be used for is the meta-task of generating its own instructions, such as developing a series of prompts for itself in order to carry out a more general goal given by a human user.[43] This is known as an AI agent, and more specifically a recursive one, because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well.[44]
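The control flow of such a recursive agent can be sketched in a few lines. This is a generic illustration of the idea rather than Auto-GPT's actual implementation; llm_complete and execute are hypothetical placeholders.

```python
# Illustrative sketch of a recursive agent loop: the model generates its own next
# instruction from the overall goal and the results of its previous instructions.
def run_agent(goal, llm_complete, execute, max_steps=10):
    history = []
    for _ in range(max_steps):
        prompt = (f"Goal: {goal}\n"
                  f"Previous steps and results: {history}\n"
                  "Propose the single next instruction, or reply DONE if the goal is met.")
        instruction = llm_complete(prompt)       # hypothetical call to a GPT-style model
        if instruction.strip() == "DONE":
            break
        result = execute(instruction)            # carry out the instruction (search, code, etc.)
        history.append((instruction, result))    # feed results back into the next prompt
    return history
```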
Multimodality
Generative transformer-based systems can also be targeted to tasks involving modalities beyond text.
For example, Microsoft’s “Visual ChatGPT” combines ChatGPT with visual foundation models (VFMs) to enable input or output comprising images as well as text.[45] Also, advances in text-to-speech technology offer powerful tools for audio content creation when used in conjunction with foundational GPT language models.[46]
Domain-specificity
GPT systems can be directed toward particular fields or domains. Some reported examples of such models and apps are as follows:
- EinsteinGPT - for sales and marketing domains, to aid with customer relationship management (uses GPT-3.5)[47]
- BloombergGPT - for the financial domain, to aid with financial news and information (uses "freely available" AI methods, combined with their proprietary data)[48]
- Khanmigo - described as a GPT version for tutoring in the education domain, it aids students using Khan Academy by guiding them through their studies without directly providing answers (powered by GPT-4)[49][50]
- SlackGPT - for the Slack instant-messaging service, to aid with navigating and summarizing discussions on it (uses OpenAI's API)[51]
- BioGPT - for the biomedical domain, to aid with biomedical literature text generation and mining (uses GPT-2)[52]
Sometimes domain-specificity is accomplished via software plug-ins or add-ons. For example, several different companies have developed particular plugins that interact directly with OpenAI's ChatGPT interface,[53][54] and Google Workspace offers add-ons such as "GPT for Sheets and Docs", which is reported to aid use of spreadsheet functionality in Google Sheets.[55][56]
Selected bibliography
This section lists the main official publications from OpenAI and Microsoft on their GPT models.
GPT-1: report,[6] GitHub release.[57]
GPT-2: blog announcement,[58] report on its decision of "staged release",[59] GitHub release.[60]
GPT-3: report.[26] No code has been released on GitHub or elsewhere for this or any subsequent model.
InstructGPT: blog announcement,[35] report.[36]
ChatGPT: blog announcement (no report).[40]
GPT-4: blog announcement,[61] reports,[62][63] model card.[64]
References
- Haddad, Mohammed. "How does GPT-4 work and how can you start using it in ChatGPT?". www.aljazeera.com.
- "Generative AI: a game-changer society needs to be ready for". World Economic Forum.
- "The A to Z of Artificial Intelligence". Time. April 13, 2023.
- Hu, Luhui (November 15, 2022). "Generative AI and Future". Medium.
- "CSDL | IEEE Computer Society". www.computer.org.
- "Improving language understanding with unsupervised learning". openai.com. Archived from the original on 2023-03-18. Retrieved 2023-03-18.
- Toews, Rob. "The Next Generation Of Large Language Models". Forbes.
- Mckendrick, Joe (March 13, 2023). "Most Jobs Soon To Be 'Influenced' By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests". Forbes.
- https://www.makeuseof.com/gpt-models-explained-and-compared/
- "EleutherAI Open-Sources Six Billion Parameter GPT-3 Clone GPT-J".
- "News" (Press release).
- Morrison, Ryan (7 March 2023). "Salesforce launches EinsteinGPT built with OpenAI technology". Tech Monitor.
- "The ChatGPT of Finance is Here, Bloomberg is Combining AI and Fintech". Forbes.
- http://cs224d.stanford.edu/papers/maas_paper.pdf
- https://www.cambridge.org/core/journals/apsipa-transactions-on-signal-and-information-processing/article/tutorial-survey-of-architectures-algorithms-and-applications-for-deep-learning/023B6ADF962FA37F8EC684B209E3DFAE
- Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Lukasz; Polosukhin, Illia (December 5, 2017). "Attention Is All You Need". arXiv:1706.03762 – via arXiv.org.
- Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (May 24, 2019). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805v2 – via arXiv.org.
- https://proceedings.neurips.cc/paper_files/paper/2019/file/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf
- Naik, Amit Raja (September 23, 2021). "Google Introduces New Architecture To Reduce Cost Of Transformers". Analytics India Magazine.
- Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on 26 January 2021. Retrieved 23 January 2021.
- "Introducing the Center for Research on Foundation Models (CRFM)". Stanford HAI.
- OpenAI (2023). "GPT-4 Technical Report" (PDF). Archived (PDF) from the original on 2023-03-14. Retrieved 2023-03-16.
- Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. IEEE International Conference on Computer Vision (ICCV) 2015. pp. 19–27. arXiv:1506.06724. Archived from the original on 2023-02-05. Retrieved 2023-02-07.
- "ML input trends visualization". Epoch. Retrieved 2023-05-02.
- Vincent, James (November 7, 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge.
- Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". arXiv:2005.14165v4 – via arXiv.org.
- Vincent, James (March 14, 2023). "Google opens up its AI language model PaLM to challenge OpenAI and GPT-3". The Verge.
- "Google Opens Access to PaLM Language Model".
- Iyer, Aparna (November 30, 2022). "Meet GPT-JT, the Closest Open Source Alternative to GPT-3". Analytics India Magazine.
- https://www.marktechpost.com/2023/03/27/multimodal-language-models-the-future-of-artificial-intelligence-ai/
- https://www.marktechpost.com/2022/11/14/how-do-dall%C2%B7e-2-stable-diffusion-and-midjourney-work/
- https://analyticsindiamag.com/google-launches-muse-a-new-text-to-image-transformer-model/
- https://arxiv.org/pdf/2303.04671.pdf
- https://arxiv.org/pdf/2108.07258.pdf
- "Aligning language models to follow instructions". openai.com. Archived from the original on 23 March 2023. Retrieved 23 March 2023.
- Ouyang, Long; Wu, Jeff; Jiang, Xu; et al. (4 March 2022). "Training language models to follow instructions with human feedback". arXiv:2203.02155.
- Ramnani, Meeta (January 28, 2022). "OpenAI dumps its own GPT-3 for something called InstructGPT, and for right reason". Analytics India Magazine.
- https://crfm.stanford.edu/2023/03/13/alpaca.html
- https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm
- "Introducing ChatGPT". openai.com. Archived from the original on 2023-03-16. Retrieved 2023-03-16.
- https://techcrunch.com/2023/05/04/microsoft-doubles-down-on-ai-with-new-bing-features/
- "ChatGPT vs. Bing vs. Google Bard: Which AI Is the Most Helpful?". CNET.
- https://mashable.com/article/autogpt-ai-agents-how-to-get-access
- https://www.forbes.com/sites/bernardmarr/2023/04/24/auto-gpt-may-be-the-strong-ai-tool-that-surpasses-chatgpt/?sh=2dc1e8a77640
- https://www.infoq.com/news/2023/04/microsoft-visual-chatgpt/
- https://arstechnica.com/information-technology/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio/
- https://techmonitor.ai/technology/ai-and-automation/salesforce-einsteingpt-openai-chatgpt
- https://www.cnbc.com/2023/04/13/bloomberg-plans-to-integrate-gpt-style-ai-into-its-terminal.html
- https://www.fastcompany.com/90891522/the-learning-nonprofit-khan-academy-piloting-a-version-of-gpt-called-khanmigo
- https://thejournal.com/articles/2023/03/14/khan-academy-pilots-gpt-4-powered-tool-khanmigo-for-teachers.aspx
- https://www.pcworld.com/article/1807402/slack-gpt-will-bring-ai-chatbots-to-your-conversations.html
- https://arxiv.org/pdf/2210.10341.pdf
- https://wire19.com/chatgpt-plugins/
- https://openai.com/blog/chatgpt-plugins
- https://www.makeuseof.com/how-use-chatgpt-google-sheets/
- https://www.infoworld.com/article/3689175/embrace-and-extend-excel-for-ai-data-prep.html
- finetune-transformer-lm, OpenAI, 2023-05-01, retrieved 2023-05-01
- "GPT-2: 1.5B release". openai.com. Retrieved 2023-05-01.
- Solaiman, Irene; Brundage, Miles; Clark, Jack; Askell, Amanda; Herbert-Voss, Ariel; Wu, Jeff; Radford, Alec; Krueger, Gretchen; Kim, Jong Wook; Kreps, Sarah; McCain, Miles; Newhouse, Alex; Blazakis, Jason; McGuffie, Kris; Wang, Jasmine (2019-11-12). "Release Strategies and the Social Impacts of Language Models". arXiv:1908.09203 [cs].
- gpt-2, OpenAI, 2023-05-01, retrieved 2023-05-01
- "GPT-4". openai.com. Retrieved 2023-05-01.
- OpenAI (2023-03-27). "GPT-4 Technical Report". arXiv:2303.08774 [cs].
- Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (2023-04-13). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs].
- GPT-4 System Card, OpenAI, March 23, 2023