Revolutionising artificial intelligence

Pushing the boundaries of language processing

OpenAI is a research laboratory consisting of the for-profit technology company OpenAI LP and its parent company, the non-profit OpenAI Inc. It was founded in December 2015 with the goal of promoting and developing artificial intelligence that benefits humanity as a whole.

OpenAI’s most notable development has been its large language model, GPT-3.

The history of artificial intelligence can be traced back to the 1950s, when John McCarthy coined the term at the Dartmouth Conference. The first AI system was built in the same decade by Allen Newell and Herbert A Simon, who developed the Logic Theorist, which could prove mathematical theorems. This marked the beginning of AI research and development.

In the 1970s, AI research was primarily focused on rule-based systems, where decisions were made according to pre-defined rules. These systems were limited in their ability to learn and improve. The 1980s saw a resurgence in AI research centred on expert systems: rule-based systems with a knowledge base that could be consulted to make decisions. However, expert systems were still limited by their pre-defined knowledge base and their inability to learn.

In the 1990s, AI research took a new direction with the development of machine learning algorithms, which enabled AI systems to learn from data and improve their performance over time. With the advent of deep learning algorithms in the 2010s, AI systems became more sophisticated, performing tasks that had previously been impossible, such as image and speech recognition.

OpenAI’s GPT-3 is a state-of-the-art language model that is capable of natural language processing and understanding. GPT-3 is a continuation of the GPT series, which started with GPT-1 and GPT-2. The GPT series of models is based on the transformer architecture introduced by Vaswani et al in 2017. The transformer architecture is designed to handle sequences of data, such as sequences of words in a language model.
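To make the idea concrete, here is a minimal sketch, in Python with NumPy, of the scaled dot-product attention operation at the heart of the transformer. The function name, toy dimensions and random data are illustrative only; this is not OpenAI's implementation, and a real model stacks many such layers with learned projections.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Each position attends to every position in the sequence,
        # weighted by the similarity between its query and the keys.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax
        return weights @ V                              # weighted mix of values

    # Toy "sequence" of 4 tokens, each represented by an 8-dimensional vector
    np.random.seed(0)
    tokens = np.random.randn(4, 8)
    contextualised = scaled_dot_product_attention(tokens, tokens, tokens)
    print(contextualised.shape)  # (4, 8): one context-aware vector per token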

GPT-3 sets itself apart from previous generations of AI systems in several ways. First, it is a massive model, with over 175 billion parameters, giving it the capacity to learn from a vast amount of data and to perform a wide range of tasks. Second, GPT-3 is trained using unsupervised learning, meaning it is not explicitly told what to do. Instead, it is trained on a large corpus of text data and learns to generate text similar to the data it was trained on. This enables GPT-3 to produce coherent text even though it has not been explicitly programmed to do so.
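The toy sketch below illustrates the spirit of unsupervised training on raw text: the only signal is the text itself, and the model learns which words tend to follow which, then samples from those statistics to generate new text. It is, of course, a drastic simplification of GPT-3's training; the corpus and names are purely illustrative.

    import random
    from collections import Counter, defaultdict

    # The only "supervision" is the raw text: for each word, count how
    # often every other word follows it.
    corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    next_word = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word[current][following] += 1

    def generate(start, length=8):
        word, output = start, [start]
        for _ in range(length):
            choices = next_word[word]
            if not choices:
                break
            # Sample the next word in proportion to how often it followed
            word = random.choices(list(choices), weights=list(choices.values()))[0]
            output.append(word)
        return " ".join(output)

    print(generate("the"))  # e.g. "the cat sat on the rug . the dog"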

Another important feature of GPT-3 is its capacity for transfer learning: it can apply knowledge gained on one task to another. For example, having been trained on a large corpus of text data, GPT-3 can use that knowledge to generate text, answer questions and translate text from one language to another.
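As an illustration of this prompt-driven flexibility, the sketch below shows the same GPT-3-family model being steered towards two different tasks purely by changing the prompt. It assumes the pre-1.0 openai Python package and a valid API key; the model name and parameters are examples, not a prescription.

    import openai

    openai.api_key = "YOUR_API_KEY"  # assumption: a valid OpenAI API key

    def complete(prompt):
        # Legacy completions endpoint of the pre-1.0 `openai` package.
        response = openai.Completion.create(
            model="text-davinci-003",  # an illustrative GPT-3-family model
            prompt=prompt,
            max_tokens=60,
        )
        return response.choices[0].text.strip()

    # The same model handles different tasks, switched only by the prompt:
    print(complete("Q: Who coined the term 'artificial intelligence'?\nA:"))
    print(complete("Translate to French: The weather is lovely today."))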


The writer is a PhD scholar in management sciences and administration. 
