In June 2020, OpenAI released GPT-3 (Generative Pre-trained Transformer 3), the third generation of its text-generating artificial intelligence (AI) model, as a beta API, attracting great attention from the community. GPT-3 is the most powerful language model released to date, but it has also raised some concerns.
OpenAI was founded as a non-profit artificial intelligence research company, backed by big names such as Peter Thiel, Elon Musk, Reid Hoffman, Marc Benioff, and Sam Altman. OpenAI's vision is to develop AGI (Artificial General Intelligence): a system capable of understanding or learning any intellectual task that humans can.
GPT-3 is essentially a machine learning system trained on 45 TB of text data. The model has 175 billion parameters (the values that a neural network optimizes during training). For comparison, the entire English-language Wikipedia accounts for only about 0.6% of the data used to train GPT-3. GPT-3 is the most powerful language model yet, far surpassing GPT-2, whose full version was released in late 2019 with only 1.5 billion parameters. Once trained, the system can generate all kinds of text that read as if written by a human, such as a story, an article, programming code, a poem, or a piece of music, from an input of just a few words or sentences. Its output can be hard to distinguish from human writing.
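For illustration only, a minimal sketch of this few-shot behavior against the OpenAI API beta could look like the following Python snippet (the API key placeholder, the "davinci" engine name, the prompt, and the sampling parameters are assumptions for illustration, not an official recipe):

    import openai  # OpenAI API beta client library

    openai.api_key = "YOUR_API_KEY"  # placeholder; a real key comes from the API beta

    # A "few-shot" prompt: a handful of example pairs followed by a new input
    # that the model is expected to complete in the same pattern.
    prompt = (
        "English: Good morning\nFrench: Bonjour\n"
        "English: Thank you\nFrench: Merci\n"
        "English: See you tomorrow\nFrench:"
    )

    response = openai.Completion.create(
        engine="davinci",   # assumed engine name from the 2020 beta
        prompt=prompt,
        max_tokens=20,      # keep the completion short
        temperature=0.3,    # low randomness for a more predictable answer
        stop="\n",          # stop at the end of the line
    )

    print(response.choices[0].text.strip())

Here the model has never been explicitly programmed to translate; it simply continues the pattern it sees in the prompt, which is the few-shot behavior described above.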
In particular, GPT-3 allows documents to be searched by the natural-language meaning of a query rather than by keyword matching alone. GPT-3 can improve chatbots by enabling complex, fast, and consistent natural-language conversations across a range of topics, from space travel to history. It can also improve customer service by quickly searching, chatting, responding, and supplying customers with relevant information through semantic analysis and search. Moreover, GPT-3 can power productivity tools that parse text into spreadsheets, summarize email discussions, and expand content from bullet points, as sketched in the example below.
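As a hedged sketch of the productivity use case above, the same completion endpoint could be prompted to summarize an email thread; the instruction wording, the sample thread, and the parameters below are assumptions for illustration:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # A short, made-up email exchange to be condensed into one sentence.
    email_thread = (
        "Alice: Can we move the launch review to Thursday?\n"
        "Bob: Thursday works, but marketing needs the slides by Wednesday noon.\n"
        "Alice: Fine, I will send the draft deck on Tuesday.\n"
    )

    # Instruction-style prompt asking the model to compress the thread.
    prompt = email_thread + "\nSummarize the discussion above in one sentence:"

    response = openai.Completion.create(
        engine="davinci",   # assumed engine name
        prompt=prompt,
        max_tokens=40,
        temperature=0.2,
    )

    print(response.choices[0].text.strip())

The same pattern, with a different instruction in the prompt, would cover the other tools mentioned above, such as expanding bullet points into full prose.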
With such a powerful language model, GPT-3 is one step closer to AGI, although it is far from perfect. The system seems closer to passing the Turing test than any other system so far. Previous AI systems worked well in narrow fields such as games, but in many other areas they fell well short. GPT-3 shows impressive ability in many areas: it can learn and perform tasks from only a few examples without being pre-programmed for them, it can play chess and Go (although not as well as specialized programs), it can write code for computer programs, and it may even help design machine learning models. Given the rapid progress from GPT-2 to GPT-3, who knows what GPT-4 will be able to do? Human-level AGI may still be decades away, but the timeline seems to be shortening.
However, GPT-3 raises ethical concerns about its future deployment. Is GPT-3 really intelligent, context-aware, or capable of human-level understanding? The question is not easy to answer, because human language is used in an enormous number of contexts and settings. There is also the concern of how to prevent GPT-3 from being used for malicious purposes, such as creating fake news that reads like the real thing or helping children cheat on their homework or exams. Moreover, GPT-3 requires intensive computing power, and improving its performance with each iteration might be costly for future commercial deployments.
As technology evolves, we might remember Bitcoin and blockchain technology, when people found that Bitcoin mining could consume as much electrical energy as a small nation. GPT-3 and machine learning may be impressive AI technology, but sooner or later we still need to work out its impact on society before any large-scale deployment.
In support of positive AI development and ethics for society, the Michael Dukakis Institute for Leadership and Innovation (MDI) and the Boston Global Forum (BGF) established the Artificial Intelligence World Society (AIWS.net) in 2018. AI can be an important tool to serve and strengthen democracy, human rights, and the rule of law; its misuse could undermine those ideals. In this effort, MDI invites participation and collaboration from think tanks, universities, non-profits, firms, and other entities that share its commitment to the constructive development of AI.