
ChatGPT's Rise to World-Changing Ubiquity


ChatGPT, the artificial intelligence chatbot developed by OpenAI, is the fastest technology ever to reach one million users, accomplishing this feat within five days of its Nov. 30 release. 

It’s moved far beyond this initial mark. According to Similarweb, ChatGPT received 1.6 billion visits in March, up 56% from February.

The chatbot, or “large language model” (LLM), was created by OpenAI, a private company that recently received significant financing from Microsoft (MSFT). When you ask ChatGPT a question, it draws on the massive amount of textual data it was trained on to produce a human-like answer.

How does it work?

When presented with a question or problem, ChatGPT uses statistical models to predict the best answer based on the wording of the prompt and how it fits into its training data. During that training, ChatGPT learned which word, or sequence of words, typically follows the previous ones in a given context.

As Stephen Wolfram, founder of Wolfram Research and an expert in neural networks, explains, “ChatGPT is always trying to produce a ‘reasonable continuation’ of whatever text it’s got so far, where by ‘reasonable’, we mean ‘what one might expect someone to write after seeing what people have written on billions of webpages, etc.’”
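To make the idea concrete, here is a toy sketch in Python of next-word prediction: a hypothetical, hand-written probability table stands in for the model’s learned weights, and the program simply samples a plausible continuation of the text so far. It illustrates the principle, not OpenAI’s actual implementation.

```python
import random

# Toy stand-in for a language model: a hand-written table mapping a context
# to the probability of each candidate next word. ChatGPT learns billions of
# parameters to do this; here the "model" is hypothetical and tiny.
NEXT_WORD_PROBS = {
    "the cat sat on the": {"mat": 0.6, "sofa": 0.3, "roof": 0.1},
    "the cat sat on the mat": {"and purred": 0.5, "all day": 0.5},
}

def continue_text(prompt: str, steps: int = 2) -> str:
    """Repeatedly sample a 'reasonable continuation' of the text so far."""
    text = prompt
    for _ in range(steps):
        probs = NEXT_WORD_PROBS.get(text)
        if not probs:
            break  # this toy table has no continuation for the context
        words, weights = zip(*probs.items())
        text += " " + random.choices(words, weights=weights)[0]
    return text

print(continue_text("the cat sat on the"))
# e.g. "the cat sat on the mat and purred"
```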

The underlying techniques ChatGPT uses are called “Supervised Learning” and “Reinforcement Learning from Human Feedback” (RLHF). The latter is arguably the more important: when ChatGPT answers a question, users can rate the response (thumbs up/down) and comment on it, and those answers “may be reviewed” by OpenAI to improve its systems.
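As a rough illustration of the human-feedback half of that pipeline, the sketch below shows how thumbs-up/down ratings could be grouped into (preferred, rejected) answer pairs, the kind of comparison data a reward model is trained on during RLHF. The data structures and field names are assumptions made for the example, not OpenAI’s internal format.

```python
from dataclasses import dataclass

@dataclass
class Feedback:
    """One hypothetical user rating of a ChatGPT answer."""
    prompt: str
    answer: str
    rating: int  # +1 = thumbs up, -1 = thumbs down

def build_preference_pairs(log: list[Feedback]) -> list[tuple[str, str, str]]:
    """Pair up-voted and down-voted answers to the same prompt.

    RLHF reward models are typically trained on such (prompt, preferred,
    rejected) comparisons; this sketch only prepares the data, it does no training.
    """
    by_prompt: dict[str, list[Feedback]] = {}
    for fb in log:
        by_prompt.setdefault(fb.prompt, []).append(fb)

    pairs = []
    for prompt, items in by_prompt.items():
        liked = [f.answer for f in items if f.rating > 0]
        disliked = [f.answer for f in items if f.rating < 0]
        pairs.extend((prompt, good, bad) for good in liked for bad in disliked)
    return pairs

feedback_log = [
    Feedback("What is RLHF?", "Reinforcement learning from human feedback ...", +1),
    Feedback("What is RLHF?", "No idea.", -1),
]
print(build_preference_pairs(feedback_log))
```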


Why is it so successful?

Mainly because it’s free and easy to use. The idea of asking specific questions and getting direct answers in a matter of seconds positions ChatGPT to potentially replace web searches and the process of sifting through results in pursuit of an answer, though users have to trust that the training data is of high quality, which raises the question of who controls the feedback loops that shape ChatGPT’s answers.
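For readers curious what that question-and-answer interaction looks like outside the chat window, here is a short sketch using OpenAI’s Python client. The model name and client version (the openai v1.x package) are assumptions; consult OpenAI’s documentation for current details.

```python
# Sketch only: assumes the `openai` Python package (v1.x) is installed and an
# OPENAI_API_KEY environment variable is set; the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "In one sentence, what is a large language model?"}
    ],
)
print(response.choices[0].message.content)
```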

It also quickly became a headache for educators as students rushed to ChatGPT to complete work ranging from simple homework assignments to advanced academic writing. Software to detect AI-driven cheating remains far from perfect, an issue most recently highlighted when detection tools flagged the 1787 U.S. Constitution as AI-written.

Another reason for ChatGPT’s success is the advance of computing power and the ability to access massive amounts of it at scale, on demand, through cloud computing.

An expert quoted by JPMorgan recently indicated that “the industry’s AI computing workload has doubled every three to four months from 2012 onwards, primarily driven by the complexity of AI models (GPT-3.0 with 175 billion parameters / GPT-3.5 with 555 billion parameters) and growing training data size (GPT-3.0 trained with ~45TB data).”
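A quick back-of-the-envelope calculation shows what that doubling rate implies; the 3.5-month doubling period below is an assumption splitting the quoted “three to four months” range.

```python
# If AI compute demand doubles every ~3.5 months, how much does it grow
# over one year, and over the decade since 2012?
DOUBLING_MONTHS = 3.5  # assumed midpoint of the "three to four months" range

def growth_factor(months: float) -> float:
    return 2 ** (months / DOUBLING_MONTHS)

print(f"Over 1 year:   ~{growth_factor(12):.0f}x")   # roughly 11x
print(f"Over 10 years: ~{growth_factor(120):.2e}x")  # on the order of 10^10
```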

It also benefits from the availability of huge amounts of capital from Internet and software giants like Alphabet, Microsoft, Meta Platforms, Amazon and others.

JPMorgan estimates that “large tech companies may monetize generative AI faster” by integrating AI capabilities in their core businesses such as Microsoft Azure/Office and Google’s search engine.


According to investment bank Jefferies, the monetization potential of ChatGPT and similar programs will mostly depend on the quality of data they train on. This will affect the quality of the answers and solutions they provide, and hence the speed and breadth of their adoption.

What are its challenges?

ChatGPT sometimes fails to provide even a basic answer because of the finite amount of data it was trained on. For instance, ChatGPT’s training data runs only through 2021, so it wouldn’t know which country won the 2022 Football World Cup.

Some answers are troubling from an ethical or moral standpoint, and OpenAI has been upfront about some of its output falling short of users’ expectations.

OpenAI has stated in the past that “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers”, which the company is trying to fix by developing new iterations of its LLM.

In terms of business applications, ChatGPT and similar technologies are at the cusp of profound disruptions in several industries.

For instance, the release of ChatGPT and its quick adoption by Microsoft in its Bing search engine has forced Google to speed up the release of its own LLM-based chatbot (“Bard”) to avoid losing its edge in web search.

According to Adam Thierer, a researcher at the R Street Institute, “AI is generally thought to be in the midst of another ‘spring’ period as enthusiasm grows around specific capabilities and applications”.




