ChatGPT took all the headlines when it launched last November - to the point that one could argue whether more articles have been written about ChatGPT than ChatGPT has produced itself. Still, we all knew ChatGPT wouldn't be the last word in generative AI. That was confirmed this week when Google announced the impending release of Bard, its response to ChatGPT's success.
And in the left corner…
Google's Bard will be released to beta testers next week, with a wider public release in the following weeks. Like ChatGPT, Bard is built on a large language model: ChatGPT uses OpenAI's GPT-3.5, while Bard uses Google's LaMDA. Large language models (LLMs) are neural networks, systems loosely inspired by the structure of the human brain and run on powerful computers. LLMs are fed massive amounts of text and, from that data, learn to produce natural, authoritative responses to text-based queries.
To paraphrase AI scholar Murray Shanahan: LLMs are mathematical models of the statistical distribution of "tokens" (words, parts of words, or individual characters, including punctuation) in a huge repository of human-generated text. If you prompt the model with, for example, "The first person to walk on the moon was…," and it responds with "Neil Armstrong," it doesn't mean the model "knows" anything about the Apollo missions or the moon. It means the question you actually asked was, "Given the statistical distribution of words in the vast public corpus of English text, what words are most likely to follow the sequence 'The first person to walk on the moon was'?" And those words are "Neil Armstrong."
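Shanahan's point can be illustrated with a drastically simplified stand-in for an LLM: a bigram model that counts, in a toy corpus, how often each word follows another, then predicts the statistically most likely continuation. Real LLMs learn far richer distributions over billions of tokens, but the principle - continuation by statistics, not knowledge - is the same. (The corpus and function names here are invented for illustration.)

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the "vast public corpus" an LLM is trained on.
corpus = (
    "the first person to walk on the moon was neil armstrong . "
    "the first person on the moon was neil armstrong . "
    "neil armstrong walked on the moon ."
)

# Count how often each token follows each preceding token (a bigram model).
counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1

def most_likely_next(token):
    """Return the token most likely to follow `token` in the corpus."""
    return counts[token].most_common(1)[0][0]

# "Given the distribution of words in the corpus, what word is most
# likely to follow 'was'?"
print(most_likely_next("was"))   # -> neil
print(most_likely_next("neil"))  # -> armstrong
```

The model "answers" correctly without knowing anything about the moon; it has only tallied which words tend to follow which.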
But mathematical models or not, LLMs are powerful stuff. Google's LaMDA (Language Model for Dialogue Applications) came into the limelight last year when one of Google's engineers claimed that LaMDA had achieved sentience. That wasn't a layman making the claim; it was a seasoned technologist. Google dismissed the claim as unfounded and ultimately fired the engineer. But the point stands: LLMs are just that good.
ChatGPT has enjoyed instant "celebrity status" since its release, so it's no surprise Google feels the need to reaffirm its stature as an AI pioneer. Since its launch just a few months ago, ChatGPT has created all sorts of credible content, from poems to essays to working code. And in that short time, it has already surpassed 100 million users.
Google has stated that it will integrate Bard into its search engine so it can start dealing with "complex queries" and provide authoritative and easily digestible answers. Google's example of a complex query is whether the piano or the guitar is easier to learn - it takes more than a list of URLs to answer.
To that end, Google CEO Sundar Pichai underscored Bard's ability to produce responses based on up-to-date information in a recent blog post. He states:
"Bard seeks to combine the breadth of the world's knowledge with the power, intelligence, and creativity of our large language models. It draws on information from the web to provide fresh, high-quality responses." And he follows up with some real-world examples: "Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA's James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills."
Pichai further states that Google's latest AI technologies - LLMs like LaMDA and PaLM, its image generator Imagen, and its music creator MusicLM - will all be baked into its search engine, giving it the ability to take in complex queries, which demand subtle and multifaceted answers, and output natural, easily digestible responses.
In practice, these comprehensive answers will appear at the top of the results page, above the traditional URL-based search results. Regarding the above example of whether the piano or the guitar is easier to learn, this is what Bard had to say on the matter:
"Some say the piano is easier to learn, as the finger and hand movements are more natural, and learning and memorizing notes can be easier. Others say that it's easier to learn chords on the guitar, and you could pick up a strumming pattern in a couple of hours. Music teachers often recommend that beginners practice at least 1 hour per day. To get to an intermediate level, it typically takes 3-6 months of regular practice for guitar, and 6-18 months for piano."
Not a bad answer for a brainless computer, right?
Call to developers
Google also intends to make the technology behind LaMDA available to developers. It will start onboarding developers, creatives, and businesses and have them play with its Generative Language API, currently powered by LaMDA but with more language models to follow. The goal is to create a suite of tools that make it easier to build innovative applications that integrate the technology.
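Google hasn't yet published what requests to the Generative Language API will look like, but a text-generation call to an LLM service typically means posting a JSON payload with a prompt and a few sampling controls. The sketch below is purely hypothetical: the endpoint URL, field names, and parameters are assumptions for illustration, not Google's documented API surface.

```python
import json

# Hypothetical endpoint; a real integration would use Google's published
# URL and authenticate with an API key.
API_ENDPOINT = "https://example.googleapis.com/v1/models/lamda:generateText"

def build_request(prompt, temperature=0.7, candidate_count=1):
    """Assemble an illustrative JSON payload for a text-generation request.

    All field names here are assumptions about what such an API might accept.
    """
    return {
        "prompt": {"text": prompt},
        "temperature": temperature,        # higher values = more varied output
        "candidateCount": candidate_count, # number of completions to return
    }

payload = build_request("Is the piano or the guitar easier to learn?")
print(json.dumps(payload, indent=2))
```

In practice, the payload would be POSTed to the service and the generated candidates read out of the JSON response; the details will depend on the API Google actually ships.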
Community is key to the tech's success.
We've entered the age of generative AI, and there's no turning back. You can expect to see more and more generative AI models cropping up as the tech keeps marching forward toward ubiquity. And integrating it into something that is already ubiquitous - like Google search - will be a major driver of that technological democratization.
We look forward to seeing how generative AI will further turn science fiction into just science.
Modev was founded in 2008 on the simple belief that human connection is vital in the era of digital transformation. Modev believes markets are made. From mobile to voice, Modev has helped develop ecosystems for new waves of technology. Today, Modev produces market-leading events such as VOICE Global, presented by Google Assistant; VOICE Summit, the most important voice-tech conference globally; and the Webby award-winning VOICE Talks internet talk show. Modev staff, better known as "Modevators," include community building and transformation experts worldwide. To learn more about Modev and the breadth of events and ecosystem services offered live, virtually, locally, and nationally, visit modev.com.