Google Gemini, explained
Another explainer – this one from The Verge – because Google has gotten lost in the weeds of product naming.
The model can now listen to uploaded audio files and churn out information from things like earnings calls or audio from videos without the need to refer to a written transcript.
When Gemini 1.5 Pro rolls out fully, users will be able to dump in whole book series, codebases, entire legal case histories, or really anything they want. This Google model can ingest all this information quickly and then answer questions about the data.
There are moments when AI takes your breath away. This is one of them.
Gemini 1.5 promises to be faster and more efficient thanks to a specialization technique called “mixture of experts,” also known as MoE. Instead of running the entire model every time it receives a query, Gemini’s MoE can use just the relevant parts…
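The core idea of mixture-of-experts is easy to sketch: a small gating function scores every expert for each input, but only the top-scoring few actually run, so most of the model's parameters sit idle on any given query. Here's a minimal toy illustration in plain Python — the expert and gate "weights" are random stand-ins, not anything resembling Gemini's actual architecture:

```python
import math
import random

random.seed(0)

DIM, N_EXPERTS, TOP_K = 4, 8, 2

# Toy "experts": each is just a weight vector here; a real MoE layer
# uses a full feed-forward network per expert.
experts = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]
# Gating weights: used to score how relevant each expert is to an input.
gate = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(N_EXPERTS)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x):
    # 1. The gate scores every expert, but we only *run* the top-k.
    scores = [dot(g, x) for g in gate]
    top = sorted(range(N_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    weights = softmax([scores[i] for i in top])
    # 2. Output is the weighted sum of just the selected experts' outputs;
    #    the other N_EXPERTS - TOP_K experts never execute.
    out = sum(w * dot(experts[i], x) for w, i in zip(weights, top))
    return out, top

token = [0.5, -0.2, 0.8, 0.1]
out, used = moe_forward(token)
print(f"ran {len(used)} of {N_EXPERTS} experts: {sorted(used)}")
```

The efficiency win is that compute scales with the number of *active* experts (here 2 of 8), not the total parameter count — which is presumably what "use just the relevant parts" refers to.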
“Gemini 1.5 Pro comes with a standard 128,000 token context window. But starting today, a limited group of developers and enterprise customers can try it with a context window of up to 1 million tokens”