How generative AI could reinvent what it means to play
Generative AI is opening the door to entirely new kinds of in-game interactions that are open-ended, creative, and unexpected. The game may not always have to end.
Here we go again. Chatbot fails are now a familiar meme. Chatbots' tendency to make things up is holding them back. But making things up is just what they do.
A fake photo—or memory-based reconstruction, as the Barcelona-based design studio Domestic Data Streamers puts it—of the scene that a real photo might have captured. The fake snapshots are blurred and distorted, but they can still rewind a lifetime in an instant.
The biggest models are now so complex that researchers are studying them as if they were strange natural phenomena, carrying out experiments and trying to explain the results.
The number of referrals from services using the Limbic chatbot rose by 15% over the study’s three-month period, compared with a 6% rise for the services that weren’t using it.
A team of researchers at New York University wondered if AI could learn like a baby. What could an AI model do when given a far smaller data set—the sights and sounds experienced by a single child learning to talk?
“With the infrastructure in place—the base generative models from OpenAI, Google, Meta, and a handful of others—people other than the ones who built it will start using and misusing it in ways its makers never dreamed of.”
“…How to rein in, or “align,” hypothetical future models that are far smarter than we are, known as superhuman models. Alignment means making sure a model does what you want it to do and does not do what you don’t want it to do…”
“Put simply, in the context of the current paradigm of building larger- and larger-scale AI systems, there is no AI without Big Tech. With vanishingly few exceptions, every startup, new entrant, and even AI research lab is dependent on these firms.”
The key idea behind Copilot and other programs like it, sometimes called code assistants, is to put the information that programmers need right next to the code they are writing.
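That key idea can be sketched in a few lines: an assistant gathers the code surrounding the cursor and hands it to a generative model as context. This is an illustrative sketch only, not Copilot’s actual implementation; `build_prompt` and `complete` are hypothetical names standing in for a real editor integration and model call.

```python
# Sketch of the core idea behind code assistants: put the code the
# programmer is writing right next to the model that suggests more of it.

def build_prompt(file_text: str, cursor: int, window: int = 500) -> str:
    """Collect the code just before and after the cursor position."""
    before = file_text[max(0, cursor - window):cursor]
    after = file_text[cursor:cursor + window]
    return f"{before}<CURSOR>{after}"

def complete(prompt: str) -> str:
    """Placeholder for a generative model; a real assistant would send
    the prompt to a trained model and return its suggested code."""
    return "pass  # model-suggested completion"

source = "def add(a, b):\n    "
suggestion = complete(build_prompt(source, cursor=len(source)))
```

In a real assistant the context window also pulls in open files, imports, and comments, so the model’s suggestion fits the surrounding code rather than arriving cold.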
OpenAI’s charter—a document so sacred that employees’ pay is tied to how well they adhere to it—further declares that OpenAI’s “primary fiduciary duty is to humanity.”
Tech companies are putting this deeply flawed tech in the hands of millions of people and allowing AI models access to sensitive information such as their emails, calendars, and private messages. In doing so, they are making us all vulnerable to scams, phishing, and hacks on a massive scale.
“We saw some remarkable things come up in the last couple of weeks,” said Kendall. “I never would have thought to ask something like this, but look—” He typed: “How many stories is the building on the right?” Three stories.
These kinds of results are feeding a hype machine predicting that these machines will soon come for white-collar jobs, replacing teachers, doctors, journalists, and lawyers.