A short history of AI, and what it is (and isn’t)
This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
It’s the simplest questions that are often the hardest to answer. That applies to AI, too. Even though it’s a technology being sold as a solution to the world’s problems, nobody seems to know what it really is. It’s a label that’s been slapped on technologies ranging from self-driving cars to facial recognition, chatbots to fancy Excel. But in general, when we talk about AI, we talk about technologies that make computers do things we think need intelligence when done by people.
For months, my colleague Will Douglas Heaven has been on a quest to dig deeper into why everybody seems to disagree about exactly what AI is, why nobody really knows, and why you’re right to care. He’s been talking to some of the biggest thinkers in the field, asking them, simply: What is AI? The result is a great piece that looks at the past and present of AI to see where it is going next. You can read it here.
Here’s a taste of what to expect:
Artificial intelligence almost wasn’t called “artificial intelligence” at all. The computer scientist John McCarthy is credited with coming up with the term in 1955 when writing a funding application for a summer research program at Dartmouth College in New Hampshire. But more than one of McCarthy’s colleagues hated it. “The word ‘artificial’ makes you think there’s something kind of phony about this,” said one. Others preferred the terms “automata studies,” “complex information processing,” “engineering psychology,” “applied epistemology,” “neural cybernetics,” “non-numerical computing,” “neuraldynamics,” “advanced automatic programming,” and “hypothetical automata.” Not quite as cool and sexy as AI.
AI has several zealous fandoms. AI has acolytes, with a faith-like belief in the technology’s current power and inevitable future improvement. The buzzy popular narrative is shaped by a pantheon of big-name players, from Big Tech marketers in chief like Sundar Pichai and Satya Nadella to edgelords of industry like Elon Musk and Sam Altman to celebrity computer scientists like Geoffrey Hinton. As AI hype has ballooned, a vocal anti-hype lobby has risen in opposition, ready to smack down its ambitious, often wild claims. As a result, it can feel as if different camps are talking past one another, not always in good faith.
This sometimes seemingly ridiculous debate has huge consequences that affect us all. AI has a lot of big egos and vast sums of money at stake. But more than that, these disputes matter when industry leaders and opinionated scientists are summoned by heads of state and lawmakers to explain what this technology is and what it can do (and how scared we should be). They matter when this technology is being built into software we use every day, from search engines to word-processing apps to assistants on your phone. AI is not going away. But if we don’t know what we’re being sold, who’s the dupe?
For example, meet the TESCREALists. A clunky acronym (pronounced “tes-cree-all”) replaces an even clunkier list of labels: transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. It was coined by Timnit Gebru, founder of the Distributed AI Research Institute and former co-lead of Google’s ethical AI team, and Émile Torres, a philosopher and historian at Case Western Reserve University. Some anticipate human immortality; others predict humanity’s colonization of the stars. The common tenet is that an all-powerful technology is not only within reach but inevitable. TESCREALists believe that artificial general intelligence, or AGI, could not only fix the world’s problems but level up humanity. Gebru and Torres link several of these worldviews—with their common focus on “improving” humanity—to the racist eugenics movements of the 20th century.
Is AI math or magic? Either way, people have strong, almost religious beliefs in one or the other. “It’s offensive to some people to suggest that human intelligence could be re-created through these kinds of mechanisms,” Ellie Pavlick, who studies neural networks at Brown University, told Will. “People have strong-held beliefs about this issue—it almost feels religious. On the other hand, there’s people who have a little bit of a God complex. So it’s also offensive to them to suggest that they just can’t do it.”
Will’s piece really is the definitive look at this whole debate. No spoilers—there are no simple answers, but lots of fascinating characters and viewpoints. I’d recommend you read the whole thing here—and see if you can make your mind up about what AI really is.
Now read the rest of The Algorithm
Deeper Learning
AI can make you more creative—but it has limits
Generative AI models have made it simpler and quicker to produce everything from text passages and images to video clips and audio tracks. But while AI’s output can certainly seem creative, do these models actually boost human creativity?
A new study looked at how people used OpenAI’s large language model GPT-4 to write short stories. The model was helpful—but only to an extent. The researchers found that while AI improved the output of less creative writers, it made little difference to the quality of the stories produced by writers who were already creative. The stories in which AI had played a part were also more similar to each other than those dreamed up entirely by humans. Read more from Rhiannon Williams.
Bits and Bytes
Robot-packed meals are coming to the frozen-food aisle
Found everywhere from airplanes to grocery stores, prepared meals are usually packed by hand. AI-powered robotics is changing that. (MIT Technology Review)
AI is poised to automate today’s most mundane manual warehouse task
Pallets are everywhere, but training robots to stack them with goods takes forever. Fixing that could be a tangible win for commercial AI-powered robots. (MIT Technology Review)
The Chinese government is going all-in on autonomous vehicles
The government is finally allowing Tesla to bring its Full Self-Driving feature to China. New government permits let companies test driverless cars on the road and allow cities to build smart road infrastructure that will tell these cars where to go. (MIT Technology Review)
The US and its allies took down a Russian AI bot farm on X
The US seized control of a sophisticated Russian operation that used AI to push propaganda through nearly a thousand covert accounts on the social network X. Western intelligence agencies traced the propaganda mill to an officer of Russia’s FSB intelligence service and to a former senior editor at the state-controlled publication RT, formerly called Russia Today. (The Washington Post)
AI investors are starting to wonder: Is this just a bubble?
After a massive investment in the language-model boom, the biggest beneficiary is Nvidia, which designs and sells the best chips for training and running modern AI models. Investors are now starting to ask what LLMs are actually going to be used for, and when they will start making them money. (New York magazine)
Goldman Sachs thinks AI is overhyped, wildly expensive, and unreliable
Meanwhile, the major investment bank published a research paper about the economic viability of generative AI. It notes that there is “little to show for” the huge amount of spending on generative AI infrastructure and questions “whether this large spend will ever pay off in terms of AI benefits and returns.” (404 Media)
The UK politician accused of being AI is actually a real person
A hilarious story about how Mark Matlock, a candidate for the far-right Reform UK party, was accused of being a fake candidate created with AI after he didn’t show up to campaign events. Matlock has assured the press he is a real person, and he wasn’t around because he had pneumonia. (The Verge)