Machines of Faithful Obedience

[Crossposted on LessWrong] Throughout history, technological and scientific advances have had both good and ill effects, but their overall impact has been overwhelmingly positive. Thanks to scientific progress, most people on earth live longer, healthier, and better lives than they did centuries or even decades ago. I believe that AI (including AGI and ASI) can do …

Emergent abilities and grokking: Fundamental, Mirage, or both?

One of the lessons we have seen in language modeling is the power of scale. The original GPT paper of Radford et al. noted that at some point during training, the model “acquired” the ability to do sentiment analysis of a sentence X by predicting whether it is more likely to be followed by “very …

Reflections on “Making the Atomic Bomb”

[Cross posted on lesswrong; see here for my prior writings; update 8/25/23: added a paragraph on secrecy] [it appears almost certain that in the immediate future, it would be] possible to set up a nuclear chain reaction in a large mass of uranium by which vast amounts of power and large quantities of new radium-like elements would …

The shape of AGI: Cartoons and back of envelope

[Cross posted on lesswrong; see here for my prior writings] There have been several studies to estimate the timelines for artificial general intelligence (aka AGI). Ajeya Cotra wrote a report in 2020 (see also 2022 update) forecasting AGI based on comparisons with “biological anchors.” That is, comparing the total number of FLOPs invested in a training run or inference with …

Metaphors for AI, and why I don’t like them

Photo from National Photo Company Collection; see also (Sobel, 2017). [Cross posted on lesswrong and windows on theory; see here for my prior writings] “computer, n. /kəmˈpjuːtə/. One who computes; a calculator, reckoner; specifically a person employed to make calculations in an observatory, in surveying, etc.”, Oxford English Dictionary. “There is no reason why mental as well as bodily labor should not …

The (local) unit of intelligence is FLOPs

[Crossposting again on Lesswrong and Windowsontheory, with the hope I am not overstaying my welcome in LW.] Wealth can be measured by dollars. This is not a perfect measurement: it’s hard to account for purchasing power and circumstances when comparing people across varying countries or time periods. However, within a particular place and time, one can measure …

GPT as an “Intelligence Forklift.”

[See my post with Edelman on AI takeover and Aaronson on AI scenarios. This is a rough post, with various fine print, caveats, and other discussions missing. Cross-posted on Windows on Theory.] One challenge for considering the implications of “artificial intelligence,” especially of the “general” variety, is that we don’t have a consensus definition of intelligence. The Oxford Companion …

AI will change the world, but won’t take it over by playing “3-dimensional chess”.

By Boaz Barak and Ben Edelman [Cross-posted on Lesswrong; see also Boaz’s posts on longtermism and AGI via scaling, as well as other "philosophizing" posts. This post also puts us in Aaronson's "Reform AI Alignment" religion] [Disclaimer: Predictions are very hard, especially about the future. In fact, this is one of the points of this essay. Hence, …