Harvard, we have a problem
[Oct 27, 2023: I was hoping for this piece to be posted as an op-ed in the Crimson, since I really want to reach students who are well-intentioned but may not realize they are causing harm. However, it was rejected, and so I am posting it here.] [Update Jan 3, 2024: In light of my new …
Petition by CS & Math Laureates: Freedom for kidnapped children
[Guest post by Shafi Goldwasser; see also PDF version of document] On the morning of Saturday, October 7, 2023, Hamas launched an attack near the Israel/Gaza border. In villages and towns near the border they went from door-to-door annihilating whole families. They killed children in front of their parents and siblings. They abused women. In …
Open letter to the Harvard community
October 12, 2023 To President Claudine Gay and the Harvard University leadership We attach an open letter for your consideration written on October 8, 2023 and signed by more than 350 faculty members at Harvard. The drafters of the letter (listed below) welcome President Gay’s statement unequivocally condemning Hamas’s terrorism and distancing the university from …
Letter to Harvard President Claudine Gay
[Update 10/11: I have not received a response to this letter, but on the night of October 9th, Harvard's leadership released a statement, and President Gay added her own statement on October 10th; you can read both here. These partially address what I raised in the letter, but still fall short of condemning, rather than …
Reflections on “Making the Atomic Bomb”
[Cross posted on lesswrong; see here for my prior writings; update 8/25/23: added a paragraph on secrecy] [it appears almost certain that in the immediate future, it would be] possible to set up a nuclear chain reaction in a large mass of uranium by which vast amounts of power and large quantities of new radium-like elements would …
Cartesian Cafe podcast interviews me on cryptography
[Unrelated announcement: Yael Kalai, Ran Raz, Salil Vadhan, Nisheeth Vishnoi and I recently completed our survey of Avi Wigderson's work for the volume on Abel prize winners. Given the breadth and depth of Avi's work, our survey could only cover a small sample of it, but we still hope it can be a useful resource …
The shape of AGI: Cartoons and back of envelope
[Cross posted on lesswrong; see here for my prior writings] There have been several studies to estimate the timelines for artificial general intelligence (aka AGI). Ajeya Cotra wrote a report in 2020 (see also 2022 update) forecasting AGI based on comparisons with “biological anchors.” That is, comparing the total number of FLOPs invested in a training run or inference with …
Metaphors for AI, and why I don’t like them
Photo from National Photo Company Collection; see also (Sobel, 2017) [Cross posted on lesswrong and windows on theory; see here for my prior writings] “computer, n. /kəmˈpjuːtə/. One who computes; a calculator, reckoner; specifically a person employed to make calculations in an observatory, in surveying, etc”, Oxford English Dictionary. “There is no reason why mental as well as bodily labor should not …
The (local) unit of intelligence is FLOPs
[Crossposting again on Lesswrong and Windowsontheory, with the hope I am not overstaying my welcome in LW.] Wealth can be measured by dollars. This is not a perfect measurement: it’s hard to account for purchasing power and circumstances when comparing people across varying countries or time periods. However, within a particular place and time, one can measure …
GPT as an “Intelligence Forklift.”
[See my post with Edelman on AI takeover and Aaronson on AI scenarios. This is a rough post, with various fine print, caveats, and other discussions missing. Cross-posted on Windows on Theory.] One challenge for considering the implications of “artificial intelligence,” especially of the “general” variety, is that we don’t have a consensus definition of intelligence. The Oxford Companion …