[See my post with Edelman on AI takeover and Aaronson on AI scenarios. This is a rough post, with various pieces of fine print, caveats, and other discussions missing. Cross-posted on Windows on Theory.] One challenge for considering the implications of “artificial intelligence,” especially of the “general” variety, is that we don’t have a consensus definition of intelligence. The Oxford Companion … Continue reading GPT as an “Intelligence Forklift.”
Author: Boaz Barak
5 worlds of AI
Scott Aaronson and I wrote a post about 5 possible worlds for (the progress of) Artificial Intelligence. See Scott's blog for the post itself and discussions. The post was, of course, inspired by the classic essay on the 5 worlds of computational complexity by Russell Impagliazzo, who will be turning 60 soon - Happy birthday!
Thoughts on AI safety
Last week, I gave a lecture on AI safety as part of my deep learning foundations course. In this post, I’ll try to write down a few of my thoughts on this topic. (The lecture was three hours, and this blog post will not cover all of what we discussed or all the points that … Continue reading Thoughts on AI safety
TCS for all travel grants and speaker nominations (guest post by Elena Grigorescu)
TCS for All (previously TCS Women) Spotlight Workshop at STOC 2023/Theory Fest: Travel grants and call for speaker nominations You are cordially invited to our TCS for All Spotlight Workshop! The workshop will be held on Thursday, June 22nd, 2023 (2-4pm), in Orlando, Florida, USA, as part of the 54th Symposium on Theory of Computing (STOC) and TheoryFest! The workshop … Continue reading TCS for all travel grants and speaker nominations (guest post by Elena Grigorescu)
Interview about this blog in the Bulletin of the EATCS
Luca Trevisan recently interviewed me for the Bulletin of the EATCS (see link for the full issue, including an interview with Alexandra Silva, and technical columns by Naama Ben-David, Ryan Williams, and Yuri Gurevich). With Luca's permission, I am cross-posting it here. (I added some hyperlinks to relevant documents.) Q. Boaz, thanks for taking the … Continue reading Interview about this blog in the Bulletin of the EATCS
Provable Copyright Protection for Generative Models
See arxiv link for paper by Nikhil Vyas, Sham Kakade, and me. Conditional generative models hold much promise for novel content creation. Whether it is generating a snippet of code, a piece of text, or an image, such models can potentially save substantial human effort and unlock new capabilities. But there is a fly in this ointment. … Continue reading Provable Copyright Protection for Generative Models
Chatting with Claude
In my previous post I discussed how large language models can be thought of as the hero of the movie "Memento" - their long-term memory is intact but they have limited context, which can be an issue in retrieving not just facts that happened after the training, but also the relevant facts that did appear … Continue reading Chatting with Claude
Theory announcements: Prizes, CFP, and more
Related to my last post on the FOCS 2023 conjectures track, the chairs have now put together an FAQ about it. ACM SIGACT is soliciting nominations for several prizes: the Knuth Prize by February 15, the Distinguished Service Award by March 1, and the Gödel Prize by March 31. NSF is looking for a Program Director for the Algorithmic … Continue reading Theory announcements: Prizes, CFP, and more
New in FOCS 2023: A conjectures track
Update 1/27: Amit, Shubhangi, and Thomas put together an FAQ about this. This year, FOCS 2023 will include something new: a Conjectures Track, separate from the Main Track. Submissions to the Main Track will be evaluated along similar lines as STOC/FOCS papers typically are, aiming to accept papers that obtain the very best results across … Continue reading New in FOCS 2023: A conjectures track
Memento and Large Language Models
[Mild spoilers for the 2000 film Memento. See this doc for the full ChatGPT transcripts. --Boaz] Leonard Shelby, the protagonist of Christopher Nolan's film "Memento", suffers from anterograde amnesia. He remembers everything up to the time in which he was the victim of a violent attack, but cannot form new memories after that. He uses … Continue reading Memento and Large Language Models