<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel><description>AI safeguards &amp; gov. research. PhD student @MIT_CSAIL (mnr. Public Policy) and Fellow at Harvard Berkman Klein. Fmr. UK AISI.  https://stephencasper.com/</description><link>https://bsky.app/profile/scasper.bsky.social</link><title>@scasper.bsky.social - Cas (Stephen Casper)</title><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mlntqszmu42k</link><description>Governing AI is kind of like grief:&#xA;1. Denial -- &#34;...next token prediction...stochastic parrots...&#34;&#xA;2. Anger -- &#34;Elon did WHAT?!&#34;&#xA;3. Bargaining -- &#34;Maybe Newsom will sign a watered-down bill.&#34;&#xA;4. Depression -- &#34;Oh, this is what Collingridge was talking about.&#34;&#xA;5. Acceptance -- :/</description><pubDate>12 May 2026 12:59 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mlntqszmu42k</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mllu4etr6s2j</link><description>When talking about AI and its future, I wish it were more of a faux pas to make vague appeals to the benefits of technological progress -- and I don&#39;t just mean &#34;AI curing cancer&#34;. 
Technological progress in general probably isn&#39;t as good as we normally think it is.</description><pubDate>11 May 2026 18:00 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mllu4etr6s2j</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mknzokluwk2v</link><description>A recent (short) talk of mine is now up on YouTube.&#xA;&#xA;It is about why, if you are working for an AI company on making its systems safer, I think you should consider quitting.&#xA;&#xA;https://www.youtube.com/watch?v=Ugu77dx2lec&amp;list=PLpvkFqYJXcrcz8Js7lmdr9XT-vvm0EnzM</description><pubDate>29 Apr 2026 21:19 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mknzokluwk2v</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mjp5mhbnik2r</link><description>🚨 One week left to submit your AI-gov-related research to the TAIGR workshop.</description><pubDate>17 Apr 2026 14:37 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mjp5mhbnik2r</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mj5s6yp6oc2r</link><description>Now that Mythos is released, we can start the clock. I&#39;d bet that within 9 months, a system with comparable cyber capabilities will be widely available (either open-weight or openly served). Hopefully, we just have enough time to improve cyberdefense enough to be ready.</description><pubDate>10 Apr 2026 16:58 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mj5s6yp6oc2r</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mj2uhqbmj62o</link><description>🧵🧵🧵&#xA;A provocation to the mechanistic interpretability researchers of the world...</description><pubDate>09 Apr 2026 13:00 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mj2uhqbmj62o</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3miydz7w7rj2q</link><description>Me in December: &#34;Wow, freedom of information laws are awesome. I can&#39;t wait to get info.&#34;&#xA;&#xA;Me now: Enters my 4th month of getting bullied and gaslit by 3 governments at once.&#xA;&#xA;Anyway, if you need to do US FOIA, UK FOI, or EU FOIA requests, let me know -- I have advice.</description><pubDate>08 Apr 2026 13:01 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3miydz7w7rj2q</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mivtplzamh2f</link><description>Reasons to submit to the ICML Technical AI Gov. Research (TAIGR) workshop:&#xA;- 8-page limit&#xA;- Broad scope, AI gov-related&#xA;- Workshops don&#39;t trigger dual submission policies&#xA;- Best paper awards both overall and by category&#xA;- Great community&#xA;- Cool stickers&#xA;&#xA;Deadline April 24!</description><pubDate>07 Apr 2026 13:04 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mivtplzamh2f</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mihr7t74f22z</link><description>OpenReview for the #TAIGR workshop for #ICML on technical AI governance research is live as of today. 
(Not an April Fools joke).&#xA;&#xA;See the call and link to OpenReview here: taigr-workshop.com&#xA;https://taigr-workshop.com/</description><pubDate>01 Apr 2026 22:42 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mihr7t74f22z</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mihdijmgks2d</link><description>I wish more CS papers had tables of contents. Makes them much more navigable. I think one reason it&#39;s rare is that submission venues have page limits. So there&#39;s often just not room. I wish venues would conditionally relax length requirements by the length of an optional ToC.</description><pubDate>01 Apr 2026 18:36 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mihdijmgks2d</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mi77cey3ug2b</link><description>Today, I realized that, taken together, Appendices 3.2, 3.3, and 3.5 of the EU AI Act Codes of Practice *unambiguously* require open-weight model developers to let external evaluators conduct adversarial fine-tuning evals on their frontier models. Good.</description><pubDate>29 Mar 2026 13:00 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mi77cey3ug2b</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mi2j3m6kjl2n</link><description>Just like we have mass school shootings, we now also have mass school AI non-consensual nudifications.</description><pubDate>27 Mar 2026 16:12 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mi2j3m6kjl2n</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mhtmaonvx22k</link><description>Announcing the technical AI Governance Research (TAIGR) ICML workshop in July! Submissions (up to 8 pages) are due April 24. 
Co-submission with ICML and NeurIPS is encouraged.&#xA;&#xA;taigr-workshop.com</description><pubDate>24 Mar 2026 22:19 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mhtmaonvx22k</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mhilb73ak42f</link><description>Do you do technical AI research? In this talk, I argue that you 🫵 should see yourself quite literally as a type of policymaker. Thanks @far.ai.&#xA;&#xA;https://www.youtube.com/watch?v=Ekp-eg7TcIY&amp;list=PLpvkFqYJXcrenHjBk75z4qiKwTsYrvr9n</description><pubDate>20 Mar 2026 13:03 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mhilb73ak42f</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mgusytqh7k2l</link><description>The official text isn&#39;t out yet &amp; will matter a lot. But Europe may be moving to treat AI nudification similarly to how pirated media or CSAM is treated: as something that, even though it will always be available, can possibly be made much less accessible.&#xA;https://www.politico.eu/article/eu-grok-x-elon-musk-ai-nudification-ban-in-wake-of-scandal/</description><pubDate>12 Mar 2026 16:28 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mgusytqh7k2l</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mg6cn3pbxk2i</link><description>Using a well-timed screenshot and my phone&#39;s cache, I was able to recover some of the since-deleted tweet from Jeremy Lewin, where he admitted that the government sees the new OpenAI contract language as just &#34;memorializing&#34; a vague &#34;commitment&#34; rather than drawing any real new lines.</description><pubDate>03 Mar 2026 17:36 +0000</pubDate><guid 
isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mg6cn3pbxk2i</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mfu422hdi224</link><description>As someone who is not a fan of @anthropic.com...I think you should use Claude.</description><pubDate>27 Feb 2026 16:12 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mfu422hdi224</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mfmn5iss7w2z</link><description>Given:&#xA;1. Last summer, frontier closed-weight model devs started to share warnings about nasty model capabilities. &#xA;&amp;&#xA;2. Open-weight models are a few months behind closed ones. &#xA;&#xA;We should not be surprised if there is a big cyber/terror incident enabled by a powerful open-weight AI model in 2026.</description><pubDate>24 Feb 2026 16:57 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mfmn5iss7w2z</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mfmdb2f5zn2n</link><description>The 2025 AI Agent Index is now available on arXiv.&#xA;&#xA;arxiv.org/abs/2602.17753&#xA;&#xA;https://x.com/StephenLCasper/status/2024846583574204516&#xA;https://arxiv.org/abs/2602.17753v1</description><pubDate>24 Feb 2026 14:00 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mfmdb2f5zn2n</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mfdg43a6zj2e</link><description>If you&#39;re responding to @facct.bsky.social rebuttals this week, feel free to use my rebuttal template. 
&#xA;&#xA;https://docs.google.com/document/d/1iNmLWb0z8F4Td-WykXdhTQfzsX26zXm7pY2e34SVmFo/edit?usp=sharing</description><pubDate>21 Feb 2026 00:57 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mfdg43a6zj2e</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mfcbmiliai2e</link><description>🚨The 2025 AI Agent Index is out! 🚨&#xA;Amidst recent buzz over 🦀 and NIST&#39;s new agent initiative, we find:&#xA;- Selective reporting – esp. on safety&#xA;- Almost all agents backend just 3 model families&#xA;- Many agents don’t ID themselves as bots online&#xA;- Big US/China gaps&#xA;- And more…</description><pubDate>20 Feb 2026 14:04 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mfcbmiliai2e</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mfa2nk6gav2a</link><description>The International AI Safety Report is now available in all 6 official UN languages. 
🎉&#xA;&#xA;https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026</description><pubDate>19 Feb 2026 16:54 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mfa2nk6gav2a</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mf5mcdwlfs25</link><description>A project idea that you can feel free to scoop me on -- I&#39;m not working on this (yet?).&#xA;&#xA;Lately, when I have talked with AI governance people about non-consensual AI deepfakes and CSAM, the conversation almost always touches on a particular open problem...</description><pubDate>18 Feb 2026 17:31 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mf5mcdwlfs25</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3meydd2dabk2r</link><description>🚨 New paper led by Joe Kwon with GovAI.&#xA;&#xA;Are you worried about OpenAI automating dev &amp; evals with AI agents? What about Grok reading all of your tweets &amp; info to profile you? Some of the most consequential *internal* deployments of AI systems are in regulatory grey areas.</description><pubDate>16 Feb 2026 15:07 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3meydd2dabk2r</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mewqqwihpi27</link><description>Did you ever notice that the image at the top of OpenAI&#39;s &#34;Our approach to AI Safety&#34; article is a giant red flag???</description><pubDate>16 Feb 2026 00:03 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mewqqwihpi27</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3mejb435lac26</link><description>On the @csis.org podcast with Greg Allen and Stephen Clare, we talk about the International AI Safety Report, technical safeguards, and what engineers can (and can&#39;t!) 
do to save us from AI risks.&#xA;&#xA;https://www.youtube.com/watch?v=2VlXhGottLw&amp;list=PLnArnDQHeUqeErR8mbkEGUqzGD2b5O3Cc</description><pubDate>10 Feb 2026 15:18 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3mejb435lac26</guid></item><item><link>https://bsky.app/profile/scasper.bsky.social/post/3me3niqtld22j</link><description>Personally, I wish Anthropic would go a step further and also mention at the end of their ads that they aren&#39;t currently embroiled in multiple lawsuits over the deaths of children.&#xA;&#xA;🧵🧵🧵 What I like about the new Claude Ads&#xA;&#xA;https://www.theverge.com/ai-artificial-intelligence/873686/anthropic-claude-ai-ad-free-super-bowl-advert-chatgpt?utm_content=buffercb563&amp;utm_medium=social&amp;utm_source=bsky.app&amp;utm_campaign=verge_social</description><pubDate>05 Feb 2026 05:22 +0000</pubDate><guid isPermaLink="false">at://did:plc:yiwxijbeyioxmvbtbclppgga/app.bsky.feed.post/3me3niqtld22j</guid></item></channel></rss>