<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel><description>Advancing AI safety through collaboration, research, and education&#xA;https://www.horizonomega.org/</description><link>https://bsky.app/profile/horizonomega.org</link><title>@horizonomega.org - Horizon Omega</title><item><link>https://bsky.app/profile/horizonomega.org/post/3mlve7hir4k2k</link><description>Next Steps for AI Welfare Research&#xA;by @jeffsebo.bsky.social, Director of the Center for Mind, Ethics, and Policy, New York University&#xA;&#xA;Our first edition of the AI Welfare Seminars: presentations on AI welfare, consciousness, and moral status.&#xA;&#xA;Tuesday, May 19, 1PM EDT, on Zoom&#xA;RSVP: luma.com/sc15vlr8&#xA;https://luma.com/sc15vlr8</description><pubDate>15 May 2026 12:42 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mlve7hir4k2k</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mlecxzcpwc2b</link><description>We are opening Ω Labs, a coworking, event, and meeting space for the AI safety and governance community of Montréal. &#xA;&#xA;Nous ouvrons Ω Labs, un espace pour le cotravail, les événements, et les rencontres pour la communauté montréalaise de sûreté et gouvernance de l&#39;IA.&#xA;https://horizonomega.substack.com/p/labs-is-open-in-montreal</description><pubDate>08 May 2026 18:04 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mlecxzcpwc2b</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mlaeslk6as2j</link><description>The Secure Program Synthesis Hackathon&#xA;Montréal edition at Ω Labs&#xA;Friday, May 22, 6 PM, to Sunday&#xA;&#xA;At the intersection of program synthesis, formal methods, and security.
Tracks: Spec Elicitation, Spec Validation, Spec-Driven Development, Adversarial Robustness for ITPs and proof tools&#xA;&#xA;luma.com/4q2fnc39&#xA;https://luma.com/4q2fnc39</description><pubDate>07 May 2026 04:27 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mlaeslk6as2j</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mkukt3usds2x</link><description>Guaranteed Safe AI Seminars, May 2026:&#xA;&#xA;Formal Guarantees for Frontier AI&#xA;Gagandeep Singh – Assistant Professor at UIUC, develops formal certification, monitoring, and synthesis methods for frontier AI systems&#xA;&#xA;Thursday, May 14, 1PM EDT&#xA;RSVP: luma.com/euzt6ey7&#xA;https://luma.com/euzt6ey7</description><pubDate>02 May 2026 11:42 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mkukt3usds2x</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mkbnyrf43k2o</link><description>AI Safety Papers We Love, a new reading group in Montréal.&#xA;&#xA;First edition on Wednesday, May 6, 18:30, at Ω Labs:&#xA;&#xA;Multi-Agent Risks from Advanced AI, Hammond et al. (2025), presented by Orpheus.&#xA;&#xA;luma.com/dnz5hly3&#xA;https://luma.com/dnz5hly3</description><pubDate>24 Apr 2026 23:19 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mkbnyrf43k2o</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mhqoros7qs2v</link><description>Guaranteed Safe AI Seminars, April 2026:&#xA;&#xA;Provably Safe Neural Network Controllers via Differential Dynamic Logic&#xA;Samuel Teuber – PhD Candidate, Institute of Information Security and Dependability (KASTEL), Karlsruhe Institute of Technology&#xA;&#xA;Thursday, April 9, 1 PM EDT&#xA;RSVP: luma.com/920d2h7p&#xA;https://luma.com/920d2h7p</description><pubDate>23 Mar 2026 18:27 +0000</pubDate><guid 
isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mhqoros7qs2v</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mfjq5uehxs2r</link><description>Guaranteed Safe AI Seminars, March 2026:&#xA;&#xA;Benchmarks for AI-assisted Formal Verification&#xA;By Theodore Ehrenborg, AI Safety researcher at the Beneficial AI Foundation and PIBBSS&#xA;&#xA;Thursday, March 12, 1PM EST&#xA;luma.com/nk8ce7so&#xA;https://luma.com/nk8ce7so</description><pubDate>23 Feb 2026 13:12 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mfjq5uehxs2r</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mfhxjplyu222</link><description>Montréal AI safety event, Tuesday March 3rd, 7 PM:&#xA;&#xA;When Is a Human Actually “Overseeing” an AI System?&#xA;&#xA;By @shalalehrismani.bsky.social, postdoc at McGill+Mila, working on system safety, HCI, and the societal impact of AI, and executive director of the Open Roboethics Institute.&#xA;&#xA;luma.com/7kugvplz&#xA;https://luma.com/7kugvplz</description><pubDate>22 Feb 2026 20:19 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mfhxjplyu222</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mf6osujfsk2d</link><description>Montréal AI safety event, Tuesday Feb 24, 7 PM:&#xA;&#xA;Rights Balancing: How the Future Rights of AI Workers will also Protect Human Rights&#xA;&#xA;By Jonathan Simon, assistant professor of Philosophy at UdeM, and &#xA;Heather Alexander, human rights lawyer.
Co-founders of  @futureofcit.bsky.social.&#xA;&#xA;luma.com/hcrp5nmu&#xA;https://luma.com/hcrp5nmu</description><pubDate>19 Feb 2026 03:49 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mf6osujfsk2d</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mdyda2dsvk2i</link><description>Montréal AI safety event, Tuesday Feb 17, 7 PM:&#xA;&#xA;What hackers talk about when they talk about AI: Early-stage diffusion of a cybercrime innovation&#xA;&#xA;​Talk by Benoît Dupont, Chair in Cyber-resilience, Human-Centric Cybersecurity Partnership director, and Criminology prof at UdeM.&#xA;&#xA;luma.com/gifbf18i&#xA;https://luma.com/gifbf18i</description><pubDate>03 Feb 2026 21:41 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mdyda2dsvk2i</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mdyczoamu22i</link><description>For the Montréal AI safety community: let&#39;s meet on some Fridays for coworking + bouldering, starting this week.&#xA;&#xA;Pour la communauté montréalaise de la sûreté de l&#39;IA : retrouvons-nous certains vendredis pour du coworking et de l’escalade de bloc, commençant cette semaine.&#xA;&#xA;luma.com/8nztwanh&#xA;https://luma.com/8nztwanh</description><pubDate>03 Feb 2026 21:37 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mdyczoamu22i</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mdv3v3vggc2b</link><description>Lancement de l&#39;infolettre Montréal AI safety, ethics, governance. Mensuelle, sur les événements, opportunités, politique et recherche.&#xA;&#xA;Launching the Montréal AI safety, ethics, governance newsletter. 
Monthly on events, opportunities, policy, research.&#xA;&#xA;newsletter.aisafetymontreal.org/fevrier-2026/&#xA;https://newsletter.aisafetymontreal.org/fevrier-2026/</description><pubDate>02 Feb 2026 14:51 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mdv3v3vggc2b</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mdaup2zars2a</link><description>Montréal AI safety event, Tuesday Feb 3, 7 PM:&#xA;&#xA;AI Pluralism: What Models Do, Who Decides, and Why It Matters&#xA;&#xA;Talk by Rashid Mushkani, AI &amp; Urban Studies PhD Candidate @ University of Montréal and Mila&#xA;&#xA;luma.com/azd9irwp&#xA;https://luma.com/azd9irwp</description><pubDate>25 Jan 2026 13:49 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mdaup2zars2a</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3md4e4zgj6k2t</link><description>The Technical AI Governance Challenge&#xA;&#xA;A weekend build-sprint to create verifiable AI safety compliance tools: compute/hardware tracking, privacy-preserving proofs, risk monitoring, and cross-border verification.&#xA;&#xA;Montréal edition, Fri Jan 30 eve to Sun afternoon&#xA;RSVP: luma.com/yyrdzld4&#xA;https://luma.com/yyrdzld4</description><pubDate>23 Jan 2026 18:42 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3md4e4zgj6k2t</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mcs2taetts2r</link><description>Montréal AI safety event, Tuesday Jan 27, 7 PM:&#xA;&#xA;Living with Digital Surveillance in China&#xA;&#xA;A presentation by @arianeollier.bsky.social, Prof and Canada Research Chair in Digital Regulation at Work at ESG-UQAM&#xA;&#xA;luma.com/5vtq60el&#xA;https://luma.com/5vtq60el</description><pubDate>19 Jan 2026 16:29 +0000</pubDate><guid 
isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mcs2taetts2r</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3mblysyxyf22p</link><description>AI Manipulation Hackathon (Montréal edition, Friday evening to Sunday midday)&#xA;&#xA;Build benchmarks, detection + mitigations against deception/sycophancy/sandbagging/dark patterns. $2000 prizes + possible Apart Fellowship + Paris workshop slot.&#xA;&#xA;RSVP: luma.com/mf69p5pi&#xA;https://luma.com/mf69p5pi</description><pubDate>04 Jan 2026 13:12 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3mblysyxyf22p</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3maoevts7kk2a</link><description>HΩ&#39;s year in review&#xA;&#xA;In 2025, we produced the Guaranteed Safe AI Seminars, cultivated the Montréal AI safety community, held the Limits to Control workshop, and more.&#xA;&#xA;We are also moving from a volunteer org to one with more capacity.&#xA;&#xA;Onward to 2026!&#xA;&#xA;https://horizonomega.substack.com/p/h-2025-review</description><pubDate>23 Dec 2025 18:28 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3maoevts7kk2a</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3m73zriashc2w</link><description>Montréal event, Tuesday December 16, 7 PM:&#xA;&#xA;Can AI systems be conscious? How could we know? And why does it matter?&#xA;&#xA;A presentation by @joaquimstreicher.bsky.social, Ph.D. 
candidate in Neuroscience and co-founder of the Montréal Initiative for Consciousness Science (MONIC)&#xA;&#xA;luma.com/paky3mih&#xA;https://luma.com/paky3mih</description><pubDate>03 Dec 2025 17:56 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3m73zriashc2w</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3m6xjouho7s2q</link><description>Montréal event, Tuesday December 2, 7 PM:&#xA;&#xA;Veracity in the Age of Persuasive AI – a presentation by Taylor Lynn Curtis, researcher on misinformation at Mila.&#xA;&#xA;luma.com/mmuqltzq&#xA;https://luma.com/mmuqltzq</description><pubDate>01 Dec 2025 22:58 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3m6xjouho7s2q</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3m6i2g5me7c2j</link><description>Guaranteed Safe AI Seminars, December 2025:&#xA;&#xA;Safe Learning Under Irreversible Dynamics via Asking for Help&#xA;Benjamin Plaut – Postdoc at CHAI studying guaranteed safe AI&#xA;&#xA;Thursday, December 11, 1 PM EST&#xA;luma.com/wcww6xpl&#xA;https://luma.com/wcww6xpl</description><pubDate>25 Nov 2025 19:14 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3m6i2g5me7c2j</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3m5k62stwa22u</link><description>&#34;When AI met Automated Reasoning&#34;&#xA;by Clark Barrett, director of the Stanford Center for Automated Reasoning and co-director of the Stanford Center for AI Safety.&#xA;&#xA;The seminar took place today as part of the Guaranteed Safe AI Seminars.&#xA;&#xA;The recording is now available: https://www.youtube.com/watch?v=AxASkEW8gYE</description><pubDate>13 Nov 2025 22:00 +0000</pubDate><guid 
isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3m5k62stwa22u</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3m2mltzyui22w</link><description>We have updated our website; our work, purpose, and mission are now expressed more clearly.&#xA;&#xA;www.horizonomega.org&#xA;https://www.horizonomega.org/</description><pubDate>07 Oct 2025 16:56 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3m2mltzyui22w</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3lukillywsk24</link><description>Guaranteed Safe AI Seminars, August 2025 edition:&#xA;&#xA;Towards Safe and Hallucination-Free Coding AIs, by GasStationManager (Independent Researcher)&#xA;&#xA;On August 7, 13:00 EDT. To join: lu.ma/6zpndj0w&#xA;https://lu.ma/6zpndj0w</description><pubDate>22 Jul 2025 12:21 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3lukillywsk24</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3lqmqykkwck2m</link><description>You are invited to the July 2025 edition of the Guaranteed Safe AI Seminars. It will be on July 10, 13:00 EDT.&#xA;&#xA;Engineering Rational Cooperative AI via Inverse Planning and Probabilistic Programming – Tan Zhi Xuan&#xA;&#xA;lu.ma/yldjxmej&#xA;https://lu.ma/yldjxmej</description><pubDate>02 Jun 2025 12:16 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3lqmqykkwck2m</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3lphheoh5l424</link><description>Limits to Control Workshop&#xA;June 11-13, Louisville, KY&#xA;&#xA;The workshop aims to explore the theoretical and practical limits of controlling increasingly autonomous AI. 
The three days of sessions aim to identify controllability boundaries, key research directions, ...&#xA;&#xA;https://horizonomega.substack.com/p/limits-to-control-workshop</description><pubDate>18 May 2025 16:16 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3lphheoh5l424</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3knlgvfibcm2r</link><description>Announcing the Virtual AI Safety Unconference 2024&#xA;&#xA;May 23rd to May 26th 2024. Online, participate from anywhere.&#xA;&#xA;A research event aiming to facilitate collaboration and progress on problems of AI risk, featuring participant-driven activities, discussions, and talks.&#xA;&#xA;vaisu.ai&#xA;&#xA;– VAISU team&#xA;https://vaisu.ai</description><pubDate>13 Mar 2024 14:01 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3knlgvfibcm2r</guid></item><item><link>https://bsky.app/profile/horizonomega.org/post/3knecup5man2u</link><description>AI Safety Events Tracker, March 2024 edition.&#xA;Newsletter listing upcoming events and open calls related to AI safety.&#xA;https://aisafetyeventstracker.substack.com/p/ai-safety-events-tracker-march-2024</description><pubDate>10 Mar 2024 18:01 +0000</pubDate><guid isPermaLink="false">at://did:plc:iofyvdnhthbohl7vl6kpwngr/app.bsky.feed.post/3knecup5man2u</guid></item></channel></rss>