<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0"><channel><description>Reader in Computer Vision and Machine Learning @ School of Informatics, University of Edinburgh. &#xA;https://homepages.inf.ed.ac.uk/omacaod  </description><link>https://bsky.app/profile/oisinmacaodha.bsky.social</link><title>@oisinmacaodha.bsky.social - Oisin Mac Aodha</title><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3mjefjvmgvc2n</link><description>Registration for the 29th British Machine Vision Association (BMVA) Computer Vision Summer School is now open! It will take place from the 13th to 17th of July 2026 at Durham University, UK. &#xA;&#xA;For more information, please check out the official webpage:  &#xA;cvss.bmva.org&#xA;https://cvss.bmva.org/</description><pubDate>13 Apr 2026 08:00 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3mjefjvmgvc2n</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3mhbcnamzwk26</link><description>Postdoc opportunity (2 years) at the University of Edinburgh on generative models.&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>17 Mar 2026 15:40 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3mhbcnamzwk26</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3mcd6l5ey5k2i</link><description>We will be running the 13th iteration of the workshop on Fine-Grained Visual Categorization (FGVC) at CVPR this summer. We have an open call for papers that explore questions in visual recognition. 
&#xA;&#xA;Please see our website for more information:  &#xA;https://sites.google.com/view/fgvc13/submission&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>13 Jan 2026 18:26 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3mcd6l5ey5k2i</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3mbomrivpks2j</link><description>We have two open PhD positions at the interface of AI and ecology. Start dates are Sept 2026. &#xA;&#xA;We are looking for candidates with a background in AI/CS, Math, Stats, or Physics who are passionate about solving challenging problems in these domains. &#xA;&#xA;Application deadline is in two weeks.</description><pubDate>05 Jan 2026 14:14 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3mbomrivpks2j</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m7auiqu65c2n</link><description>The goal of semantic correspondence estimation is to establish semantically meaningful matches across different images of an object. It turns out, however, that recent supervised methods generalize poorly beyond the annotated keypoints seen during training. #NeurIPS2025</description><pubDate>05 Dec 2025 16:05 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m7auiqu65c2n</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m7atsvxfc22t</link><description>Today at #NeurIPS2025, Bingchen Zhao will be presenting our new LLM Speedrunning Benchmark, which evaluates an LLM agent&#39;s ability to reproduce scientific findings in the context of LLM training. 
&#xA;&#xA;Stop by poster #3313 from 4:30pm to 7:30pm at #NeurIPS2025 today in San Diego to learn more.</description><pubDate>05 Dec 2025 15:53 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m7atsvxfc22t</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m765ura6gc2a</link><description>Leonie @bossemel.bsky.social will be presenting CleverBirds, our new human visual learning dataset, at #NeurIPS2025 in San Diego today.&#xA;&#xA;Stop by poster #2012 from 11am-2pm to learn more.&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>04 Dec 2025 14:15 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m765ura6gc2a</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m73tnepeas2p</link><description>Want to discover and visualise the differences between two different machine learning models? &#xA;&#xA;We will be presenting Neehar&#39;s work on &#34;Representational Difference Explanations&#34; at #NeurIPS2025 in San Diego from 4:30pm to 7:30pm PT today. Poster #1115. &#xA;&#xA;Full paper:  &#xA;arxiv.org/abs/2505.23917&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>03 Dec 2025 16:06 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m73tnepeas2p</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m73t63oxa22p</link><description>We will be presenting Elle Miller&#39;s (@elle-miller.bsky.social) work on Enhancing Tactile-based Reinforcement Learning for Robotic Control at #NeurIPS2025 in San Diego today. &#xA;&#xA;Stop by poster #2317 from 11am to 2pm PT today to learn more. 
&#xA;&#xA;Full paper:&#xA;arxiv.org/abs/2510.21609</description><pubDate>03 Dec 2025 15:58 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m73t63oxa22p</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m6ia4a2ogk2g</link><description>Check out this fantastic TED talk from Sara Beery @sarameghanbeery.bsky.social &#xA;&#xA;In it, she talks about how computer vision techniques can assist with extracting a wealth of information and insights from large image collections that depict the natural world.&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>25 Nov 2025 20:56 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m6ia4a2ogk2g</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m6ehuquzd222</link><description>Traditional clustering methods aim to group unlabelled data points based on their similarity to each other. However, clustering, in the absence of additional information, is an ill-posed problem as there may be many different, yet equally valid, ways to partition a dataset.</description><pubDate>24 Nov 2025 09:05 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m6ehuquzd222</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m65gclve4k27</link><description>We have a PhD opportunity (start date Sep 2026) at the University of Edinburgh at the intersection of biodiversity mapping and zoonotic disease prediction. 
&#xA;&#xA;It is part of the UKRI AI Centre for Doctoral Training in Biomedical Innovation based in the School of Informatics: &#xA;ai4bi-cdt.ed.ac.uk</description><pubDate>21 Nov 2025 13:48 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m65gclve4k27</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m5yqczzpcs2m</link><description>Check out Neehar&#39;s (@therealpaneni.bsky.social) upcoming #NeurIPS2025 paper titled: &#xA;Representational Difference Explanations&#xA;&#xA;He has a cool new approach for enabling more direct and interpretable model comparisons by isolating representational differences between models. &#xA;&#xA;arxiv.org/abs/2505.23917&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>19 Nov 2025 17:04 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m5yqczzpcs2m</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m5gzb3j3yc2k</link><description>Check out Leonie&#39;s (@bossemel.bsky.social) upcoming NeurIPS Datasets and Benchmarks paper about a really interesting new dataset for evaluating models of human visual learning.&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>12 Nov 2025 15:56 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m5gzb3j3yc2k</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m5efsghduk2n</link><description>Great PhD opportunity to work with Serge and Toke on a really interesting computer vision / machine learning problem.&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>11 Nov 2025 15:02 +0000</pubDate><guid 
isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m5efsghduk2n</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m3wze4g43c2f</link><description>Viacheslav&#39;s group does fantastic work in machine learning, specifically in geometric learning and uncertainty quantification. &#xA;&#xA;For those looking to start a PhD in the near future, this is a brilliant opportunity to work with a great advisor! &#xA;&#xA;#phd #ml #edinburgh&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>24 Oct 2025 13:50 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m3wze4g43c2f</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3m3cgb52lns2r</link><description>Interested in doing a PhD in machine learning at the University of Edinburgh starting Sept 2026?  &#xA;&#xA;My group works on topics in vision, machine learning, and AI for climate. &#xA;&#xA;For more information and details on how to get in touch, please check out my website:  &#xA;homepages.inf.ed.ac.uk/omacaod</description><pubDate>16 Oct 2025 09:15 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3m3cgb52lns2r</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3lz6w6swl7c2f</link><description>Working at the intersection of AI and Climate/Conservation? &#xA;&#xA;If so, check out our upcoming workshop which will be taking place at #EurIPS in Copenhagen in December 2025. 
&#xA;https://sites.google.com/g.harvard.edu/aicceurips &#xA;&#xA;@euripsconf.bsky.social&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>19 Sep 2025 12:59 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3lz6w6swl7c2f</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3lyl4cmdqsk2d</link><description>The US-UK Scientific Forum on Measuring Biodiversity for Addressing the Global Biodiversity Crisis summary report is now available. &#xA;&#xA;I participated earlier this year and it was really enlightening. The report explores tools, challenges, and solutions for tackling the biodiversity crisis.&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>11 Sep 2025 15:55 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3lyl4cmdqsk2d</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3luah67r5ss2s</link><description>This week at #ICML we are presenting our new work titled Feedforward Few-shot Species Range Estimation. &#xA;&#xA;TLDR;&#xA;* Our model, FS-SINR, can estimate a species&#39; range from a few observations&#xA;* It does not require retraining for previously unseen species&#xA;* It can integrate text and image information</description><pubDate>18 Jul 2025 12:29 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3luah67r5ss2s</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3lrnj6tlub22x</link><description>CrossSDF: 3D Reconstruction of Thin Structures From Cross-Sections&#xA;&#xA;We will be presenting our work on thin structure reconstruction at the final poster session (4-6pm) at #CVPR2025 today. 
&#xA;&#xA;Stop by poster #457 to learn more.</description><pubDate>15 Jun 2025 12:55 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3lrnj6tlub22x</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3lrkyewnamk2n</link><description>DepthCues: Evaluating Monocular Depth Perception in Large Vision Models&#xA;&#xA;Do automated monocular depth estimation methods use similar visual cues to humans? &#xA;&#xA;To learn more, stop by poster #405 in the evening session (17:00 to 19:00) today at #CVPR2025.</description><pubDate>14 Jun 2025 12:48 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3lrkyewnamk2n</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3lrkxrg7ohk2j</link><description>MVSAnywhere: Zero-Shot Multi-View Stereo&#xA;&#xA;Looking for a multi-view stereo depth estimation model which works anywhere, in any scene, with any range of depths? &#xA;&#xA;If so, stop by our poster #81 today in the morning session (10:30 to 12:20) at #CVPR2025.</description><pubDate>14 Jun 2025 12:37 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3lrkxrg7ohk2j</guid></item><item><link>https://bsky.app/profile/oisinmacaodha.bsky.social/post/3lrfyyl3lpk2s</link><description>Check out Nikolas @tsagkas.bsky.social and Danier&#39;s work on understanding the limitations of pre-trained visual representations for visuomotor robot learning at the Embodied AI Workshop at CVPR 2025 today.&#xA;&#xA;[contains quote post or other embedded content]</description><pubDate>12 Jun 2025 13:16 +0000</pubDate><guid isPermaLink="false">at://did:plc:4o336jhppoeho4zofqlt3eyb/app.bsky.feed.post/3lrfyyl3lpk2s</guid></item></channel></rss>