Talking Papers Podcast
🎙️ Welcome to the Talking Papers Podcast: Where Research Meets Conversation 🌟
Are you ready to explore the fascinating world of cutting-edge research in computer vision, machine learning, artificial intelligence, graphics, and beyond? Join us on this podcast by researchers, for researchers, as we venture into the heart of groundbreaking academic papers.
At Talking Papers, we've reimagined the way research is shared. In each episode, we engage in insightful discussions with the main authors of academic papers, offering you a unique opportunity to dive deep into the minds behind the innovation.
📚 Structure That Resembles a Paper 📝
Just like a well-structured research paper, each episode takes you on a journey through the academic landscape. We provide a concise TL;DR (abstract) to set the stage, followed by a thorough exploration of related work, approach, results, conclusions, and a peek into future work.
🔍 Peer Review Unveiled: "What Did Reviewer 2 Say?" 📢
But that's not all! We bring you an exclusive bonus section where authors candidly share their experiences in the peer review process. Discover the insights, challenges, and triumphs behind the scenes of academic publishing.
🚀 Join the Conversation 💬
Whether you're a seasoned researcher or an enthusiast eager to explore the frontiers of knowledge, Talking Papers Podcast is your gateway to in-depth, engaging discussions with the experts shaping the future of technology and science.
🎧 Tune In and Stay Informed 🌐
Don't miss out on the latest in research and innovation.
Subscribe and stay tuned for our enlightening episodes. Welcome to the future of research dissemination – welcome to Talking Papers Podcast!
Enjoy the journey! 🌠
#TalkingPapersPodcast #ResearchDissemination #AcademicInsights
Beyond Periodicity - Sameera Ramasinghe
In this episode of the Talking Papers Podcast, I hosted Sameera Ramasinghe. We had a great chat about his paper "Beyond Periodicity: Towards a Unifying Framework for Activations in Coordinate-MLPs", published in ECCV 2022 as an oral presentation.
In this paper, they propose a new family of activation functions for coordinate MLPs and provide a theoretical analysis of their effectiveness. Their main proposition is that the stable rank is a good measure and design tool for such activation functions. They show that their proposed activations outperform the traditional ReLU and sine activations for image parametrization and novel view synthesis. They further show that while the proposed family of activations does not require positional encoding, it can still benefit from it, allowing the number of layers to be reduced significantly.
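To make the idea concrete, here is a minimal sketch of a coordinate MLP using a Gaussian activation, one representative member of the family discussed in the paper. The network widths, sigma value, and random initialisation below are illustrative assumptions, not the paper's exact setup:

```python
import numpy as np

def gaussian_activation(x, sigma=0.1):
    # Gaussian activation: exp(-x^2 / (2*sigma^2)).
    # sigma controls the bandwidth of the signals the network can represent.
    return np.exp(-x**2 / (2 * sigma**2))

def coordinate_mlp_forward(coords, weights, biases, sigma=0.1):
    """Forward pass of a coordinate MLP mapping (x, y) -> RGB,
    with no positional encoding on the input coordinates."""
    h = coords
    for W, b in zip(weights[:-1], biases[:-1]):
        h = gaussian_activation(h @ W + b, sigma)
    return h @ weights[-1] + biases[-1]  # linear output layer

# Hypothetical example: a randomly initialised 2 -> 64 -> 64 -> 3 network
rng = np.random.default_rng(0)
dims = [2, 64, 64, 3]
weights = [rng.normal(0, 1 / np.sqrt(m), (m, n)) for m, n in zip(dims[:-1], dims[1:])]
biases = [np.zeros(n) for n in dims[1:]]

# Query a small grid of pixel coordinates in [0, 1]^2
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
coords = np.stack([xs.ravel(), ys.ravel()], axis=-1)  # shape (16, 2)
rgb = coordinate_mlp_forward(coords, weights, biases)
print(rgb.shape)  # (16, 3)
```

In practice such a network would be trained with gradient descent to regress pixel colours from coordinates (image parametrization); the sketch above only shows the forward pass and how the Gaussian nonlinearity replaces ReLU or sine.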
Sameera is currently an applied scientist at Amazon and the CTO and co-founder of ConscientAI. His research focuses on theoretical machine learning and computer vision. This work was done when he was a postdoc at the Australian Institute for Machine Learning (AIML). He completed his PhD at the Australian National University (ANU). We first met back in 2019, when I was a research fellow at ANU and he was still doing his PhD. I immediately noticed that we share research interests, and before long I flagged him as a rising star in the field. It was a pleasure to chat with Sameera, and I am looking forward to reading his future papers.
AUTHORS
Sameera Ramasinghe, Simon Lucey
RELATED PAPERS
📚NeRF
📚Fourier Features Let Networks Learn High-Frequency Functions in Low Dimensional Domains
📚On the Spectral Bias of Neural Networks
LINKS AND RESOURCES
📚 Paper
💻Code
To stay up to date with Sameera's latest research, follow him on:
👨🏻‍🎓Google Scholar
👨🏻‍🎓LinkedIn
Recorded on November 14th, 2022.
🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com
📧Subscribe to our mailing list: http://eepurl.com/hRznqb
🐦Follow us on Twitter: https://twitter.com/talking_papers
🎥YouTube Channel: https://bit.ly/3eQOgwP