# Talking Papers Podcast

🎙️ Welcome to the Talking Papers Podcast: Where Research Meets Conversation 🌟

Are you ready to explore the fascinating world of cutting-edge research in computer vision, machine learning, artificial intelligence, graphics, and beyond? Join us on this podcast by researchers, for researchers, as we venture into the heart of groundbreaking academic papers.

At Talking Papers, we've reimagined the way research is shared. In each episode, we engage in insightful discussions with the main authors of academic papers, offering you a unique opportunity to dive deep into the minds behind the innovation.

📚 Structure That Resembles a Paper 📝

Just like a well-structured research paper, each episode takes you on a journey through the academic landscape. We provide a concise TL;DR (abstract) to set the stage, followed by a thorough exploration of related work, approach, results, conclusions, and a peek into future work.

🔍 Peer Review Unveiled: "What Did Reviewer 2 Say?" 📢

But that's not all! We bring you an exclusive bonus section where authors candidly share their experiences in the peer review process. Discover the insights, challenges, and triumphs behind the scenes of academic publishing.

🚀 Join the Conversation 💬

Whether you're a seasoned researcher or an enthusiast eager to explore the frontiers of knowledge, Talking Papers Podcast is your gateway to in-depth, engaging discussions with the experts shaping the future of technology and science.

🎧 Tune In and Stay Informed 🌐

Don't miss out on the latest in research and innovation.

Subscribe and stay tuned for our enlightening episodes. Welcome to the future of research dissemination – welcome to Talking Papers Podcast!

Enjoy the journey! 🌠

#TalkingPapersPodcast #ResearchDissemination #AcademicInsights


## SPSR - Silvia Sellán

In this episode of the Talking Papers Podcast, I hosted Silvia Sellán. We had a great chat about her paper "Stochastic Poisson Surface Reconstruction", published at SIGGRAPH Asia 2022.

In this paper, they take on the task of surface reconstruction with a probabilistic twist. They take the well-known Poisson Surface Reconstruction algorithm and generalize it to give it a full statistical formalism. Essentially, their method quantifies the uncertainty of surface reconstruction from a point cloud. Instead of outputting an implicit function, they represent the shape as a modified Gaussian process. This unique perspective and interpretation enables conducting statistical queries, for example: given a point, is it on the surface? Is it inside the shape?
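As a rough illustration of what such a statistical query could look like (this is not the authors' code; the function name, the Gaussian assumption on the implicit value, and the sign convention that negative means "inside" are my own assumptions for the sketch):

```python
import math

def p_inside(mean, std):
    """Probability that a query point is inside the shape, assuming the
    implicit value at that point is Gaussian with the given mean and
    standard deviation, and that negative implicit values mean 'inside'.
    Uses the standard normal CDF, written via math.erf."""
    return 0.5 * (1.0 + math.erf((-mean) / (std * math.sqrt(2.0))))

# The transcript's example: an implicit value of 0.2 plus/minus 0.3.
# Classic PSR would just say "0.2 > 0, so outside"; the stochastic view
# instead reports roughly a 25% chance of being inside.
print(p_inside(0.2, 0.3))
```

The point of the sketch is only that once the reconstruction returns a distribution rather than a single value, "is this point inside?" becomes a probability query instead of a hard yes/no.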

Silvia is currently a PhD student at the University of Toronto. Her research focuses on computer graphics and geometry processing. She is a Vanier Doctoral Scholar, an Adobe Research Fellow, and the winner of the 2021 University of Toronto FAS Dean's Doctoral Excellence Scholarship. I have been following Silvia's work for a while, and since I have some work on surface reconstruction, when SPSR came out I knew I wanted to host her on the podcast (and, gladly, she agreed). Silvia is currently looking for postdoc and faculty positions to start in the fall of 2024. I am really looking forward to seeing which institute snatches her up.

In our conversation, I particularly liked her explanation of Gaussian processes with the example "How long does it take my supervisor to answer an email, as a function of the time of day the email was sent?" You can't read that in any book. But we also took an unexpected pause from the usual episode structure to discuss the question of "papers" as a medium for disseminating research. Don't miss it.

AUTHORS

Silvia Sellán, Alec Jacobson

ABSTRACT

We introduce a statistical extension of the classic Poisson Surface Reconstruction (PSR) algorithm for recovering shapes from 3D point clouds. Instead of outputting an implicit function, we represent the reconstructed shape as a modified Gaussian Process, which allows us to conduct statistical queries (e.g., the likelihood of a point in space being on the surface or inside a solid). We show that this perspective: improves PSR's integration into the online scanning process, broadens its application realm, and opens the door to other lines of research such as applying task-specific priors.

RELATED PAPERS

📚Poisson Surface Reconstruction

📚Geometric Priors for Gaussian Process Implicit Surfaces

📚Gaussian processes for machine learning

LINKS AND RESOURCES

📚 Paper

💻Project page

To stay up to date with Silvia's latest research, follow her on:

👩🏻‍🎓Google Scholar

🎧Subscribe on your favourite podcast app: https://talking.papers.podcast.itzikbs.com

📧Subscribe to our mailing list: http://eepurl.com/hRznqb

🐦Follow us on Twitter: https://twitter.com/talking_papers

🎥YouTube Channel: https://bit.ly/3eQOgwP

The key idea is we extend this very well-known algorithm called Poisson surface reconstruction. We give it a statistical formalism and study the space of possible surfaces that are reconstructed from a point cloud.

Welcome to Talking Papers, the podcast where we talk about papers and let the papers do the talking. We host early career academics and PhD students to share their cutting-edge research in computer vision, machine learning, and everything in between. I'm your host, Itzik Ben-Shabat, a researcher by day and podcaster by night. Let's get started.

Itzik:Hello and welcome to Talking Papers, the podcast where we talk about papers and let the papers do the talking. Today we'll be talking about the paper Stochastic Poisson Surface Reconstruction, published at SIGGRAPH Asia 2022. I am happy to host the first author of the paper, Silvia Sellán. Hello and welcome to the podcast.

Silvia Sellan:Hi.

Itzik:Can you introduce yourself?

Silvia Sellan:Uh, yes. I'm Silvia Sellán. I'm a PhD student at the University of Toronto. I'll be finishing up in one year.

Itzik:Excellent. And who are the co-authors of the paper?

Silvia Sellan:This is a joint work with my advisor, Professor Alec Jacobson, from the University of Toronto.

Itzik:All right, so let's get started in a TL;DR kind of format, two, three sentences. What is this paper about?

Silvia Sellan:This is about quantifying the uncertainty of surface reconstruction from point clouds. So there's this classic algorithm called Poisson surface reconstruction that takes a point set and outputs an implicit representation of a surface. We take that algorithm and generalize it to give it a full statistical formalism.

Itzik:So what is the problem that the paper's addressing?

Silvia Sellan:The overarching problem is surface reconstruction. So you get a point cloud as input, which can be the output of a 3D scanner or a LiDAR scanner or something like that, and you want to recover a fully determined surface. So you can imagine that you're a car driving down the street, an autonomous car driving down the street. You scan your surroundings using some LiDAR scanner, and you want to know what they look like so that you know that you're not crashing into anything. Traditionally, if you ask any computer graphics researcher, they'll tell you the easy way of doing it is by using a thing called Poisson surface reconstruction. This was an algorithm published in 2006 that takes that point cloud and outputs an implicit function, so something that tells you in or out for any point in space. However, it only gives you one possible implicit function. And of course, recovering a fully determined surface from a point cloud is an underdetermined problem, right? There are many possible surfaces that could interpolate the points in the point cloud. Instead of just outputting one, we extend Poisson surface reconstruction and we output every possible surface with a specific probability. So every possible surface that could be reconstructed from a given point cloud, with different probabilities.

Itzik:Okay, so essentially, given a point cloud as input, you could find multiple ways to connect between the points, right? So finding the surface that these points were sampled on, that's the big question that everybody wants to solve. And you're saying, well, there's an infinite number of surfaces that could theoretically go through these points, especially if there's like a gap in the point cloud.

Silvia Sellan:That's right. That's right.

Itzik:And the Poisson surface reconstruction method basically says, well, there's only one. Here you go. That's what my output is. And your method is saying, well, there could be other options.

Silvia Sellan:That's exactly it. And in a way, we interpret Poisson reconstruction as giving you the most likely output under some constraints, under some prior conditions. But sometimes you don't want just the most likely, right? You can imagine if you're the one that's driving the car that's doing the scanning, you want to know, okay, I won't crash into anything, not just under the most likely reconstruction, but under 99% of the reconstructions, so that you keep driving, right? So we quantify that uncertainty of the reconstruction.

Itzik:Right. So this kind of ties to the question, why is this problem important? And I think the example of autonomous driving is one of these amazing examples where you say, well, I don't wanna find out about the collision after I collided. I wanna know beforehand.

Silvia Sellan:That's right. That's a great example. I was recently talking to some people about this work, and they work in automated surgery. So they were also telling me that sometimes you want to be very, very sure that you're not cutting through a nerve. So you want to be absolutely sure of what your nerve looks like. And apparently in some software they do use a Poisson surface reconstruction algorithm. So that would be yet another example of a situation where you really want to quantify the uncertainty, cuz you don't wanna paralyze someone.

Itzik:Super interesting. So now we know why this is useful, but what are the main challenges in this domain?

Silvia Sellan:Well, the main challenge is that it's not especially hard to quantify the uncertainty of reconstruction in general, to devise an algorithm that takes a point cloud and gives you an uncertain surface. The problem is that we already have this other algorithm called Poisson surface reconstruction that combines many of the good things we would want in a surface reconstruction algorithm, and they also have very good, efficient code online, so almost everyone who is doing point cloud reconstruction is using Poisson surface reconstruction. So the challenge was not to devise some other algorithm, but to generalize this one. So we needed to really understand Poisson surface reconstruction and give it a new statistical formalism. For me, the main challenge was that I'm a graphics, or a geometry, researcher; I'm not a statistics researcher. So it meant familiarizing myself with a lot of statistical learning literature so that I would understand where the statistical formalism comes in. Our paper is mainly theoretical, and that was the main theoretical challenge that we struggled with. It took a couple of weeks over last year's winter Christmas break to really understand where we can plug the statistical formalism into Poisson surface reconstruction.

Itzik:Okay. Can't wait to hear more about that in the approach section, but before we go down to that, let's talk a little bit about the contributions. So what are the main contributions of the paper?

Silvia Sellan:Well, the main contribution, like I said, is that we give a statistical formalism to Poisson surface reconstruction. That's the one-sentence version. The two-sentence version would be that usually Poisson surface reconstruction gives you just one value of an implicit function. We extend that, and instead of one value we give a mean and a variance that fully determine a Gaussian distribution of what the value at that point is. I know that people in your field use this term, coordinate network. This is not a network, but it's kind of a coordinate function that implicitly defines a surface. Think of it as quantifying the variance of the output of a coordinate network.

Itzik:So I'm super excited to get down to what the approach is doing, but before we do, let's talk a little bit about the related works. So if you had to name two or three works that are crucial for anyone coming to read your paper, which ones would those be?

Silvia Sellan:Well, the most obvious one is Poisson surface reconstruction, by Kazhdan et al. That's a 2006 Symposium on Geometry Processing paper. That's the main work that we're extending. We give a summary in our paper, so you could read our paper without reading Poisson surface reconstruction, but that's the main work that we build on. So definitely that's the most important one. Then we use Gaussian processes, I'll explain this later, but we use Gaussian processes to formalize this statistical understanding. So the two I would most recommend someone read are Gaussian Processes for Machine Learning, which is a book, and also Geometric Priors for Gaussian Process Implicit Surfaces by Martens et al. That's a paper that basically uses Gaussian processes for the specific case of surface reconstruction, and it's written more for a graphics audience. So it was easier for me to understand than the more general Gaussian processes for machine learning papers. So if you come from a graphics background, start with Geometric Priors for Gaussian Process Implicit Surfaces.

Itzik:Okay, excellent. I will be sure to put links to all of those relevant references in the episode's description. Personally, I think that any researcher working on surface reconstruction has to read the Poisson surface reconstruction paper, like it's a must-read. So it's time to dive deep into the approach. So tell us, what did you do and how did you do it?

Silvia Sellan:Well, we combine Poisson surface reconstruction with this concept called Gaussian processes. I'll be careful in how I explain this, cuz I know that most of your audience is from machine learning, not necessarily graphics. Basically, you need to understand both things to understand our approach. And our approach relies on one very specific interpretation of Poisson surface reconstruction, and one very specific interpretation of Gaussian processes, and then we put those together. So what we did is we went through Poisson surface reconstruction and we interpreted it to work in two steps. So basically, Poisson surface reconstruction takes a point cloud as input. That point cloud is oriented, so it comes with a bunch of normal vectors. The first step of Poisson surface reconstruction is to take those vectors and interpolate them into a full vector field that's defined everywhere in space. So that's step one. Step two is that they then solve a partial differential equation to get an implicit function whose gradient field is that vector field. So basically, step one, you go from a discrete set of points to a vector field, and then step two, you go from a vector field to an implicit function. That's basically all you need to understand about Poisson surface reconstruction. And of course, that PDE that you solve is a Poisson equation; that's why it's called Poisson reconstruction. But really, the part we care about is that step where you go from an oriented point cloud to a vector field. We noticed that that step can be seen as a Gaussian process. So what is a Gaussian process? Basically, a Gaussian process, for your audience, is just a way of doing supervised learning. But just in case someone from graphics is listening to this and is wondering what that is: it means that you want to learn some function that you don't know what it looks like, and you've observed it at some points, a finite, discrete set of points.
So I like to think of the function being: how long does my advisor take to respond to an email, right? So the variable is the time of day you send the email, and the response, the function that you wanna learn, is the hours it takes for him to respond. So, you know, maybe I send my advisor an email at noon, I send my advisor an email at 2:00 PM, and I get two data points, right? But then I ask myself, well, what would it look like if I sent him an email at 1:00 PM? Right? That's a new point that I haven't considered. We call that the test point. And the cool thing about Gaussian processes is that they tell me, well, if it took two hours for him to respond at noon, and it took five hours for him to respond at 2:00 PM, then at 1:00 PM it'll take something like three hours plus minus two. Right? So it will not just tell me a guess for how long it'll take; it'll tell me sort of an error bar, a variance, for how long it'll take. And we can compute that mean and that variance with simple matrix algebra. So we make some assumptions, that I'm not gonna get into, and we notice that that step from Poisson surface reconstruction, going from a discrete set of oriented points to a vector field, is a supervised learning step. The vector field that Poisson surface reconstruction outputs is the mean of a Gaussian process where you're trying to learn that vector field. It bears stopping there for a second. So we noticed that the vector field from Poisson reconstruction could be understood as the mean of a Gaussian process. So then we wondered, well, what would it look like if Poisson reconstruction had wanted to do a Gaussian process from the start? So we reinterpret Poisson reconstruction; that's what we call stochastic Poisson surface reconstruction.
So instead of just this mean, we wondered, well, if we wanted to do a Gaussian process from the start, we would not just get the mean, we would get a variance too. So we get this sort of stochastic vector field instead of just a vector field, and we can solve the same equation that we solved earlier to go from vector field to implicit function. We can solve it again, now in the space of statistical distributions, to go from a stochastic vector field to a stochastic scalar field. What does this give us? This means that at the end we get, for each point in space, not just a value. So traditional Poisson reconstruction would give you, for this point in space, the value is 0.2, and since that's bigger than zero, that means outside. Instead, we would give you a full distribution. So we would tell you 0.2 plus minus 0.3, and that'll give you an idea of how sure you are of that point being inside or out. So that's our approach. We take Poisson reconstruction and we reinterpret it as a Gaussian process, and we can output a fully stochastic scalar field as the output.
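The email example above can be sketched as a tiny Gaussian process regression. This is a toy illustration only: the squared-exponential kernel, the length scale of one hour, and the closed-form two-point solve are my own assumptions, not the paper's implementation.

```python
import math

def rbf(x1, x2, length=1.0):
    # Squared-exponential kernel: emails sent at nearby times of day
    # are assumed to have correlated response times.
    return math.exp(-((x1 - x2) ** 2) / (2.0 * length ** 2))

def gp_predict(xs, ys, x_star, noise=1e-6):
    """GP regression with exactly two training points, solved in closed
    form (the 2x2 kernel matrix is inverted by hand). Returns the
    posterior mean and standard deviation at the test point x_star."""
    a = rbf(xs[0], xs[0]) + noise
    b = rbf(xs[0], xs[1])
    d = rbf(xs[1], xs[1]) + noise
    det = a * d - b * b
    # alpha = K^{-1} y for the 2x2 kernel matrix K.
    alpha0 = (d * ys[0] - b * ys[1]) / det
    alpha1 = (-b * ys[0] + a * ys[1]) / det
    k0, k1 = rbf(x_star, xs[0]), rbf(x_star, xs[1])
    mean = k0 * alpha0 + k1 * alpha1
    # Posterior variance: prior variance minus what the data explains.
    beta0 = (d * k0 - b * k1) / det
    beta1 = (-b * k0 + a * k1) / det
    var = rbf(x_star, x_star) - (k0 * beta0 + k1 * beta1)
    return mean, math.sqrt(max(var, 0.0))

# Emails sent at noon (12) and 2 PM (14) took 2 and 5 hours to answer;
# ask about an email sent at 1 PM. The GP returns a guess AND an error bar.
mean, std = gp_predict([12.0, 14.0], [2.0, 5.0], 13.0)
print(f"{mean:.1f} hours, plus/minus {std:.1f}")
```

With these made-up numbers the prediction comes out near "three-something hours with an error bar", which is exactly the "guess plus variance" behavior Silvia describes; the stochastic PSR idea is to apply that same machinery to the vector field inside Poisson reconstruction.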

Itzik:This is such an interesting approach. It's really not like all of those new papers coming out that say, oh yeah, we switched some block and now it works better. It's actually looking at the problem from a different perspective, right? Like looking at it in a way where you now have this Gaussian process, which gives you these stochastic properties you can utilize to do so much more than you could before with the traditional, or classic, Poisson surface reconstruction.

Silvia Sellan:I'm glad you enjoyed it. I can shout out Derek Liu, who I think was on this podcast a few months ago. Before I started to work on anything related to machine learning, I asked him, how do you work on machine learning such that you're not waking up every Monday, checking arXiv to see if you still have a project? Cuz that feels too stressful for me. I don't wanna panic; I already panic enough in my life. And he said, you just need to work on something so fundamentally different from everything that no one's gonna scoop you. Which is like classic Derek advice, where you're like, well, obviously if I could, I would work on something revolutionary and fundamentally different, right? The problem is I don't. But this paper felt like that, in the sense that, you know, Poisson reconstruction has been around for 15, 16, 17 years. There aren't many people that are working on statistically formalizing Poisson reconstruction. So it felt like a nice new paper to write that I didn't need to be anxious about being scooped on. So that's kind of the reason.

Itzik:Yeah, and I think this message is super important, because most of the audience of this podcast are early career academics or PhD students. And I think the message of not, you know, crunching the parameters all day, and instead trying to really find something that's fundamentally different than what everybody else is doing, is a super important message to convey. So thank you for that.

Silvia Sellan:To be clear, that's Derek Liu's message, not mine.

Itzik:Yeah. But it fits for you, and it fits for me too. I mean, I think that's what research should be about, right? It's unfortunate that a lot of fields are now in a place where it's about tuning parameters rather than coming up with new and interesting approaches and perspectives. On this podcast, I try to bring all those that do that, that give a new perspective on a field, solve a problem. And it seems like I've been fortunate so far.

Silvia Sellan:Okay, maybe I'll ask you a question then, and change the format a little bit, and you can cut it if you don't want it. You're focused around papers a lot in this podcast, obviously. And I wonder if part of the problem is that we're using this academic currency. So, you know, if we create an incentive, people, and I include myself, are gonna look for, you know, what is the idea that most quickly and concisely resembles a NeurIPS paper or a CVPR paper or a SIGGRAPH paper. I wonder how much our current scientific publication process encourages those types of works that are: I changed the parameter a little bit and I got this boldface number at the end of the table, and that's a CVPR paper. Which are definitely important works of research, but currently we have no way of distinguishing fundamentally different approaches from those types of works. So, you know, how much of it is our fault for focusing on papers? Is there gonna be a Talking Blog Posts podcast, then?

Itzik:Well, actually, that's a great question. Personally, I try to bring those papers that go the extra mile as well. So usually papers that have a project website and a blog post, and they try to convey it and teach it, and not only, okay, here are the bold numbers, we were the best, right? But I think you touched on a very important point when you said, well, our incentive system is not good. I'm not sure I have a good idea for what a good incentive system is, but the current one is not good. We're judged by the number of papers that we shoot out, and the more they get accepted to high venues, the better. And that helps you secure funding, and securing funding helps you get students, and that's your academic career. There's nothing that looks beyond that, and you can actually even see it on Twitter, right? Every other academic kind of posts, oh, we had seven papers accepted to...

Silvia Sellan:Yeah, exactly.

Itzik:And now, is this out of how many? Right? It's not just about the successes, right? It's also about all of the times that you tried something new and risky and novel and failed, because you're doomed to fail, right? That's research. If we knew the answer before we started the project, then it's not a very interesting question to work on. And yeah, I agree, it's a big problem in multiple fields at the moment. But the upside is that it pushes everyone to the limit, and it pushes the field forward much faster than any other field. And I think there are some things that people now do in addition to papers that even further promote that, right? Publishing the code, that wasn't a huge thing, I don't know, 10 years ago. Like, who would've put the code online and made sure that it can run on multiple platforms? Today it's almost standard to put your code on GitHub, right? So, bittersweet, right? You can't have the one without the other. Okay, but back to the episode. That was a great question, thank you. And by the way, I think this is one of the things that this podcast is trying to do, right? It tries to have a little peek inside the mind behind the paper. It tries to see the way of thought, not just the results.

Silvia Sellan:That's great. I just wonder, you know, there are papers that, and I don't mean this as a compliment necessarily, but take it as that if you want, there are papers that I've listened to this podcast on, and looked at the project page, and I feel like I understand them. You know, the actual PDF, I've never opened it, and I feel like I understand that paper well enough to be inspired by it and work on future works. So at some point, yeah, the whole format of a paper is being rewritten.

Itzik:Yeah, this is part of the reason I started a new medium for sharing research. But yeah, it would be interesting to see where we are in a few years. I know that it used to be only about citations, but now there's this whole line of, they call it altmetrics. So all these kinds of different ways of measuring the impact of a paper, which are not necessarily influenced by citations, but it's not as widely adopted as citations. Back to the episode structure. So we talked about the approach, super interesting. Let's talk a little bit about results and applications. So in which situations did you apply your stochastic Poisson surface reconstruction, and how did that work?

Silvia Sellan:Right. So as I was telling you, by combining Poisson surface reconstruction with a Gaussian process, we have this uncertainty map for every point in space that tells us how likely that point is of being in the reconstructed surface. Instead of just, is it in or is it out, we get something like, oh, it has a 60% chance of being inside the reconstructed surface. This map we can ask for every point in space, and that's actually very useful. The main use is, for example, that car example I mentioned at the beginning. So, you know, we had a very toy example of a car that's driving in 3D. It takes a scan of its surroundings, and for a trajectory, it can ask, how likely am I to intersect any of the other shapes in this scene? And you can see that it's like 30%. You know, 30% means that probably what Poisson surface reconstruction, the traditional method, would've told you is no, there's no intersection, and that's it, right? But a 30% chance of crashing your car probably means that you want to take another trajectory, right? You can only take so many of those chances before you break your car. So we do examples like that. Another thing we do is, you know, if you think about it, the closer these probabilities are to either a hundred or zero, the more certain you are of what the shape looks like. So, you know, if I give you an uncertainty map that just looks like, it's zero in all this part and a hundred in all of this part, it means we're very sure of what the shape looks like, because for every point of space we ask, we can very confidently say if it's in or out. But if it's mostly 50%, then we don't have a lot of idea of what the shape looks like. So another thing that we do is we can introduce a thing called integrated uncertainty, that just measures how close this probability is to 0.5.
The higher it is, the more uncertain you are about what the shape looks like. And that's something that we can use, for example, as a reconstruction threshold. So if we're scanning something from different angles, we can compute this integrated uncertainty and say, you know, keep scanning from random angles until you reach 0.1 integrated uncertainty. And this is something that's agnostic to the shape that you're actually reconstructing. So you can use it as a threshold for scanning unknown shapes, so that you get a similar reconstruction quality. So this is something that we output.
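The stopping rule described here can be sketched in a few lines. Note this is my own toy simplification: the paper's integrated uncertainty is an integral over space, whereas this discrete average over sample points, and its 0-to-1 normalization, are assumptions made just for illustration.

```python
def integrated_uncertainty(probs):
    """Toy integrated-uncertainty score: average, over sampled points,
    of how close each inside-probability is to the maximally uncertain
    value 0.5. Returns 1.0 when every point is a coin flip (p = 0.5)
    and 0.0 when every point is fully certain (p = 0 or 1)."""
    return sum(1.0 - 2.0 * abs(p - 0.5) for p in probs) / len(probs)

def keep_scanning(probs, threshold=0.1):
    # Mimics the rule from the conversation: keep adding scans from new
    # angles until the integrated uncertainty drops below the threshold.
    return integrated_uncertainty(probs) >= threshold

print(integrated_uncertainty([0.5, 0.5]))  # maximally uncertain field
print(integrated_uncertainty([0.0, 1.0]))  # fully certain field
```

Because the score depends only on the probability field, not on the particular shape, it can serve as a shape-agnostic stopping criterion, which is the property Silvia highlights.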

Itzik:So this is something that's very interesting for, I guess, robotics applications, right? You have a robot walking around the house, it sees a bunch of things, it's not sure what it's seeing, so it should get a better look. And this is what we do as humans, right? We see something we've never seen before, and the first thing we would do is take a closer look.

Silvia Sellan:Right. The, the, the only difference is that, we humans would have an intuitive feeling for where we should look as the next point. Right. Whereas what I'm saying is just like, oh, I would tell the robot to like, keep scanning randomly until it, it, it figures out what the shape is, which isn't exactly what we as humans into, right? Like, if you see something, you would turn it around because you know that it's the back part that you haven't seen yet. Um, so this is actually something that we looked at further. Um, so we, we, we have an example in the paper where, Uh, have an incomplete point cloud and we set different cameras around it and we simulate Ray from those cameras onto the point cloud. So this is something that we can do. We call it Ray casting on uncertain geometry, but basically we can use the same statistical formalism to cast ray from a hypothetical scan position on the surface that that means that for each possible camera we can simulate which points would this scanning position add to the. We can add those points. I see if our integrated uncertainty got better, right? So we can ask, you know, by adding this new scanning position, did I actually gain any knowledge or not? And that's kind of closer to what the human is doing, which is identifying the next best view position. Um, and that's kind of a further example that we have, so we can also do it with our statistical formal. There's also, there's also, so I, I always make this joke that like if you, you might have heard that actors have some movies that they do one for them, one for me. So like they have one movie that that sells. So they'll do Avengers so that it allow of tickets, but then they'll use that money to fund their small project that won't sell as much, but they really, really want. So sort of these like next week planning, collision detection, all of these applications are the ones I did for the reviewers, right? So this is for them. 
The application I really like is that, by understanding Poisson reconstruction as a Gaussian process, we needed to assume a certain prior, right? Because a Gaussian process starts by assuming a prior. But now that we understand reconstruction like this, we can ask: does that prior make sense for every reconstruction task? Can we change that prior? So can we use this statistical understanding not just for new applications, but to improve the original application, which is just straight-up surface reconstruction? The main application I'm interested in is using different priors. For example, we show examples in the paper where we enforce that the reconstructed surface has to be closed. This is a known problem with reconstruction: it sometimes outputs open surfaces. We solve that with less than a line of code; with half a line of code we solve that. We have a similar example where we close a car reconstruction that vanilla Poisson reconstruction would have given as an open output. So changing these priors to improve the reconstruction is one of the most exciting results that we have, and one of the most exciting future work directions, which I guess we'll talk about.
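As a rough intuition for why the prior matters, here is a minimal 1D Gaussian process regression sketch. This is my own toy (the kernel, values, and constant prior mean are assumptions, not the paper's implicit-function setup): far from the data, the posterior falls back to the prior mean, so a prior mean of -1 ("outside") everywhere far from the samples is exactly the kind of assumption that closes a surface.

```python
import numpy as np

def rbf(a, b, ell=0.5):
    # Squared-exponential kernel; an assumed prior, not the one SPSR derives.
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * ell ** 2))

def gp_posterior_mean(x_train, y_train, x_test, prior_mean, noise=1e-4):
    # Standard GP regression with constant prior mean m:
    #   mu(x*) = m + k(x*, X) (K + noise*I)^{-1} (y - m)
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return prior_mean + rbf(x_test, x_train) @ np.linalg.solve(K, y_train - prior_mean)

x = np.array([-0.2, 0.0, 0.2])   # implicit-function samples near the surface
y = np.array([-0.2, 0.0, 0.2])   # signed values with a zero crossing at x = 0
far = np.array([5.0])            # a query point far away from all data

m_zero = gp_posterior_mean(x, y, far, prior_mean=0.0)
m_out = gp_posterior_mean(x, y, far, prior_mean=-1.0)
print(m_zero, m_out)  # ~0 vs ~-1: the prior decides what happens far from data
```

Near the data both priors give nearly identical posteriors; the prior only takes over where there are no observations, which is precisely where an open surface fails to close.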

Itzik:Yeah, and I think it really makes sense to have that dependency on some prior, right? Because with, I don't know, classification networks, many people would say that problem is already solved. So if you knew that you're looking at a car, and you have a very noisy, one-directional scan of that car, it would be really good for the reconstruction process to say: well, that's a car. Now that you know it's a car, use that information to improve the reconstruction.

Silvia Sellan:Right, that's right. And there are, to be clear, point completion or surface reconstruction algorithms that use data-driven knowledge. The problem is that those don't leverage all the good things about Poisson reconstruction. Poisson reconstruction is extremely fast and extremely efficient. It has these local-global dual steps that make it fast and robust, but also noise-resilient. Poisson reconstruction is the best of the best that we have for surface reconstruction. So I think, thanks to our paper, hopefully there's a very clear follow-up of data-driven Poisson surface reconstruction. I guess I'll pitch it to your audience: if you want to write that paper, send me an email, because we can write it together. I know how to use the code, so I'll do that part, and you do the data-driven part. I think there's an opportunity for an immediate follow-up there that's very easy and could be revolutionary.

Itzik:Low-hanging fruit.

Silvia Sellan:Or rather, we built a ladder that takes you very far, very near the fruit, right? It wasn't low-hanging five months ago.

Itzik:Okay. Were there any failure cases or any unexpected results that you encountered?

Silvia Sellan:Well, hmm, good question. The main drawback of our algorithm is the speed. Our algorithm is slow. This isn't really a failure; it's more that computing the variance of our estimation is very slow. This doesn't affect the project I was just pitching, because that would just be computing the reconstruction, and that is fast. But computing the variance is slow. We found that when we jumped from 2D to 3D, we straight up couldn't do what we were doing in 2D, in a reasonable time on a reasonable computer. So we had to use a space decomposition, a space reduction trick, to make the solver manageable. That was a bit disappointing. Another failure case, or maybe not a failure case, but something in our paper that didn't go as planned, is that we have a step where we basically lump a matrix: we make a matrix diagonal so that it's easier to invert. This is based on something that I know from finite element analysis, where people usually make a matrix diagonal. We show that it's valid under some assumptions, but it's not entirely accurate under all assumptions. We had to do this lumping so that we recover Poisson reconstruction; it's not that we proposed it, it's that we explained Poisson reconstruction as having done this. But recently there's been a new paper by Alexander Terenin et al. called "Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees", which basically shows a better way of doing what we did. This is a paper that was posted on arXiv two weeks ago, so there's no way we could have used it for our work. But I recommend it to anyone working on Gaussian processes and thinking about applying Gaussian processes at scale, because it basically gives you, and I don't think Alex would agree with this interpretation, a smart way of making that matrix smaller. So this is one thing where I wish this other paper had come out a year ago; we would have used it.
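The lumping step Silvia mentions, borrowed from finite element analysis, can be illustrated in a few lines. This is a generic row-sum lumping sketch of my own, not the paper's actual matrix: the lumped matrix keeps each row sum but becomes diagonal, so inverting it reduces to taking reciprocals.

```python
import numpy as np

def lump(M):
    # Row-sum ("mass") lumping: replace M by a diagonal matrix with the
    # same row sums. Exact only under certain assumptions, but trivial
    # to invert, which is the whole point.
    return np.diag(M.sum(axis=1))

# A small symmetric positive-definite mass-like matrix.
M = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])

M_lumped = lump(M)
inv_diag = 1.0 / np.diag(M_lumped)  # the inverse is elementwise on the diagonal
print(np.diag(M_lumped))  # → [5. 6. 5.]
```

The trade-off is exactly the one discussed above: an O(n) invertible approximation in exchange for some accuracy, which the cover-tree paper revisits more rigorously.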

Itzik:Yeah, this is how the field moves forward: you never know which block you can swap for another block, and something that's a challenge today, that you had to circumvent in some way at some point, could turn out to be solved later. But this is great. It means we're working in a very productive and high-paced field.

Silvia Sellan:Yep.

Itzik:Moving on to the conclusions and future work section: how do you see the impact of the paper going forward?

Silvia Sellan:So I look at this paper as a computer graphics and geometry processing researcher, and the part that excites me the most is that it's a way of quantifying the uncertainty of a process that we use in geometry processing, namely surface reconstruction from point clouds. One thing I would like to work on in the future, and would like to encourage people to work on, because I think it can be a very promising field, is a fully uncertain geometry processing pipeline. There are all these works like ours that quantify the uncertainty of the capture step, so going from a real-world object to an uncertain surface; there are several works like that, ours among them. But we sort of stop there, and I would like us to do things to that uncertain surface, right? Geometry processing doesn't stop at capture. We then solve partial differential equations on that geometry, we compute differential quantities on that geometry, we deform that geometry, we do physics simulation on that geometry. There's a lot that geometry processing does, but we're not doing it for those uncertain shapes. So the next steps, the ones I'm interested in, are: okay, now that I've scanned the thing and given you the different possibilities of surfaces that it could be, tell me the different possibilities of curvatures that it could have, right? That extra step. I don't think it has been done before, and I think it's very exciting, because then we can inform the scanning. We can go back and say: I know that the shape has a certain maximum curvature, so if I take that all the way, I know where I should scan next, because there are regions where my curvature is more uncertain, or something like that. So this is a direction I think is very promising.
We already talked about task-specific or data-driven Poisson reconstruction, which I think is an extremely promising avenue for future work. Low-hanging fruit, like you said.
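One way to picture the "uncertain geometry processing" Silvia is proposing is simple Monte Carlo propagation: draw many plausible surfaces from the posterior, run the downstream computation (here a 1D curvature proxy) on each, and look at the spread. The sampler below is entirely made up for illustration; it is not how SPSR draws surfaces.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]

def sample_surface():
    # Hypothetical stand-in for one posterior sample of a 1D "surface":
    # a fixed sine curve plus smooth random wiggles.
    coeffs = rng.normal(scale=0.05, size=3)
    wiggle = sum(c * np.sin((k + 2) * np.pi * x) for k, c in enumerate(coeffs))
    return np.sin(np.pi * x) + wiggle

# Propagate uncertainty: curvature proxy (second derivative) per sample.
curv = np.array([np.gradient(np.gradient(sample_surface(), dx), dx)
                 for _ in range(200)])
curv_mean = curv.mean(axis=0)  # expected curvature at each location
curv_std = curv.std(axis=0)    # curvature uncertainty at each location
# High curv_std marks regions where the next scan would be most informative.
print(curv.shape)  # → (200, 101)
```

This is the "that extra step" from the interview: the output is not one curvature field but a distribution over curvature fields, which can then feed back into where to scan next.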

Itzik:No, uh, a tall...

Silvia Sellan:A tall ladder, exactly. And there are many ways of quantifying uncertainty that we could pursue in geometry processing. I hope that this is just a first step toward that vision. There are some steps that I think we will take, and there are some steps that I hope the community will take, too.

Itzik:Yeah, this is one of the things I really liked about this paper. From the first read, I could totally see that it opens up this whole branch of stochastically informed, inspired, or motivated further steps in the pipeline that you can use this work with. And it's exciting to see what will come up next. Now to my favorite part of the podcast: what did reviewer 2 say? Please share some of the insightful comments that the reviewers had through the review process.

Silvia Sellan:Okay, so the whole story of how we published this paper was really fun. I am one of those people who doesn't like crunching for a deadline, so I work on something steadily and consistently for several months, instead of one week where I don't sleep. There are two kinds of people; I'm one kind, my advisor is mostly the other kind. But this time I got COVID four days before the deadline, so I couldn't crunch. I just sent my current draft to my advisor and said: look, here's the draft. If you want to change anything, change it, but I'm not working on it, because I have a fever and I'm just going to lie on the couch for five days. So we basically submitted our draft without a lot of the things we would have liked to do in those four days. I was a bit worried that maybe we would get rejected because I couldn't do, for example, the data-driven part that I had planned to do in the days before the submission deadline. That was a bit sad, but we actually got very positive reviews. We got seven reviews, which is unheard of for SIGGRAPH Asia. SIGGRAPH Asia usually has five reviews; sometimes they bring in a sixth one, but I had never had seven. I don't know anyone who has had seven reviewers. So that was very surprising. Most of the reviews were positive, I think six out of seven or five out of seven, except that we had one that was, I don't know, a strong reject or a reject. The way it works at SIGGRAPH Asia is you have strong reject, weak reject, weak accept, accept, and strong accept. And we had one that was either a strong reject or a reject, which usually tanks your paper if you have one of those. And the review was very surprising. It said, basically: I liked the paper on a first read, I loved it, but then on a second read I started realizing that none of the quantities the authors are introducing make any sense.
So if you look at the variance maps, the maps of where we are most confident of the reconstruction, the variance is higher near the sample points. It shouldn't be like that; the variance should be lower near the sample points, because we are more certain of what the value is near the data, right? So it doesn't make any sense; so then, the reviewer said, I realized that nothing in the paper makes sense, so now I want to reject it. The problem was as simple as the reviewer misreading our color map. In the color map, yellow meant high and purple meant low; it was this plasma matplotlib colormap that you or your viewers might be familiar with. It was just a matter of us not including a color bar saying: this is low, this is high. We did not include a color bar in any of our pictures; our figures are full of these color images, so if we had added color bars, we would have 200 color bars in the paper. But this reviewer misunderstood it. So it was a very interesting rebuttal to write, where we had to say: we will add color bars to every figure, and they will show that, unlike reviewer two or three's interpretation, the variance is indeed lower near the data points; it is what you expected it to be. It was very scary, because for some seconds there I thought we might get a paper rejected just because we didn't add color bars to the plots. So, you know, behind every paper there's a story: always add color bars to your plots. You never know. If we had had one other negative review, it might have tanked the paper completely. That was a lesson I will never forget. I'm sorry, reviewer two, that we didn't add color bars. It's not your fault; there were two possible interpretations and you took one of them. We should have added them.
And now, if you look at our final version, it has a lot of color bars, because we're not making that mistake again. So that's my reviewer two story.
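For readers who want to avoid the same fate: in matplotlib, attaching a color bar to an image is a single call. A minimal sketch with made-up variance data (the array and file name are mine, not from the paper):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# A fake variance map: low near the (fictitious) data at the origin,
# so yellow = high and purple = low under the "plasma" colormap.
xx, yy = np.meshgrid(np.linspace(-1, 1, 100), np.linspace(-1, 1, 100))
variance = xx ** 2 + yy ** 2

fig, ax = plt.subplots()
im = ax.imshow(variance, cmap="plasma", origin="lower", extent=(-1, 1, -1, 1))
fig.colorbar(im, ax=ax, label="posterior variance")  # the reviewer-2 lesson
fig.savefig("variance_map.png")
```

Without that `colorbar` call the image is ambiguous; with it, a reader can verify at a glance that variance is lowest near the data.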

Itzik:Oh wow, I absolutely love those kinds of paper war stories: the whole COVID submission deadline, and then the color bars. I think it's an amazing lesson, and I know that on every paper from now on, you or any of your future collaborators will never forget to put in the color bar.

Silvia Sellan:Right. That's right.

Itzik:Yeah, so don't forget the color bars, everyone. That was a great story. Alright, anything else before we wrap up?

Silvia Sellan:I guess, if any of what I said sounds interesting: I don't have enough time to do all the project ideas I have. So definitely email me if you want to work on anything related to what I just said, and we can work on it together. I'm sure Itzik will put my website somewhere in the episode notes; you can go there, find my email, and send me an email. I'm always open to getting random emails from people.

Itzik:Yeah, excellent. I'll be sure to put all of Silvia's contact information in the episode description, and I should also probably mention, to all of the more senior listeners we have, that Silvia is looking for a postdoc or faculty position starting fall 2024, so don't miss out on this amazing opportunity. Alright, Silvia, thank you very much for being a part of the podcast, and until next time, let your papers do the talking.

Itzik:Thank you for listening. That's it for this episode of Talking Papers. Please subscribe to the podcast feed on your favorite podcast app. All links are available in this episode's description and on the Talking Papers website. If you would like to be a guest on the podcast, sponsor it, or just share your thoughts with us, feel free to email talking papers dot podcast at gmail.com. Be sure to tune in every week for the latest episodes, and until then, let your papers do the talking.