Curiosity Daily

Communicating with Cell-Sized Robots (w/ Cornell University) and Uncanny Valley Science

Episode Summary

Learn from Cornell University physicists Paul McEuen and Itai Cohen how cell-sized robots actually communicate with each other and move around. You’ll also learn about the “uncanny valley” and how scientists figured out what part of your brain gets creeped out by human-like robots.

Episode Notes

Learn from Cornell University physicists Paul McEuen and Itai Cohen how cell-sized robots actually communicate with each other and move around. You’ll also learn about the “uncanny valley” and how scientists figured out what part of your brain gets creeped out by human-like robots.

In this podcast, Cody Gough and Ashley Hamer discuss the following story from Curiosity.com about how scientists pinpointed the part of your brain that’s creeped out by human-like robots: https://curiosity.im/2SpcbqS

Additional resources from Cornell University:

Physicists take first step toward cell-sized robots — https://as.cornell.edu/news/physicists-take-first-step-toward-cell-sized-robots

Graphene Origami [VIDEO] — https://research.cornell.edu/video/graphene-origami

Nanobots That Can Do Just about Anything — https://research.cornell.edu/news-features/nanobots-can-do-just-about-anything

Itai Cohen | Department of Physics Cornell Arts & Sciences — https://physics.cornell.edu/itai-cohen

Paul McEuen | Department of Physics Cornell Arts & Sciences — https://physics.cornell.edu/paul-mceuen

Want to support our show? Register for the 2019 Podcast Awards and nominate Curiosity Daily to win for People’s Choice, Education, and Science & Medicine. After you register, simply select Curiosity Daily from the drop-down menus (no need to pick nominees in every category): https://curiosity.im/podcast-awards-2019

Download the FREE 5-star Curiosity app for Android and iOS at https://curiosity.im/podcast-app. And Amazon smart speaker users: you can listen to our podcast as part of your Amazon Alexa Flash Briefing — just click “enable” here: https://curiosity.im/podcast-flash-briefing.

 

Find episode transcript here: https://curiosity-daily-4e53644e.simplecast.com/episodes/communicating-with-cell-sized-robots-w-cornell-university-and-uncanny-valley-science

Episode Transcription

CODY: Hi! We’re here from curiosity-dot-com to help you get smarter in just a few minutes. I’m Cody Gough.

ASHLEY: And I’m Ashley Hamer. Today, you’ll learn from Cornell University physicists how cell-sized robots actually communicate with each other and move around. You’ll also learn about the “uncanny valley” and how scientists figured out what part of your brain gets creeped out by human-like robots.

CODY: The machines really are taking over! ...this podcast, at least. Let’s satisfy our robot overlords… or, at least, let’s satisfy some curiosity. 

Microscale Mondays, Segment 3 — How the robots actually communicate and move [3:12] (7/29) (Ashley)

ASHLEY: How do you actually communicate with a cell-sized robot? And how does something that small actually move around? This week you’ll get to know how microscale machinery works, with some help from Cornell University physicists Paul McEuen and Itai Cohen. It’s the third edition of our Microscale Mondays mini-series, and we’ll start by asking Paul the obvious question: can’t you just use Bluetooth or WiFi?

[CLIP 1:22]

ASHLEY: Okay, so it turns out you can help a robot do a LOT of things just by using lights. They can even blink back at you. Pretty cool, right? Well, it’s only cool if the robots can actually move around. Here’s Itai Cohen with more on how that works.

[CLIP 1:50]

ASHLEY: Who knew cell-sized robots and water bears would have so much in common? Anyway, now that you know how they’re made, how they move, and how they communicate, we’ll wrap up next week by talking about the future of origami robots and the impact they could have on our world. Again, that was Itai Cohen, Professor of Physics at Cornell University, and Paul McEuen, Director of the Kavli Institute at Cornell for Nanoscale Science. And you can learn more about them and their work in today’s show notes.

Scientists Pinpointed the Part of Your Brain That's Creeped Out by Human-Like Robots — https://curiosity.im/2SpcbqS (Cody)

Scientists have pinpointed the part of your brain that’s creeped out by human-looking robots, and it could have implications for how we design robots in the future. Have you ever seen a too-real CGI character or watched a video of a humanlike robot and felt a shiver down your spine? Scientists call that hyperrealistic creep factor the "uncanny valley." And earlier this month, neuroscientists announced they’ve figured out what’s going on in the brain when we fall into this valley.

We call it a “valley” in the first place because of the shape of the lines on the graph first used to explain the phenomenon. In 1970, a Japanese roboticist named Masahiro Mori created a graph that plots things according to their level of human resemblance, or likeness, and shinwakan, which roughly translates to “affinity” or “comfort level.” The correlation between something’s likeness and a human's affinity for it is MOSTLY positive. An industrial robot in a factory would rank low in both likeness and affinity, while a lifelike android would rank high in both.

But there’s a dip in the graph for humanoids that give us the creeps, and that dip is the uncanny valley. That’s where you’ll find, say, prosthetic hands and corpses — because they appear human at first glance, but turn out to be artificial (or worse, dead) when you take a closer look. Mori also observed that movement intensifies our feelings toward humanoids, both good and bad. You'd probably feel closer to a well-animated cartoon character than to a humanlike sketch, and you'd be more creeped out by a zombie than by a corpse.

For a new study published in the Journal of Neuroscience, a team of neuroscientists and psychologists from the UK and Germany figured out the neural mechanisms involved in how people evaluate humanlike figures. They used fMRI machines to measure changes in blood flow to different parts of the brain while human participants looked at humans, humanoids, and robots. And they found that when you see an agent like the ones in the study, your prefrontal cortex has a two-part response. First, your dorsomedial prefrontal cortex, right along the midline of your frontal lobe, emits a "human detection signal" that's strongest for, well, humans. Then, the neighboring ventromedial prefrontal cortex combines this signal with an evaluation of how likable an agent is.

This research pretty much confirmed that the uncanny valley is a real thing that comes from the way your brain processes information. But even more interesting: not all participants felt the same level of revulsion. The authors say this study is the first to demonstrate individual differences in sensitivity to the uncanny valley effect, which suggests that some people get creeped out by almost-humans more than others. Meaning, the uncanny valley isn't one-size-fits-all, and there's no robot that pleases or scares everybody. It’s possible that you might even get creeped out by a humanlike robot but learn to feel more at ease as you spend more time with it. Over time, the uncanny valley might not be about what a robot looks like, but what it can do. You can't judge a book by its cover, and you can't judge a robot by its creepy rubber skin.
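If you want to picture the graph Mori drew, here's a minimal Python sketch that plots a toy uncanny-valley curve. The affinity function and all of its constants are illustrative assumptions, not Mori's data or the study's model; the "sensitivity" parameter is just a hypothetical knob echoing the finding that the depth of the dip varies from person to person.

# A minimal, hypothetical model of Mori's uncanny-valley curve.
# The functional form and constants are illustrative assumptions only.
import numpy as np
import matplotlib.pyplot as plt

def affinity(likeness, sensitivity=1.0):
    # Affinity rises with human likeness (the mostly positive correlation),
    # then dips sharply just short of full human resemblance (the valley).
    # "sensitivity" scales the depth of the dip, since the study found the
    # effect varies across individuals.
    valley = sensitivity * np.exp(-((likeness - 0.85) ** 2) / 0.005)
    return likeness - valley

likeness = np.linspace(0.0, 1.0, 500)
for s, label in [(0.4, "less sensitive viewer"), (1.0, "more sensitive viewer")]:
    plt.plot(likeness, affinity(likeness, s), label=label)
plt.xlabel("human likeness")
plt.ylabel("affinity (shinwakan)")
plt.legend()
plt.title("Toy uncanny-valley curves (illustrative only)")
plt.show()

Both curves climb as likeness increases, but the more sensitive viewer's curve plunges much deeper near full human resemblance, which is the individual-differences result in miniature.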

ASHLEY: Before we recap what we learned today, we want to quickly remind you to please nominate Curiosity Daily to be a finalist in the 2019 Podcast Awards! Find a link in today’s show notes, or visit podcast-awards-dot-com, to register. Then find Curiosity Daily in the drop-down menus for the categories of People’s Choice, Education, and Science & Medicine. It’s free to vote and will really help us out. And now, let’s recap what we learned today.

CODY: Today we learned that cell-sized robots can communicate using LED lights, and move around by bounding like a super-slow cheetah.

ASHLEY: And that not all uncanny valleys are created equal.

[ad lib optional] 

CODY: Join us again tomorrow to learn something new in just a few minutes. I’m Cody Gough.

ASHLEY: And I’m Ashley Hamer. Stay curious!