According to some scientists, a robot is a machine that can interact with its environment and make at least some decisions on its own. In this segment adapted from Curious, robots are also viewed as machines that perform tasks that humans can’t (or won’t) undertake. One example of a robot is the Mars Rover, an autonomous robot (one that can make some of its own decisions) designed to study the history of water on the surface of Mars. LAGR stands for Learning Applied to Ground Robots. It is a program designed to teach robots how to learn, starting with the most basic senses, such as touch.
Life science, technology
The following Frame, Focus and Follow-up suggestions are best suited for middle school students using this video in an English language arts or science lesson. Be sure to modify the questions to meet your students' instructional needs.
What is Frame, Focus and Follow-up?
Frame (ELA) What is the definition of “hypothesis”? What does it mean when someone asks you to create a hypothesis?
Focus (ELA) As you watch the video segment, use your inference skills to create a list of positive and negative results that could come from creating an intelligent robot. Use specific details to support your hypotheses.
Follow-up (ELA) Do you agree or disagree with the concept of an intelligent robot? Why? Be prepared to explain your reasoning with possible outcomes.
Frame (SCI) Where does the hypothesis fall in the steps of the scientific method? What steps come before and after the hypothesis?
Focus (SCI) As you watch the video, try to figure out the possible hypothesis that was created before the research and development of intelligent robots.
Follow-up (SCI) What major part of the human body must be replicated in order for a robot to have senses like a human being?
ASHLEY STROUPE: What’s a robot? That is a great question, and you’ll get as many different answers as the people you ask.
To me, a robot is a machine that has some way of interacting with its environment and has some way of making at least some kinds of decisions on its own.
RICHARD MURRAY: Most people think of a robot as some sort of machine doing tasks that humans might otherwise have to do, but that we can now get a machine to do.
BRIAN WILCOX: There has been a tremendous amount of effort invested in trying to understand how the human brain works, and much of that has paid dividends in trying to build robots.
ANDREW HOWARD: What isn’t so well understood is how do you actually make those things work in the real world? And that turns out to be a significant challenge.
MARK MAIMONE: What we’re looking at here is the engineering model of the Mars Rover. This is a duplicate of what’s on Mars. The MER (Mars Exploration Rover) mission was designed to go to Mars and study the history of water on the planet’s surface.
We have a lot of cameras on this vehicle. That’s one of our main sensors for telling what’s going on. We have two on the front here. We have four on the mast. Stereo vision uses two images, just like we have two eyes, to see the world from two different views. Each view is a little bit different, and we can measure the difference between them, and that tells us how far away everything is.
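The distance measurement described above can be sketched with the standard pinhole stereo formula: depth = focal length × baseline ÷ disparity, where the disparity is the pixel shift of the same point between the two views. The focal length and baseline numbers below are illustrative only, not the Rover’s actual camera parameters.

```python
# Depth from stereo disparity: two cameras a known distance apart
# (the "baseline") see the same point shifted by a pixel offset
# (the "disparity"). Nearer objects shift more between the two views.

def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Return distance in meters using depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive "
                         "(the point must appear in both views)")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: a 500-pixel focal length, cameras 0.2 m apart.
print(depth_from_disparity(500, 0.2, 10))  # 10-pixel shift -> 10.0 m away
print(depth_from_disparity(500, 0.2, 50))  # 50-pixel shift -> 2.0 m away
```

Note how the relationship is inverse: the bigger the shift between the two images, the closer the object, which is why stereo vision works best on nearby obstacles.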
The Mars Rover and, actually, almost all spacecraft have some level of autonomy where they can make their own decisions. We tell it a goal, we tell it where we want it to go, and it’s able to decide on its own the right path to take to get there.
It can sense the ground when it’s driving. It feels that there’s a change in the terrain and can see where there’s a big rock or a big ditch and automatically steer itself away. Even if we didn’t tell it how to do that, it’s smart enough to detect that on its own.
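The idea of handing a robot only a goal and letting it choose its own route around rocks and ditches can be sketched with a simple grid search. Breadth-first search here is an illustrative stand-in, not the Rover’s actual navigation algorithm, and the terrain map is made up.

```python
from collections import deque

# A minimal sketch of goal-directed planning: the robot is told only a
# goal cell and finds its own path, steering around blocked cells.

def plan_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}          # remembers how each cell was reached
    while frontier:
        cell = frontier.popleft()
        if cell == goal:               # reached the goal: rebuild the route
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from):
                came_from[(nr, nc)] = (r, c)
                frontier.append((nr, nc))
    return None  # no route exists

# 0 = clear ground, 1 = a rock or ditch to steer around
terrain = [[0, 1, 0],
           [0, 1, 0],
           [0, 0, 0]]
print(plan_path(terrain, (0, 0), (0, 2)))  # routes around the rock column
```

The operator supplies only the start and goal; the detour down and around the blocked column is the planner’s own decision, which is the sense of “autonomy” described above.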
RICHARD MURRAY: We talk about robotics and usually people will start talking about autonomy almost immediately—the ability to act in its environment without having a human off to the side sort of telling it what to do, joysticking it around.
ANDREW HOWARD: LAGR is actually a program: Learning Applied to Ground Robots. Robots can learn, not in very sophisticated ways at the moment, but certainly in simple ways which are effective, rather like the way you train an animal.
We start with the most basic senses, and one of the most basic senses is touch. We have this bumper at the front so the robot can crash into things and detect the fact that it has crashed into something. We can literally say to the robot, “Explore the world, and when you bump into things, well, you learn that those things are bad.” And through that process, the robot itself can learn what it can drive over and what it can’t drive over.
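The learn-by-bumping idea described above can be sketched as a toy loop: the robot does not know in advance which terrain types are drivable, so it tries them, and anything its bumper has hit is remembered as “bad” and avoided afterward. The terrain types and the drivability rule below are invented for illustration.

```python
# Which terrain types are actually solid (the robot doesn't know this;
# it only finds out by bumping into them).
TERRAIN_IS_SOLID = {"grass": False, "rock": True, "log": True, "dirt": False}

def explore(terrains, learned_bad=None):
    """Drive over a sequence of terrain cells, recording any bumps."""
    learned_bad = set() if learned_bad is None else learned_bad
    actions = []
    for cell in terrains:
        if cell in learned_bad:
            actions.append(f"avoid {cell}")       # already learned: steer around
        elif TERRAIN_IS_SOLID[cell]:
            learned_bad.add(cell)                 # bumper triggers: remember it
            actions.append(f"bump into {cell}")
        else:
            actions.append(f"drive over {cell}")
    return actions, learned_bad

first_run, bad = explore(["grass", "rock", "dirt"])
second_run, bad = explore(["rock", "grass", "log"], bad)
print(first_run)   # bumps into the rock the first time it meets one
print(second_run)  # now avoids rocks, but bumps into the unfamiliar log
```

On the second run the robot avoids the rock it already learned about but still bumps into the log, mirroring the trial-and-error training the transcript compares to training an animal.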
It’s very easy to put yourself in the head of the robot: “Why is it doing that? I think it’s doing this.” And you also get cranky when it doesn’t do what it’s supposed to be doing. You say, “Why are you doing that?” I mean, you talk to the robot: “Clearly, you shouldn’t be doing that. I taught you that was the wrong thing to do.” Okay, now I sound like a parent.
Getting a robot to see has historically been one of the most difficult problems in robotics. People don’t appreciate quite how difficult the vision problem is because we do it so naturally. A very large proportion of your brain is devoted to the problem of interpreting that torrent of information which is coming in.
ASHLEY STROUPE: We are very, very good at interpreting what we see. We can all identify a chair, even though not every chair has a seat, a back, and four legs. A lot of chairs are very different, or if one is covered by a pile of coats, we can still recognize that it’s a chair. But how are we defining that in our brains? How could we pass that definition along to a robot so that it always knows what a chair is?
ANDREW HOWARD: We can teach a robot, “This is a rock.” It understands what a rock looks like, it can recognize a rock, and we can teach it that you can’t drive over big rocks. The problem is that you then put some grass in front of the rock, and suddenly it’s a different thing. The robot no longer recognizes it as a rock. Overcoming that challenge has been a long, long, hard road.
Academic standards correlations on Teachers' Domain use the Achievement Standards Network (ASN) database of state and national standards, provided to NSDL projects courtesy of JES & Co.
We assign reference terms to each statement within a standards document and to each media resource, and correlations are based upon matches of these terms for a given grade band. If a particular standards document of interest to you is not displayed yet, it most likely has not yet been processed by ASN or by Teachers' Domain. We will be adding social studies and arts correlations over the coming year, and also will be increasing the specificity of alignment.