Focusing Curiosity

Still a fan of the Mars rovers, such as Curiosity? A recent issue of The Planetary Report featured an article on Curiosity’s visual data acquisition system (a.k.a. its camera and the controlling software), and while searching for the article online, I found a similar blog post by Raymond Francis and Tara Estlin.

Image credit: NASA/JPL-Caltech/MSSS

Since 2016, NASA’s Curiosity Mars rover has had the ability to choose its own science targets using an onboard intelligent targeting system called AEGIS (for Automated Exploration for Gathering Increased Science). The AEGIS software can analyze images from on-board cameras, identify geological features of interest, prioritize and select among them, then immediately point the ChemCam instrument at selected targets to make scientific measurements.
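That analyze–prioritize–select flow can be sketched in a few lines of Python. To be clear, everything here is a hypothetical stand-in — the `Target` fields and the weights in `score` are made up for illustration, not the actual AEGIS selection criteria:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Target:
    x: int             # pixel position of the candidate feature
    y: int
    size: float        # apparent size in pixels (hypothetical criterion)
    brightness: float  # mean brightness, 0..1 (hypothetical criterion)

def score(t: Target) -> float:
    # Made-up priority function; the real system ranks candidates against
    # criteria the science team specifies (size, shape, brightness, ...).
    return 0.7 * t.size + 0.3 * t.brightness

def select_target(candidates: list[Target]) -> Optional[Target]:
    # Prioritize and select: the highest-scoring candidate wins, if any exist.
    return max(candidates, key=score, default=None)

# Two hypothetical features found in a source image.
candidates = [Target(120, 80, 35.0, 0.6), Target(40, 200, 12.0, 0.9)]
best = select_target(candidates)
print(best.x, best.y)  # the coordinates the rover would point ChemCam at
```

The point of the sketch is just the shape of the loop: detect candidates, rank them by uploaded criteria, and hand the winner straight to the instrument — no round trip to Earth.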

Autonomous targeted science without Earth in the loop is a new way to operate a scientific mission. It has become a routine part of the MSL Science Team’s strategy for exploring the ancient sedimentary rocks of Gale Crater.

AEGIS is an example of what we call ‘science autonomy’, where the spacecraft (the rover in this case) can make certain decisions on its own about scientific measurements and data – choosing which measurements to make, or having made them, which to transmit to Earth. This is distinct from autonomy in navigation, or in managing onboard systems – both of which Curiosity can also do. In a solar system that’s tens to hundreds of light-minutes across, science autonomy allows us to make measurements that can’t be made with humans in the loop, or to make use of periods where our robotic explorers would be waiting for instructions from Earth.

Efficiency. I recall reading somewhere that it was considered a big step forward when chess-playing computers started using the time their opponent, human or computer, spent deciding on a move to compute future alternative strategies, rather than idling away.

The AEGIS autonomous targeting process begins with taking a ‘source image’ – a photo with an onboard camera (either the NavCam or the RMI). AEGIS’ computer vision algorithms then analyze the image to find suitable targets for follow-up observations. On MER and MSL, AEGIS uses an algorithm called Rockster, which attempts to identify discrete objects by a combination of edge-detection, edge-segment grouping and morphological operations – in short, it finds sharp edges in the images, and attempts to group them into closed contours. Built originally to find float rocks on a sandy or gravelly background, Rockster has proved remarkably versatile at finding a variety of geological target types.
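As a toy illustration of that edge-finding-and-grouping idea (this is not Rockster, just the general technique): compute a gradient-magnitude edge mask, then group the edge pixels into connected components, one per “rock”. The threshold and the synthetic scene are arbitrary choices for the demo:

```python
import numpy as np
from collections import deque

def edge_mask(img: np.ndarray, thresh: float = 0.4) -> np.ndarray:
    # Finite-difference gradient magnitude as a crude edge detector;
    # real pipelines use more robust detectors plus morphological cleanup.
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy) > thresh

def connected_components(mask: np.ndarray) -> tuple[np.ndarray, int]:
    # Group edge pixels into discrete blobs (4-connectivity BFS) --
    # a stand-in for grouping edge segments into per-object contours.
    labels = np.zeros(mask.shape, dtype=int)
    count = 0
    rows, cols = mask.shape
    for i in range(rows):
        for j in range(cols):
            if mask[i, j] and labels[i, j] == 0:
                count += 1
                labels[i, j] = count
                queue = deque([(i, j)])
                while queue:
                    a, b = queue.popleft()
                    for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        na, nb = a + da, b + db
                        if (0 <= na < rows and 0 <= nb < cols
                                and mask[na, nb] and labels[na, nb] == 0):
                            labels[na, nb] = count
                            queue.append((na, nb))
    return labels, count

# Synthetic scene: two bright "rocks" on a dark background.
scene = np.zeros((20, 20))
scene[3:7, 3:7] = 1.0
scene[12:16, 10:15] = 1.0
labels, count = connected_components(edge_mask(scene))
print(count)  # -> 2: one edge group per rock
```

Each connected edge group becomes one candidate object, which is then handed off to the prioritization step.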

Cool stuff. I wish I could say I had some sort of involvement. It’s the sort of thing that’ll probably end up having all sorts of applications in other areas.


About Hue White

Former BBS operator; software engineer; cat lackey.
