Twenty-five years ago, Nikos Papanikolopoulos started his work in the University of Minnesota’s Artificial Intelligence, Robotics and Vision lab.
For the first five years, the lab researched and developed robotics that were largely unheard of elsewhere. In 1997, the lab had its landmark moment when the U.S. military used its Scout robot to survey dangerous war zones in lieu of humans, a project that made the lab a national standout.
Today, Papanikolopoulos and his lab are part of a rapidly emerging field of science fusing yesterday’s unbelievable with today’s uncharted — artificial intelligence.
He and his team are developing technology that applies artificial intelligence to all facets of life. The developments span from helping the Department of Homeland Security detect suspicious activity in airports through the lens of a camera to assessing and diagnosing mental illness.
“We’re working on the line between what’s feasible and what’s just not possible yet,” said Dario Canelon, a research assistant in the lab.
Papanikolopoulos and his students are helping develop technology to improve human health, agriculture and safety.
Canelon is helping build an artificial intelligence system that can watch videos of children and assist in identifying symptoms of mental illness.
The system can point out patterns of behavior common in certain mental illnesses, like Tourette syndrome or obsessive-compulsive disorder.
“You can always have a psychologist follow a kid around, but the fact that a person is there might cause the kids to alter their behavior,” he said.
Another project uses machine learning, in which a system improves with time and exposure to more data, to comb databases of cancer characteristics. By learning to recognize the formations and textures of cancerous cells, the system can make diagnoses faster than doctors.
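The idea behind this kind of classifier can be sketched in a few lines. The following toy example is purely illustrative, not the lab's actual method: it uses a nearest-centroid rule over two hand-made cell measurements (a texture score and a perimeter), with every number invented for the sketch.

```python
# Minimal sketch: a nearest-centroid classifier over hand-made cell
# measurements (texture score, perimeter). All numbers are
# illustrative, not real clinical data.
from statistics import mean

def centroid(samples):
    """Average each feature across a list of feature vectors."""
    return [mean(f) for f in zip(*samples)]

def classify(sample, centroids):
    """Return the label whose centroid is closest (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(sample, centroids[label]))

# Toy training data: [texture score, perimeter] per cell image.
benign = [[10.4, 70.1], [11.2, 68.5], [9.8, 72.0]]
malignant = [[21.5, 130.2], [19.9, 125.7], [22.3, 128.4]]
centroids = {"benign": centroid(benign), "malignant": centroid(malignant)}

print(classify([20.1, 127.0], centroids))  # → malignant
```

"Training" here is just averaging the labeled examples; adding more examples shifts the centroids, which is the simplest sense in which such a system improves with exposure.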
Another of Papanikolopoulos’s students, Ruben D’Sa — a researcher for eight years — is developing solar-powered drones. These vehicles take off during the day, collect solar energy and switch to battery power when the sun sets, letting them stay aloft and collect massive amounts of data.
D’Sa and his colleagues want to incorporate AI technology into drones to let them fly over fields of crops and spot things like nitrogen deficiencies in specific areas of the field.
This data can be stored and relayed to a farmer, who then knows which specific areas of the field need more fertilizer or chemical treatment. This saves the farmer money, but also limits chemical runoff that could reach lakes and rivers, Canelon said.
Another lab, the Multiple Autonomous Robotic Systems lab, or MARS, is using motion sensors, cameras and lasers to come up with algorithms autonomous robots can use to navigate environments on their own.
The work of two MARS students, Kejian Wu and Mrinal Paul, can be applied in self-driving cars. The car needs to know where it is in real time so it can plan and navigate to a destination and avoid obstacles and people.
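The localization problem the car must solve can be illustrated with the simplest possible version: dead reckoning from odometry. Real systems fuse cameras, lasers and inertial sensors; this loop, with invented motion steps, only shows what "knowing where it is in real time" means.

```python
# Minimal sketch of 2D localization by dead reckoning: integrate
# (distance, turn) odometry steps into an (x, y, heading) pose.
# Real self-driving systems fuse many sensors; this is illustrative.
import math

def integrate(pose, moves):
    """Apply (distance, turn) steps to an (x, y, heading) pose."""
    x, y, theta = pose
    for dist, dtheta in moves:
        theta += dtheta
        x += dist * math.cos(theta)
        y += dist * math.sin(theta)
    return x, y, theta

# Drive 1 m forward, turn 90 degrees left, drive 1 m again.
x, y, theta = integrate((0.0, 0.0, 0.0), [(1.0, 0.0), (1.0, math.pi / 2)])
print(round(x, 6), round(y, 6))  # → 1.0 1.0
```

Because each step's error compounds, pure dead reckoning drifts over time, which is why the MARS work combines it with cameras and lasers that can correct the estimate against the environment.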
Paul, who has spent three years working in the MARS lab, said the team is working on building indoor navigation as well.
Paul said the theoretical end game of this project could give users directions based on video of interior surroundings they take with their phones and a digital map.
“We already have some of these solutions, but as time goes by the challenge is how to make things even faster and work in all different kinds of environments,” Paul said.
A couple of miles from the University’s campus, in downtown Minneapolis, Aftercode, an artificial intelligence startup founded by University dropout Mitch Coopet, works to bring AI to consumers. Specifically, Coopet wants to augment another landmark technology, the smartphone, with AI.
The company is currently testing an app called Rambl, which uses AI technology to transcribe phone conversations, take notes, suggest follow-ups and enter dates into a phone’s calendar in seconds.
“It does the stuff we’re not so good at,” Coopet said.
Aftercode focuses on training AI to observe patterns in written text transcribed from phone conversations, said Josh Cutler, an AI technician at the startup.
For example, if you say in a phone call, “I will send you that by next week,” Rambl can recognize grammatical patterns and store the information in your phone.
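One simple way to spot such a pattern is a rule over the transcribed text. Rambl's actual models are not public, so the regular expression below is purely illustrative of the idea: it matches commitments shaped like "I will send you X by <time>" and pulls out the action, the item and the deadline.

```python
# Minimal sketch of commitment-spotting in a transcript. This single
# hand-written rule is illustrative; it is not Rambl's actual model.
import re

COMMITMENT = re.compile(
    r"I(?:'ll| will) (?P<action>\w+) you (?P<item>.+?) by (?P<when>[\w ]+)",
    re.IGNORECASE,
)

def extract_commitments(transcript):
    """Return (action, item, deadline) tuples found in the text."""
    return [
        (m.group("action"), m.group("item"), m.group("when"))
        for m in COMMITMENT.finditer(transcript)
    ]

call = "Sounds good. I will send you that report by next week, okay?"
print(extract_commitments(call))  # → [('send', 'that report', 'next week')]
```

A production system would learn thousands of such patterns from data rather than hand-writing them, but the output, a structured record a phone can file into a calendar or reminder, is the same.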
Cutler hopes this kind of technology will get to a point where AI is consistently making things better without humans having to tell computers what to do.
Development around AI depends on understanding human psychology in a given situation and then coaching an AI to do the same thing, Coopet said. The next stage would be actual executive decision making by AI, but that’s still far off.
“I don’t believe the real general AI goal is to make something like us,” Coopet said. “It’s to make something better than us.”
The not-so-likely doomsday scenario
In a University law school classroom, professor Francis Shen lectures students on AI ethics and society-altering implications of the field in his class "Law and Artificial Intelligence."
He said many experts on the topic are fearful of a worst-case scenario surrounding artificial intelligence. “We have to ask ourselves if we’re cataclysmically ending the human race by developing this technology.”
Shen said many of AI’s potential implications rest on how much decision-making ability humans give the systems.
Canelon said he worries about a future where military drones independently make combat decisions.
“Do we hand the controls over to the drone or do we decide not to give it that power?” he said. “If we don’t put any limits on AI, there might be some unintended consequences.”
Researchers at the University said if their research ends up harming people, it will be the fault of the user, not the technology.
“Technology isn’t inherently bad, it’s up to people to decide what that technology can do,” D’Sa said. “It’s incredibly powerful, and it could easily be used in a bad way.”
While future artificial intelligence scenarios are often imagined in the extreme, the realistic downsides are far more practical.
Coopet brought up Tay, Microsoft’s Twitter chatbot, which internet users quickly manipulated into tweeting Nazi propaganda.
Artificial intelligence and robotics technology could also raise issues of inequality depending on which jobs are taken by robots, Shen said.
Intelligent robots already replace workers in low-paying jobs that require less education. Policymakers will have to decide whether, and how, labor laws should intervene in this process, he said.
While there are concerns about job loss due to AI and robotics, Papanikolopoulos said people don’t talk about the jobs that will be created, like programmer and manufacturer positions.
“Robots can do the jobs that we don’t want to do,” he said.
A march into the unknown
Ethical concerns and the unpredictable nature of technology lend themselves to a murky future for AI.
“It’s so hard to tell,” Papanikolopoulos said. “Twenty years ago I would have told you robotic surgery was impossible. Now it isn’t.”
He said he prefers to envision a future where the technology eliminates mundane tasks — washing laundry, cleaning dirty dishes — and advances medical technology, like self-care for the elderly and mentally disabled.
Maria Gini, a researcher with the University’s Intelligent Agents for Electronic Commerce Lab, uses AI technology in the home to help people with sensory or motor control problems.
“My attitude is that fundamentally, the tech has the potential to improve the lives of people,” Gini said.
In an optimistic view of AI’s future, where regulation and politics align correctly, basic tasks will be automated and humans will have more leisure time, Coopet said.
He said it’s important to look at the positives of what an AI future holds. Advances in gaming and entertainment will move fast, tasks humans don’t want to do will become automated and medical sciences can advance much faster.
The actual future of AI technology is much more difficult to predict, Coopet said. “AI is going to make the internet revolution look like a firecracker next to an explosion.”