Two leaders in the field of artificial intelligence have announced that they're open-sourcing their AI platforms.
After investing in building rich simulated environments to serve as laboratories for AI research, Google's DeepMind on Saturday said it would open its DeepMind Lab platform for the broader research community's use.
DeepMind has been using the lab internally for some time but has "only barely scratched the surface of what is possible" within it, noted team members Charlie Beattie, Joel Leibo, Stig Petersen and Shane Legg in an online post.
By open-sourcing the platform, DeepMind hopes to create new opportunities for developers to make significant contributions to AI.
Meanwhile, OpenAI, which is cochaired by Tesla CEO Elon Musk, on Monday invited developers to try its Universe platform on for size.
It's hoping an influx of development talent will help it achieve its overarching mission: to create a single AI agent that can be flexible in applying its past experience within Universe to quickly master unfamiliar, difficult environments.
At Home on GitHub
DeepMind uses 3D gaming environments to train AI agents to behave more like human beings.
Now that it's open source, the platform is available on GitHub, a home for many developers on the Web.
Developers will be able to add custom levels to the platform via GitHub, the DeepMind team explained. In addition, all DeepMind Lab assets will be hosted on GitHub, along with code, maps and level scripts.
DeepMind hopes the GitHub community will help it shape and develop the platform going forward, said Beattie, Leibo, Petersen and Legg.
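To give a sense of what working with the open-sourced platform looks like, here is a minimal random-agent loop in Python. It is a sketch only: the `deepmind_lab` module, the `seekavoid_arena_01` level and the `RGB_INTERLACED` observation name are assumptions drawn from the project's published examples, not details from this announcement.

```python
# Minimal random-agent loop against a bundled DeepMind Lab level (sketch).
# Assumes the deepmind_lab Python module built from the GitHub release.
import numpy as np
import deepmind_lab

env = deepmind_lab.Lab(
    'seekavoid_arena_01',            # one of the example levels shipped with the repo
    ['RGB_INTERLACED'],              # request raw pixel observations
    config={'width': '84', 'height': '84'})

env.reset()
action_spec = env.action_spec()      # list of action dimensions with min/max ranges

for _ in range(100):
    if not env.is_running():
        env.reset()
    # Pick a random value for each action dimension (look, strafe, jump, ...).
    action = np.array([np.random.randint(spec['min'], spec['max'] + 1)
                       for spec in action_spec], dtype=np.intc)
    reward = env.step(action, num_steps=4)            # repeat the action for 4 frames
    frame = env.observations()['RGB_INTERLACED']      # current first-person view
```

A learning agent would replace the random action choice with a policy trained on the pixel observations and rewards.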
Universe is designed to allow an AI agent to use a computer as a human does.
OpenAI wants to train AI systems on a full range of tasks, it noted in an online post.
Universe enables the training of a single agent to perform any task a human can complete with a computer, according to OpenAI.
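In practice, Universe exposes these tasks through a gym-style interface. The following Python sketch shows the basic pattern; the `flashgames.DuskDrive-v0` environment name and the VNC-style `KeyEvent` actions are assumptions based on the project's published examples.

```python
# A minimal Universe agent that simply holds the up arrow in a Flash driving game (sketch).
import gym
import universe  # importing this registers the Universe environments with gym

env = gym.make('flashgames.DuskDrive-v0')
env.configure(remotes=1)        # launch one local Docker-based remote environment
observation_n = env.reset()     # observations arrive as a list, one per remote

while True:
    # Send the same keyboard action to every remote; a real agent would
    # choose actions from the pixel observations instead.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()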
Sign of Frustration?
Computer scientists for some time have been trying to make algorithms learn from patterns, but progress has been slow.
"There's a little bit of frustration that we're not making as much progress as we wanted to," maintained Sorin Adam Matei, an associate professor at Purdue University.
"This attempt to get more people involved, to excite people, to get another angle to the problem is a sign of frustration," he told LinuxInsider.
Nevertheless, opening the AI platforms to the at-large development community could be beneficial.
"If a lot of people are attracted to these tools, we'll see new creative interesting services," Matei pointed out. "There could be a very healthy lateral development."
What's more, that development may come faster and cheaper than if DeepMind and OpenAI continued to go it alone.
"Certainly, part of the motivation [to go open source] is rooted in a desire to innovate quickly and cost-effectively," said Austin Ogilvie, CEO of Yhat.
Sea Change in AI Training
Opening up these platforms will create a sea change in the way AI systems are trained, said Aditya Kaul, a research director with Tractica.
"Today, AI is being driven by data. Companies like Google, Facebook and Microsoft are advancing AI because they have access to a lot of data," he told LinuxInsider, "but going forward, what is going to drive AI is new environments -- like gaming environments, where algorithms can learn dynamically about how things work."
These AI platforms allow an AI agent to learn from many environments within the platform rather than just one.
"What that does is accelerate the training and advancement of the AI algorithms and AI technology," Kaul said. "That is significant and a big shift in how AI research is performed."
Threat to Humanity
Although there are those -- including theoretical physicist Stephen Hawking -- who believe artificial intelligence eventually will pose a threat to the existence of the human race, others believe it can be kept in check.
"The impact of AI and machine learning on our lives is already obvious," Yhat's Ogilvie told LinuxInsider.
"Most of our decisions are influenced by AI through the media we consume, the articles we see or do not see online, the products we buy, the movies we watch, and the financial options we have," he explained.
"The conversation around the ethics of AI is robust," Ogilvie said, "and the fact that corporate thought leaders and research labs are among those most vocal in the discussion is a really big source of confidence for me that the industry is maturing in a responsible manner."
What's more, it's hard to envision humans creating anything that's infallible.
"Most systems aren't that robust," said Roger L. Kay, president of Endpoint Technologies Associates.
"Things break," he told LinuxInsider. "I'm not losing sleep about AI taking over the world. It's a pipedream of some people in Silicon Valley."