The Allen Institute for Artificial Intelligence (AI2), funded by billionaire Paul Allen, is developing projects like an AI-based search engine for scientific papers and a system to extract "visual knowledge" from images and videos. According to Scientific American, another goal of AI2 is "to counter messages perpetuated by Hollywood and even other researchers that AI could menace the human race." SciAm's Larry Greenemeier interviewed AI2 CEO and computer scientist Oren Etzioni:
Why do so many well-respected scientists and engineers warn that AI is out to get us?
It’s hard for me to speculate about what motivates somebody like Stephen Hawking or Elon Musk to talk so extensively about AI. I’d have to guess that talking about black holes gets boring after a while—it’s a slowly developing topic. The one thing that I would say is that when they and Bill Gates—someone I respect enormously—talk about AI turning evil or potential cataclysmic consequences, they always insert a qualifier that says “eventually” or this “could” happen. And I agree with that. If we talk about a thousand-year horizon or the indefinite future, is it possible that AI could spell doom for the human race? Absolutely it’s possible, but I don’t think this long-term discussion should distract us from the real issues like AI and jobs and AI and weapons systems. And that qualifier about “eventually” or “conceptually” is what gets lost in translation...
How do you ensure that an AI program will behave legally and ethically?
If you’re a bank and you have a software program that’s processing loans, for example, you can’t hide behind it.