Students of artificial intelligence are getting recruited like star athletes, but could a rush to get new technology on the field cause ethical lapses?
You walk into a gigantic auditorium. Folding chairs are lined up in evenly spaced rows, with a cold black awning behind the stage and uniform white or yellow lights pouring down on the sea of seats below. A couple of huge screens flank the stage, with a third in the middle of the room for those who can’t see the presentation.
What’s about to happen is one of the most important annual gatherings in tech. But there’s no glitz and glamour. There are no multi-color sets, flashy graphics or showman’s props like real flames heating up the sides of the room.
This is the Neural Information Processing Systems (NIPS) Conference, which, alongside the International Conference on Machine Learning (ICML), is one of the two biggest academic conferences in the world covering developments in artificial intelligence. And attendance has exploded.
“The conference seems to have started thinking about industry interest in a way that’s different than before,” says Katherine Gorman, the host of the podcast Talking Machines. “Previously you would have booths like any other exhibitor. The old guard who have worked in this industry are very senior, and they’re seeing this shift as something more permanent.”
For two seasons, she has had the chance to cover a range of topics including probabilistic programming, automatic translation, sparse coding, and the effects of new breakthroughs on society. Now the entire culture of AI research is changing.
The once cold, gray conferences are now lively and explosive. There are invites to more and more events in far-flung places. The ecosystem is evolving. Papers still get presented like a traditional academic affair, but the heads of major corporations are also wandering the show floors recruiting PhDs.
It’s kind of like Major League Baseball’s winter meetings.
After Donald Trump’s executive order at the beginning of his administration, engineering visas are a hot topic, and the debate couldn’t come at a more crucial time. Recruiting and acquiring small startups are elements of an AI arms race driving employee migration across North America, the hallmark of an age in which machine learning is exploding.
“People are shifting around and going to different companies, making plays for new groups of people. Amazon and Google did some big sort of shifting around in what they were doing and what they wanted to do.”
Amazon’s entrance into the game challenges some of the bigger names. Microsoft also made a major move to open 2017 with the acquisition of artificial general intelligence startup Maluuba, while Baidu appointed Dr. Qi Lu to head its AI division.
These rapid developments have worried many, so much so that the world’s most powerful tech executives are sitting down to discuss the ramifications, particularly whether this new technology and its applications are evolving more quickly than we can contain them.
The Partnership on AI is the ultimate illustration of that. It’s a consortium co-founded by Amazon, Facebook, Google, Microsoft, and IBM which just last month added two new members: Apple and OpenAI. The latter, backed by Y Combinator’s Sam Altman and SpaceX’s Elon Musk, will be at the center of the machine learning conversation for years to come.
“We work on reinforcement learning, generative models, imitation learning, transfer learning, robotics. We do a lot of stuff,” says Jack Clark, Director of Communications at OpenAI, a company putting a square focus on the practical effects of new technology and so-called “safe AI.”
“We try and build the most advanced AI systems we can, and in doing so we try to find out where there needs to be safety work. How can we develop techniques we can prove mathematically will have the right outcomes? ‘We can give you 100 percent certainty that this won’t do X.’”
OpenAI joined the Partnership for a pretty simple reason: to steer the conversation toward the near future.
“A philosopher taking a closer look at the ethics of AI, or an economist measuring the effect on the economy. It’s this cross-industry thinking that will help do some research. It’s also a place where companies can have disagreements with each other and have some forum to discuss norms that they wouldn’t otherwise.”
But haven’t we been through all that? Pop culture has gone deep on how humans and machines understand each other in Star Trek, The Matrix, and even Terminator. But the popular themes of sci-fi can only go so far, because we haven’t lived those eras yet, and the discussions they spark rarely deal with practical issues.
“We haven’t yet explored the more mundane questions,” Clark explains. There is a very practical example of this: the “right to explanation” being openly discussed in the European Union right now.
“If I’m denied a mortgage on a house and I’m told an AI/ML system made that decision, should I as an EU citizen get an explanation for why I’m not going to get that mortgage?”
“At the point where AI hits a consumer, can you show those systems have some kind of responsibility? We as designers of AI have not” yet “made sure our algorithms can explain themselves at that level.”
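To make the idea concrete, here is a minimal, hypothetical sketch of what “an algorithm that explains itself” could look like: a toy linear credit model whose score decomposes into per-feature contributions, so a denied applicant can be told which factor drove the decision. The feature names, weights, and threshold are all invented for illustration; real lending models and any actual EU-mandated explanations would be far more involved.

```python
# Hypothetical sketch (invented features and weights): a linear "credit"
# model whose decision decomposes into per-feature contributions -- one
# simple way an AI/ML system could offer the kind of explanation the
# "right to explanation" debate is about.

def explain_decision(features, weights, bias, threshold=0.0):
    """Return the decision, the score, and each feature's contribution."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = bias + sum(contributions.values())
    approved = score >= threshold
    return approved, score, contributions

# Invented example applicant and model parameters.
weights = {"income": 0.4, "debt_ratio": -0.7, "years_employed": 0.2}
applicant = {"income": 0.5, "debt_ratio": 0.9, "years_employed": 0.3}

approved, score, contributions = explain_decision(applicant, weights, bias=0.1)

# The most negative contribution is the main reason for a denial
# (here: the applicant's debt ratio).
main_reason = min(contributions, key=contributions.get)
```

For this made-up applicant, the score comes out negative, the application is denied, and `main_reason` identifies `debt_ratio` as the dominant factor, exactly the kind of answer a denied EU mortgage applicant might be owed.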
How we evaluate creditworthiness is one thing, but imagine how the data a machine collects reflects humanity’s subjectivity.
“If I train a really big language model on, say, a corpus of 1 billion news articles in English, then this model will reflect biases inherent in those news articles.” That could include a racial and gender bias if it’s too focused on articles written by white males, or its understanding of language could be missing out on common slang terms or the way people speak in certain dialects.
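A toy illustration of that mechanism (not anyone's actual training pipeline): even crude co-occurrence counts over a skewed corpus produce skewed word associations, and a large language model absorbs its training articles' biases the same way, just at scale. The five-sentence mini-corpus below is invented for the example.

```python
# Toy sketch: counting which words co-occur with "he" vs. "she" in an
# invented, deliberately skewed mini-corpus. A real language model
# trained on a billion equally skewed news articles would internalize
# the same associations.
from collections import Counter

corpus = [
    "he is an engineer",
    "he became an engineer",
    "she is a nurse",
    "he is a doctor",
    "she stayed home",
]

def cooccurrence(corpus, target):
    """Count words appearing in the same sentence as `target`."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if target in words:
            counts.update(w for w in words if w != target)
    return counts

he_counts = cooccurrence(corpus, "he")
she_counts = cooccurrence(corpus, "she")
# In this corpus "engineer" only ever co-occurs with "he", so any model
# fit to it would associate the profession with men.
```

Here `he_counts["engineer"]` is 2 while `she_counts["engineer"]` is 0: the bias was never programmed in, it was simply inherited from the data.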
So one of the immediate ways to correct this is to get more minorities and women in the field. As Clark so eloquently asserted in Bloomberg over the summer, AI has a “sea of dudes problem.”
“If I created a recognition algorithm and it has some bias toward a specific type of person, do I have an ethical responsibility to address that? Am I cool to just let that model be with certain biases in it, or should I try to remove them? Should I change that?”
A brave, new, collaborative world
The need to keep society, its laws, and new technology in lockstep with each other is facilitating working relationships that go beyond protecting the world. It’s encouraging the sharing of new research.
“Open sourcing is another big trend we have seen this year. People in the industry are pushing to do that. Sharing work, sharing ideas, sharing tools. It floods the landscape. There’s so much output that if you’re not working in it, it can be really daunting.”
A side effect of this is collaboration across fields, not just companies. An example related to, but not essential to, the conversation on AI is nanotechnology, where chemists have had to work with physicists to produce next-generation mobile screens out of silver and gold nanowires.
“People are moving between the silos like they never used to before, but that requires a different way of communicating, being able to communicate about the fundamentals of your work but learn and absorb the fundamentals of your collaborator’s work,” Gorman reiterates. The consequence of this rapid evolution is not just one of anxiety, but a need to work in tandem.
The big change we will see in 2017 will come from corporations reinvesting in academic research. Google, Baidu, and Microsoft have huge research operations. Apple will also begin to publish some of its AI research, throwing away the lock and key on some of its proprietary work for the sake of advancing the field.
Gorman seems to back her former guest’s view, though: as researchers and computer scientists have checked off more boxes on things like search engines and image recognition, society and even researchers begin to downplay just how ‘intelligent’ those achievements are.
“We have these lofty goals for AI, but the historical story of developing AI is setting a goal, achieving it, then looking at it as a field and society, then saying ‘That can’t be AI because we’ve achieved it.'”