
Xebia Data @ NIPS 2017

12 Dec, 2017

One of Xebia Data’s perks is our training allowance: as a consultant you get a generous budget and 5 training days for your personal development. You’re free to spend it on whatever you think will benefit you: internal or external training, coaching, conferences, MOOCs, etc. This year a couple of us used it to go to the NIPS conference in Long Beach, California.

Why did we go to an academic conference like NIPS? NIPS is steadily becoming the most important conference in Machine Learning and Artificial Intelligence. It covers hot topics like Deep Learning, which have revolutionized the field and will probably keep doing so in the coming years. If you want to know about the latest developments in research and industry, attending is a must. Also not unimportant: we really enjoyed last year’s NIPS in Barcelona, and it’s a good excuse to spend some time in and around sunny Los Angeles.

This year’s NIPS covered a wide range of topics, from hard-core algorithm development to ethics discussions. Here are some of our highlights.

The most talked-about session was probably Ali Rahimi’s talk on the field’s understanding of Deep Learning, in which he likened current methodology to alchemy. The conversation is still going on, with a reply by Yann LeCun and an addendum by Rahimi and Recht.

The impact of machine learning on our society is growing, and not all of it has been beneficial. Kate Crawford discussed bias in our data and algorithms in her keynote, and Solon Barocas and Moritz Hardt explored fairness in their tutorial. It’s great to see that the field is actively investigating its impact on society.

Related to fairness, model interpretability will become increasingly important as we see the impact of the GDPR and its right to explanation. You might already be familiar with LIME and ELI5 for making complex models more interpretable. Scott Lundberg presented SHAP, a more efficient alternative to LIME based on expectations and Shapley values, and Ethan Elenberg talked about the STREAK algorithm for interpreting neural nets. The necessity of interpretability for machine learning may still be under debate, but it’s an important tool for Data Scientists: stakeholders want to know why a model does what it does.
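To give a feel for the Shapley-value approach, here is a minimal sketch using Lundberg’s shap package to explain a tree-based model. The dataset and model below are our own toy example, not taken from the talk:

```python
# Minimal sketch: explaining a gradient-boosted model with Shapley values
# via the shap package. The dataset and model are illustrative choices.
import shap
import xgboost
from sklearn.datasets import load_diabetes

# Train a simple regressor on a toy dataset.
X, y = load_diabetes(return_X_y=True)
model = xgboost.XGBRegressor().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Each row attributes one prediction to the individual input features,
# so you can tell a stakeholder which features drove a given prediction.
shap.summary_plot(shap_values, X)
```

A nice property of this approach is that the Shapley values for a sample sum to the difference between the model’s prediction and the average prediction, which gives the attributions a clean interpretation.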

Some of our other highlights were learning from unlabeled examples, probabilistic soft logic, deep learning on graphs for recommender systems, and cross-domain image generation. Check this repo for more material.

In this blog post we covered only a small part of the conference; check the NIPS Facebook page for all talks and the NIPS Proceedings for the papers. Next year we hope to add another mug to our collection!

NIPS cups
