
Machine Learning Blog

Machine learning and learning theory research

A Real World Reinforcement Learning Research Program

Fri, 2018-07-06 11:10

We are hiring for reinforcement-learning-related research at all levels and at all MSR labs. If you are interested, apply, talk to me at COLT or ICML, or email me.

More generally though, I wanted to lay out a philosophy of research which differs from (and plausibly improves on) the current prevailing mode.

Deepmind and OpenAI have popularized an empirical approach where researchers modify algorithms and test them against simulated environments, including in self-play. They’ve achieved significant success in these simulated environments, greatly expanding the repertoire of ‘games solved by reinforcement learning’, which consisted of the singleton backgammon when I was a graduate student. Given the ambitious goals of these organizations, the more general plan seems to be “first solve games, then solve real problems”. There are some weaknesses to this approach, which I want to lay out next.

  • Broken API One issue with this is that multi-step reinforcement learning is a broken API in the sense that it creates an interface for problem definitions that is unsolvable via currently popular algorithm families. In particular, you can create problems which are either ‘antishaped’, so local rewards mislead with respect to long-term rewards, or ‘keylock’ problems, as are common in Markov Decision Process lower bounds. I coded up simple versions of these problems a couple of years ago and have now put them on github to be extra crisp. If you try to apply policy gradient or Q-learning style algorithms to these problems, they commonly run into sample complexity exponential in the number of states (a minimal sketch of a keylock problem appears after this list). As a general principle, APIs which create exponential sample complexity are bad—they imply that individual applications require taking advantage of special structure in order to succeed.
  • Transference Another significant issue is the degree of transference between solutions in simulation and the real world. “Transference” here potentially happens at several levels.
    • Do the algorithms carry over? One of the persistent issues with simulation-based approaches is that you don’t care about sample complexity that much—optimal performance at acceptable computational complexities is the typical goal. In real world applications, this is somewhat absurd—you really care about immediately doing something reasonable and optimizing from there.
    • Do the simulators carry over? For every simulator, there is a fidelity question which comes into play when you want to transfer a policy learned in the simulator into action in the real world. Real-time ray tracing and simulator quality more generally are advancing, but I’m not ready yet to trust a self-driving car trained in a simulated reality. Whether the physics can be simulated accurately is unclear—friction, for example, is known to be difficult, and more generally the representative variety of exogenous events in an open world seems quite difficult to implement.
  • Solution generality When you test and discover that an algorithm works in a simulated world, you know that it works in the simulated world. If you try it in 30 simulated worlds and it works in all of them, it can still easily be the case that the algorithm fails on the 31st simulated world. How can you achieve confidence beyond the number of simulated worlds that you try and succeed on? There is some sense in which you can imagine generalization over an underlying process generating problems, but this seems like a shaky justification in practice, since the nature of the problems encountered seems to be a nonstationary development of an unknown future.
  • Value creation Solutions of a ‘first A, then B’ flavor naturally take time to get to the end state where most of the real value is set to be realized. In the years before reaching applications in the real world, does the funding run out? We certainly hope not, for the sake of the research field, but a danger does exist. Some discussion here, including the comments, is relevant.
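
As an illustration of the keylock issue, here is a minimal Python sketch. It is my own toy construction for this post, not the code in the github repository mentioned above: only one hidden action sequence earns any reward, so the undirected exploration that policy gradient and Q-learning style algorithms rely on needs on the order of (number of actions)^(horizon) episodes before it ever sees a reward to learn from.

    import random

    class KeyLockMDP:
        """Toy 'keylock' MDP: a chain of H states with A actions per state.
        One hidden correct action per state advances the agent; any other
        action resets it to the start. Reward is 1 only at the end of the
        chain, so a random policy succeeds with probability (1/A)^H per episode."""

        def __init__(self, horizon=10, num_actions=4, seed=0):
            rng = random.Random(seed)
            self.horizon = horizon
            self.num_actions = num_actions
            # The hidden 'combination': the single advancing action in each state.
            self.correct = [rng.randrange(num_actions) for _ in range(horizon)]
            self.state = 0

        def reset(self):
            self.state = 0
            return self.state

        def step(self, action):
            if action == self.correct[self.state]:
                self.state += 1
                done = self.state == self.horizon
                reward = 1.0 if done else 0.0  # no shaping: reward only at the very end
                return self.state, reward, done
            self.state = 0  # any wrong action sends the agent back to the start
            return self.state, 0.0, False

    def count_successes(env, episodes=10000):
        """How often does undirected (uniform random) exploration ever see a reward?"""
        successes = 0
        for _ in range(episodes):
            env.reset()
            for _ in range(env.horizon):
                _, reward, done = env.step(random.randrange(env.num_actions))
                if done:
                    successes += 1
                    break
        return successes

    # With horizon 10 and 4 actions, a rewarding episode happens with probability
    # about 1e-6, so 10,000 random episodes almost surely give no learning signal.
    print(count_successes(KeyLockMDP(horizon=10, num_actions=4)))

An ‘antishaped’ variant, where intermediate rewards actively point away from the rewarding sequence, makes the same point for algorithms that follow local rewards.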

What’s an alternative?

Each of the issues above is addressable.

  • Build fundamental theories of what are statistically and computationally tractable sub-problems of Reinforcement Learning. These tractable sub-problems form the ‘APIs’ of systems for solving these problems. Examples of this include simpler (Contextual Bandits), intermediate (learning to search), and more advanced (Contextual Decision Processes) settings; a minimal contextual bandit sketch appears after this list.
  • Work on real-world problems. The obvious antidote to simulation is reality, driving both the need to create systems that work in reality and a research agenda around reality-centered issues like performance at low sample complexity. There are some significant difficulties with this—reinforcement-style algorithms require interactive access to learn, which often drives research towards companies with the necessary infrastructure. Nevertheless, offline evaluation on real-world data does exist, and the choice to emphasize reality-centered research directions is open to everyone.
  • The combination of fundamental theories and a platform which distills what has been learned, so it is not forgotten and is continually improved upon, provides a stronger basis for expecting generalization to the next problem.
  • The shortest path to creating valuable applications in the real world is to simply work on creating valuable applications in the real world. Doing this in a manner guided by other elements of the research program is just good sense.
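
To make the contextual bandit ‘API’ and offline evaluation concrete, here is a minimal Python sketch. The names and toy data are illustrative assumptions of mine, not taken from any particular system: an epsilon-greedy policy logs the probability of each action it chooses, and an inverse propensity score (IPS) estimator then evaluates a different policy offline from that logged data.

    import random

    def epsilon_greedy(scores, epsilon=0.1, rng=random):
        """Choose an action and return (action, probability) so the propensity
        can be logged for later off-policy evaluation."""
        num_actions = len(scores)
        best = max(range(num_actions), key=lambda a: scores[a])
        action = rng.randrange(num_actions) if rng.random() < epsilon else best
        prob = epsilon / num_actions + (1 - epsilon) * (1.0 if action == best else 0.0)
        return action, prob

    def ips_value(logged, target_policy):
        """Inverse propensity score estimate of a target policy's value from
        (context, action, probability, reward) tuples logged by another policy."""
        total = 0.0
        for context, action, prob, reward in logged:
            if target_policy(context) == action:
                total += reward / prob  # reweight matching actions by 1/propensity
        return total / len(logged)

    rng = random.Random(0)

    def true_reward(context, action):
        # Toy world: the action that matches the context is the good choice.
        return 1.0 if action == context else 0.0

    logged = []
    for _ in range(10000):
        context = rng.randrange(2)
        scores = [rng.random() for _ in range(3)]  # an uninformed logging policy
        action, prob = epsilon_greedy(scores, epsilon=0.2, rng=rng)
        logged.append((context, action, prob, true_reward(context, action)))

    # Offline evaluation: estimate the value of the policy 'always play the context'
    # without ever deploying it; the IPS estimate should come out near 1.0.
    print(ips_value(logged, target_policy=lambda context: context))

The point of the sketch is the API: because the propensities are recorded at decision time, a new policy can be evaluated on real-world logs without interactive access, which is one way research outside large platform companies can stay grounded in reality.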

The above must be applied in moderation—some emphasis on theory, some emphasis on real world applications, some emphasis on platforms, and some emphasis on empirics. This has been my research approach for a little over 10 years, ever since I started working on contextual bandits.

Let’s call the first research program ‘empirical simulation’ and the second research program ‘real fundamentals’. The empirical simulation approach has a clear strong advantage in that it creates impressive demos, which creates funding, which creates more research. The threshold for contribution to the empirical simulation approach may also be lower simply because it requires mastery of fewer elements, implying people can more easily participate in it. At the same time, the real fundamentals approach has clear advantages in addressing the weaknesses of the empirical simulation approach. At a concrete level, this means we have managed to define and create fundamentals through research while creating real-world applications and value radically more efficiently than the empirical simulation approach has achieved.

The ‘real fundamentals’ concept is behind the open positions above. These positions have been designed to come with both the colleagues and the mandate to address the most difficult research problems, along with the organizational leverage to change the world. For people interested in fundamentals and in making things happen in the real world, these are prime positions—please consider joining us.


When the bubble bursts…

Mon, 2018-06-04 16:39

Consider the following facts:

  1. NIPS submissions are up 50% this year, to ~4800 papers.
  2. There is significant evidence that the process of reviewing papers in machine learning is creaking under several years of exponentiating growth.
  3. Public figures often overclaim the state of AI.
  4. Money rains from the sky on ambitious startups with a good story.
  5. Apparently, we now even have a fake conference website (https://nips.cc/ is the real one for NIPS).

We are clearly not in a steady-state situation. Is this a bubble or a revolution? The answer surely includes a bit of revolution—the fields of vision and speech recognition have been turned over by great empirical successes created by deep neural architectures and more generally machine learning has found plentiful real-world uses.

At the same time, I find it hard to believe that we aren’t living in a bubble. There was an AI bubble in the 1980s (before my time), a tech bubble around 2000, and we seem to have a combined AI/tech bubble going on right now. This is great in some ways—many companies are handing out professional-sports-scale signing bonuses to researchers. It’s a little worrisome in other ways—can the field effectively handle the stress of the influx?

It’s always hard to say when and how a bubble bursts. It might happen today or in several years and it may be a coordinated failure or a series of uncoordinated failures.

As a field, we should consider the coordinated failure case a little bit. What fraction of the field is currently at companies, or in units at companies, which are very expensive without yet justifying that expense? It’s no longer a small fraction, so there is a chance of something traumatic for both the people and the field when and where there is a sudden cut-off. My experience is that cuts typically happen quite quickly when they come.

As an individual researcher, consider this an invitation to awareness and a small amount of caution. I’d like everyone to be fully aware that we are in a bit of a bubble right now and consider it in their decisions. Caution should not be overdone—I’d gladly repeat the experience of going to Yahoo! Research even knowing how it ended. There are two natural elements here:

  1. Where do you work as a researcher? The best place to be when a bubble bursts is on the sidelines.
    1. Is it in the middle of a costly venture? Companies are not good places for this in the long term, whether a startup or a business unit. Being a researcher at a place desperately trying to figure out how to make research valuable doesn’t sound pleasant.
    2. Is it in the middle of a clearly valuable venture? That could be a good place. If you are interested, we are hiring.
    3. Is it in academia? Academia has a real claim to stability over time, but at the same time opportunity may be lost. I’ve greatly enjoyed and benefited from the opportunity to work with highly capable colleagues on the most difficult problems. Assembling the capability to do that in an academic setting seems difficult since the typical maximum scale of research in academia is a professor+students.
  2. What do you work on as a researcher? Some approaches are more “bubbly” than others—they might look good, but do they really provide value?
    1. Are you working on intelligence imitation or intelligence creation? Intelligence creation ends up being more valuable in the long term.
    2. Are you solving synthetic or real-world problems? If you are solving real-world problems, you are almost certainly creating value. Synthetic problems can lead to real-world solutions, but the path is often fraught with unforeseen difficulties.
    3. Are you working on a solution to one problem or many problems? A wide applicability for foundational solutions clearly helps when a bubble bursts.

Researchers have a great resource for surviving a bubble bursting—a built-up public record of their accomplishments. If you are in a good environment doing valuable things and that environment happens to implode one day, the strength of your publications is an immense aid in landing on your feet.


Reinforcement Learning Platforms

Mon, 2018-04-16 16:17

If you are interested in building an industrial Reinforcement Learning platform, we are hiring a data scientist and multiple developers as a followup to last year’s hiring. Please apply if interested, as this is a real chance to be a part of building the future.
