Mixture of experts

Mixture of experts (MoE) is a machine learning technique in which multiple expert networks (learners) are used to divide a problem space into homogeneous regions.[1] It differs from ensemble techniques in that typically only one or a few expert models are run for each input, rather than combining the results of all models.

An example from computer vision is combining one neural network model for human detection with another for pose estimation.
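
The routing behaviour can be illustrated with a minimal sketch (hypothetical dimensions and linear experts, written in Python with NumPy; none of the names come from the cited sources). A softmax gating network scores the experts for each input, and only the highest-scoring experts are evaluated, whereas an ensemble would run all of them and average the results.

import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 4 linear experts mapping 8 inputs to 2 outputs, plus a linear gate.
n_experts, d_in, d_out = 4, 8, 2
experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]
gate_w = rng.normal(size=(d_in, n_experts))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x, top_k=1):
    """Sparse mixture: evaluate only the top_k experts chosen by the gate."""
    scores = softmax(x @ gate_w)            # gating probabilities over experts
    chosen = np.argsort(scores)[-top_k:]    # indices of the top_k experts
    weights = scores[chosen] / scores[chosen].sum()
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

def ensemble_forward(x):
    """Ensemble baseline: every expert is evaluated and the outputs averaged."""
    return sum(x @ W for W in experts) / n_experts

x = rng.normal(size=d_in)
print(moe_forward(x))       # evaluates 1 of the 4 experts
print(ensemble_forward(x))  # evaluates all 4 experts

With top_k=1 only a single expert's computation is performed for each input, which is the source of the saving compared with running every model in an ensemble.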

Hierarchical mixture

If the output is conditioned on multiple levels of (probabilistic) gating functions, the mixture is called a hierarchical mixture of experts.[2]
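
In a standard two-level formulation (notation illustrative, not taken from the cited presentation), the output is

  y(x) = \sum_i g_i(x) \sum_j g_{j \mid i}(x)\, y_{ij}(x)

where g_i(x) is the top-level gate's weight for branch i, g_{j|i}(x) is the weight that branch i's gate assigns to its expert j, and y_{ij}(x) is that expert's output; both gates are typically softmax functions of the input.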

A gating network decides which expert to use for each input region. Learning thus consists of fitting two sets of parameters (a training sketch is given after the list):

  • those of the individual learners and
  • those of the gating network.
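
Both sets of parameters can be fitted jointly, for example by gradient descent on a loss computed from the mixture output. The following minimal sketch (hypothetical layer sizes, a dense softmax gate rather than a sparse one, written in Python with PyTorch) shows that the gating network is simply another trainable module updated alongside the experts.

import torch
import torch.nn as nn

class MixtureOfExperts(nn.Module):
    def __init__(self, d_in, d_out, n_experts):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(d_in, d_out) for _ in range(n_experts)])
        self.gate = nn.Linear(d_in, n_experts)   # gating network

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)             # (batch, n_experts)
        outputs = torch.stack([e(x) for e in self.experts], -1)   # (batch, d_out, n_experts)
        return (outputs * weights.unsqueeze(1)).sum(-1)           # gate-weighted combination

model = MixtureOfExperts(d_in=8, d_out=1, n_experts=4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)   # updates gate *and* expert parameters
x, y = torch.randn(32, 8), torch.randn(32, 1)

opt.zero_grad()
loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()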

Applications

Meta uses MoE in its NLLB-200 translation system, which uses multiple MoE models that share capacity for use by low-resource languages with relatively little data.[3]

References

  1. Baldacchino, Tara; Cross, Elizabeth J.; Worden, Keith; Rowson, Jennifer (2016). "Variational Bayesian mixture of experts models and sensitivity analysis for nonlinear dynamical systems". Mechanical Systems and Signal Processing. 66–67: 178–200. Bibcode:2016MSSP...66..178B. doi:10.1016/j.ymssp.2015.05.009.
  2. Hauskrecht, Milos. "Ensamble methods: Mixtures of experts (Presentation)" (PDF).
  3. "200 languages within a single AI model: A breakthrough in high-quality machine translation". ai.facebook.com. 2022-06-19. Archived from the original on 2023-01-09.
