
[Concept Notes] What is an Ablation Study?

Category: AI/CV & SSL | 2019. 10. 31. 16:09

An ablation study typically refers to removing some “feature” of the model or algorithm, and seeing how that affects performance.

Examples:

  • An LSTM cell has four gated components: the input gate, the forget gate, the output gate, and the candidate cell update. We might ask: are all four necessary? What if one is removed? Indeed, a lot of experimentation has gone into LSTM variants, the GRU (which is simpler) being a notable example.
  • If certain tricks are needed to get an algorithm to work, it is useful to know whether the algorithm is robust to removing them. For example, DeepMind’s original DQN paper reports (1) updating the target network only periodically and (2) using a replay buffer rather than updating online. It is very useful for the research community to know that both of these tricks are necessary in order to build on top of the results.
  • If an algorithm is a modification of a previous work, and has multiple differences, researchers want to know what the key difference is.
  • Simpler is better (inductive prior towards simpler model classes). If you can get the same performance with two models, prefer the simpler one.

 

In short: remove the "features" of a model or algorithm one by one and check how much doing so affects performance.
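
The procedure is simple to express in code. Below is a minimal, hypothetical sketch (a toy MLP on synthetic data; the flags `use_hidden_layer` and `use_dropout` are illustrative choices, not taken from the Quora answer or the papers above): the same training routine is run for the full model and once per removed component, and validation accuracy is compared.

```python
# Minimal ablation-study sketch (hypothetical components and synthetic data):
# train the same model several times, each time with one component switched off,
# and compare validation accuracy against the full model.

import torch
import torch.nn as nn

torch.manual_seed(0)

# --- toy binary classification data: two Gaussian blobs in 20 dimensions ---
def make_data(n=2000, dim=20):
    x0 = torch.randn(n // 2, dim) - 1.0
    x1 = torch.randn(n // 2, dim) + 1.0
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(n // 2), torch.ones(n // 2)]).long()
    perm = torch.randperm(n)
    return x[perm], y[perm]

x_train, y_train = make_data()
x_val, y_val = make_data(n=500)

# --- model whose "features" can be toggled off one at a time ---
class MLP(nn.Module):
    def __init__(self, use_hidden_layer=True, use_dropout=True):
        super().__init__()
        layers = [nn.Linear(20, 64), nn.ReLU()]
        if use_hidden_layer:                 # ablation target 1: second hidden layer
            layers += [nn.Linear(64, 64), nn.ReLU()]
        if use_dropout:                      # ablation target 2: dropout
            layers += [nn.Dropout(0.2)]
        layers += [nn.Linear(64, 2)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

def train_and_eval(**flags):
    model = MLP(**flags)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(200):                     # full-batch training, kept short on purpose
        opt.zero_grad()
        loss = loss_fn(model(x_train), y_train)
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        acc = (model(x_val).argmax(dim=1) == y_val).float().mean().item()
    return acc

# full model vs. one-component-removed variants
settings = {
    "full model":     dict(use_hidden_layer=True,  use_dropout=True),
    "- hidden layer": dict(use_hidden_layer=False, use_dropout=True),
    "- dropout":      dict(use_hidden_layer=True,  use_dropout=False),
}
for name, flags in settings.items():
    print(f"{name:15s} val acc = {train_and_eval(**flags):.3f}")
```

Each row of the output shows how much accuracy changes when that single component is removed; on this easy toy data the differences will be small, but the loop is exactly the procedure an ablation table in a paper reports.
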

Source: https://www.quora.com/In-the-context-of-deep-learning-what-is-an-ablation-study

 
