Opened 12 years ago
Last modified 8 years ago
#1795 closed feature request
Gradient boosting meta-learner for regression and classification — at Version 4
| Reported by: | gkronber | Owned by: | gkronber |
|---|---|---|---|
| Priority: | low | Milestone: | HeuristicLab 3.3.14 |
| Component: | Algorithms.DataAnalysis | Version: | |
| Keywords: | | Cc: | |
Description (last modified by gkronber)
It would be nice to support a form of boosting in which multiple models are learned step by step and the weights of the observations are adapted based on the residuals of the models learned so far.
Friedman's "Stochastic Gradient Boosting" (1999) could be implemented for regression and classification problems.
Since version 3.3.12 there is a specific implementation of gradient boosted trees. It would be great if we could also implement gradient boosting as a meta-learner that uses any regression algorithm as the base learner.
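The meta-learner idea can be sketched roughly as follows: fit an initial constant model, then repeatedly fit the base learner to the current residuals (the negative gradient of the squared-error loss) and add the new model with a shrinkage factor. This is only an illustrative Python sketch, not HeuristicLab code; the names `gradient_boost`, `base_learner`, and `nu` are assumptions for illustration, and the interface (a base learner that takes data and targets and returns a predict function) is hypothetical.

```python
def gradient_boost(X, y, base_learner, n_stages=100, nu=0.1):
    """Gradient boosting for regression with squared-error loss.

    X: list of inputs, y: list of targets.
    base_learner(X, r) must return a callable h with h(x) -> float.
    Returns a predict function F(x) = f0 + nu * sum_m h_m(x).
    """
    f0 = sum(y) / len(y)                 # stage 0: constant model (mean)
    residuals = [yi - f0 for yi in y]    # negative gradient of squared error
    stages = []
    for _ in range(n_stages):
        h = base_learner(X, residuals)   # fit base model to current residuals
        stages.append(h)
        for i, xi in enumerate(X):
            residuals[i] -= nu * h(xi)   # update residuals with shrinkage nu

    def predict(x):
        return f0 + nu * sum(h(x) for h in stages)
    return predict


def linear_base(X, r):
    """Example base learner: simple 1-D least-squares line (hypothetical)."""
    n = len(X)
    mx, mr = sum(X) / n, sum(r) / n
    denom = sum((x - mx) ** 2 for x in X) or 1.0
    b = sum((x - mx) * (ri - mr) for x, ri in zip(X, r)) / denom
    a = mr - b * mx
    return lambda x: a + b * x
```

Any regression routine with this fit-and-predict shape could be plugged in as `base_learner`, which is exactly the meta-learner aspect the ticket asks for.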
Change History (4)
comment:1 Changed 11 years ago by gkronber
- Priority changed from medium to low
comment:2 Changed 10 years ago by gkronber
- Description modified (diff)
comment:3 Changed 9 years ago by gkronber
- Description modified (diff)
- Summary changed from Boosting support for classification and regression algorithms to Gradient boosting meta-learner for regression and classification
- Version changed from 3.3.6 to branch
comment:4 Changed 8 years ago by gkronber
- Description modified (diff)
- Milestone changed from HeuristicLab 3.3.x Backlog to HeuristicLab 3.3.14
- Status changed from new to accepted
- Version branch deleted