This is my first post here.
I have a data set in which 70% of the model's errors are between 0 and 1, and the rest are between 1 and about 9. The loss function implied by least squares has an undesirable property for my application: because the errors are squared, those below 1 shrink while those above 1 become significantly larger.
In my application a linear loss function (min Σ|y − ŷ|) seems more appealing.
I have never tried to estimate a model using a linear loss function, which, as I recall (from many, many years ago), can be formulated as a linear programming problem. We currently use R but have not investigated this particular approach in it; we are open to using other software. Is R a good tool for this kind of modeling?
The data set will have between 2,000 and 4,000 observations on around 15 variables.
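For reference, the least-absolute-deviations fit I have in mind can indeed be posed as a linear program by introducing one auxiliary variable per residual (tᵢ ≥ |yᵢ − xᵢ·β|) and minimizing Σtᵢ. Below is a minimal sketch in Python with SciPy; the data are simulated stand-ins, not the real data set, and the problem size here is smaller than the 2,000–4,000 × 15 case described above:

```python
import numpy as np
from scipy.optimize import linprog

# Simulated stand-in data (the real problem has ~2,000-4,000 obs, ~15 vars).
rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(size=n)

# LAD as an LP: variables are [beta (p, free), t (n, >= 0)],
# minimize sum(t) subject to t_i >= y_i - x_i.beta and t_i >= -(y_i - x_i.beta),
# i.e.  X.beta - t <= y  and  -X.beta - t <= -y.
c = np.concatenate([np.zeros(p), np.ones(n)])
A_ub = np.block([[X, -np.eye(n)],
                 [-X, -np.eye(n)]])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * n

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
beta_lad = res.x[:p]  # the L1-loss coefficient estimates
```

The auxiliary-variable trick is the standard way to make the absolute value linear; the LP has n + p variables and 2n constraints, which is very manageable at a few thousand observations.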
I would appreciate any recommended readings on the subject that are not too technical.
Also, a very naive question: I am curious about non-regression-based machine learning approaches. Are they primarily heuristics? Do these methods have implied loss functions?
Thanks
Jim Hawkes
jhawkes@hawkeslearning.com