It's the scheme's negative conditional log-likelihood (also known as log loss), computed from the class probabilities that the scheme assigns to the test instances, using a base-2 logarithm, so it is measured in bits.
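To make that concrete, here is a minimal sketch of the computation (the function name and the toy probability values are mine, not Weka's): sum, over the test instances, minus the base-2 log of the probability the scheme assigned to each instance's actual class.

```python
import math

def class_complexity_scheme(probs, true_labels):
    """Total negative conditional log-likelihood in bits:
    for each test instance, take -log2 of the probability the
    scheme assigned to the instance's actual class, and sum."""
    return sum(-math.log2(p[y]) for p, y in zip(probs, true_labels))

# Hypothetical predicted class distributions for three test instances
# of a two-class problem, with their actual class indices.
probs = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
labels = [0, 1, 0]
print(class_complexity_scheme(probs, labels))
```

A confident correct prediction (0.9 on the true class) contributes little, while an uninformative one (0.5) contributes a full bit.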

The negative conditional log-likelihood on the training data is what logistic regression minimizes. You can view logistic regression as empirical risk minimization if you measure risk using log loss and take the set of logistic regression models as your hypothesis space.
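As a sketch of that view (toy data, plain gradient descent, and the learning rate are all illustrative assumptions, not Weka's implementation): fitting a one-feature logistic regression by descending the gradient of the average log loss is exactly minimizing the empirical risk over the logistic hypothesis space.

```python
import math

# Toy one-feature binary data: class 0 for small x, class 1 for large x.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0, 0, 1, 1]

w, b = 0.0, 0.0   # logistic regression parameters
lr = 0.5          # illustrative learning rate
for _ in range(2000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))  # P(class 1 | x)
        # (p - y) is the gradient of the per-instance log loss
        gw += (p - y) * x
        gb += (p - y)
    w -= lr * gw / len(xs)   # descend the empirical risk
    b -= lr * gb / len(xs)

# Empirical risk = average negative conditional log-likelihood
risk = 0.0
for x, y in zip(xs, ys):
    p = 1.0 / (1.0 + math.exp(-(w * x + b)))
    risk += -math.log(p if y == 1 else 1.0 - p)
risk /= len(xs)
print(w, b, risk)
```

The decision boundary ends up near x = 1.5, and the empirical risk it reports is the training-set analogue of the per-instance "Class complexity | scheme" figure (up to the choice of log base).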


On Thu, Dec 21, 2017 at 10:00 PM, Valerio jus <> wrote:
Dear Eibe, 

Thanks for the detailed reply.

1- However, in your opinion, can the entropy-based "Class complexity | scheme" statistic help us find/compute the empirical risk?

2- How should one interpret "Class complexity | scheme", and how can it be used for scheme evaluation?



Wekalist mailing list