Thanks for your help, Peter, much appreciated.
On Fri, 01 Jun 2007 09:34:18 +1200
Peter Reutemann <fracpete(a)waikato.ac.nz> wrote:
I'm looking to start a series of experiments based on the SVM
classifier. Essentially, my initial evaluation of what I have to assess
comes to over 1,000 experiments. On a home computer (or a maximum of
two), I'll obviously have to scale these back by dividing them up into
mini-projects, or I'll never complete what I have to :)
In light of this, my question relates to parameter tuning in an
experimental environment. What is the best way to tune an SVM
classifier? Is this even necessary for good results using WEKA?
Obviously, the thought of trying too many parameters concerns me, as it
would multiply the number of experiments required. I'm therefore
looking for the best way to avoid running too many experiments while
still coming close to the best possible results.
Here I'm referring to the classifier alone, not any preceding stages,
so I'm not including things like dimensionality reduction or text
representation; it's essentially the ML parameters I'm asking about.
The parameter setup always depends on the data you're using
(there's no silver bullet!). E.g., using an RBFKernel in SMO, you need
to tune the gamma parameter of the kernel (and possibly the complexity
parameter of SMO itself). The developer version contains "GridSearch", a
meta-classifier that allows parameter tuning of a base classifier (it
explores the two-dimensional plane of possible settings of two
parameters you want to tune). Check out the Javadoc of that classifier
for more information.
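As a rough illustration, here is a minimal Java sketch of such a
GridSearch setup for SMO with an RBFKernel, assuming the 3.5.x developer
API; the dataset filename is a placeholder, and the property paths
("classifier.kernel.gamma", "classifier.c") and the EVALUATION_ACC tag
should be verified against the Javadoc of your version:

import weka.classifiers.functions.SMO;
import weka.classifiers.functions.supportVector.RBFKernel;
import weka.classifiers.meta.GridSearch;
import weka.core.Instances;
import weka.core.SelectedTag;
import weka.core.converters.ConverterUtils.DataSource;

public class SMOGridSearch {
  public static void main(String[] args) throws Exception {
    // Load the dataset (placeholder name); use the last attribute as the class.
    Instances data = DataSource.read("mydata.arff");
    data.setClassIndex(data.numAttributes() - 1);

    // Base classifier: SMO with an RBF kernel.
    SMO smo = new SMO();
    smo.setKernel(new RBFKernel());

    GridSearch grid = new GridSearch();
    grid.setClassifier(smo);
    // Optimize accuracy (tag constant assumed -- check your version's Javadoc).
    grid.setEvaluation(new SelectedTag(GridSearch.EVALUATION_ACC,
                                       GridSearch.TAGS_EVALUATION));

    // X axis: the kernel's gamma, explored as 10^i for i = -5..2
    // (property path assumed: GridSearch -> classifier -> kernel -> gamma).
    grid.setXProperty("classifier.kernel.gamma");
    grid.setXMin(-5);
    grid.setXMax(2);
    grid.setXStep(1);
    grid.setXBase(10);
    grid.setXExpression("pow(BASE,I)");

    // Y axis: SMO's complexity parameter C, explored as 10^i for i = -3..3.
    grid.setYProperty("classifier.c");
    grid.setYMin(-3);
    grid.setYMax(3);
    grid.setYStep(1);
    grid.setYBase(10);
    grid.setYExpression("pow(BASE,I)");

    grid.buildClassifier(data);
    // The model summary includes the best parameter pair found.
    System.out.println(grid);
  }
}

Each axis here is explored on a log scale (powers of 10), which is the
usual practice when tuning gamma and C for an RBF kernel.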
Of course, it's not guaranteed that a setup that works fine for one of
your experiments will work well with the others...
Peter Reutemann, Dept. of Computer Science, University of Waikato, NZ
Ph. +64 (7) 858-5174