I don't know where to look for references for the algorithms implemented
in Weka, but I suspect in this particular case it's:
T. Dietterich. An experimental comparison of three methods
for constructing ensembles of decision trees: bagging, boosting,
and randomization. Machine Learning, 40(2):139-157, 2000.
> From: Praveen Boinee
> Dear Weka People
> First of all I would like to acknowledge your hard work behind this
> wonderful machine learning tool!
> You made this very hard and difficult subject a fun game!
> O.K., coming to my question: surprisingly, the RandomTree algorithm
> in the tree section has outperformed many famous algorithms like
> AdaBoost.M1 and Random Forests on my classification problem.
> I would like to know where to find some more technical details on this.
> Thanking you and waiting for your reply
Is there a simple way to get the z-scores (coefficients divided by
standard errors) from weka.classifiers.functions.Logistic, or does
the iterative optimization procedure used there not compute the
necessary quantities? I think I would need the diagonal of the inverse
Hessian, which does not seem to be implemented...
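For what it's worth, if the diagonal of the inverse Hessian (i.e. the estimated coefficient variances) were available, the Wald z-scores would follow in one line. A hypothetical sketch — the class, method name, and arrays below are mine for illustration, not Weka's API:

```java
// Hypothetical post-processing sketch: Weka's Logistic does not expose
// standard errors, but given the diagonal of the inverse Hessian at the
// optimum (the estimated variances), Wald z-scores are immediate.
class WaldZ {
    // z_i = beta_i / sqrt(var_i)
    static double[] zScores(double[] beta, double[] invHessianDiag) {
        double[] z = new double[beta.length];
        for (int i = 0; i < beta.length; i++) {
            z[i] = beta[i] / Math.sqrt(invHessianDiag[i]);
        }
        return z;
    }

    public static void main(String[] args) {
        double[] beta = {1.2, -0.8};   // illustrative coefficients
        double[] var  = {0.04, 0.16};  // illustrative variances
        double[] z = zScores(beta, var);
        System.out.println(z[0] + " " + z[1]);  // 6.0 -2.0
    }
}
```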
Dr.techn. Alexander K. Seewald
Austrian Research Institute alexsee(a)oefai.at
for Artificial Intelligence www.oefai.at
Tel. +43(1)5336112/18 Mob. +43(664)1106886
Information wants to be free;
Information also wants to be expensive (Stewart Brand)
--------------- alex.seewald.at ----------------
I'm working with a copy of Weka 3.4 under Linux. While playing with the
CLI a few questions arose which I'd like to pose here:
Why do you let the user specify the initial seed? Wouldn't it be best to
always just take the POSIX 1003.1-2001 gettimeofday(2) and seed the
java.util.Random() with this?
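On the seed question: one likely reason for a user-specified seed is reproducibility — the same seed replays the same random sequence, so experiments can be repeated and compared, which a time-based seed cannot do across runs. A minimal illustration with plain java.util.Random, nothing Weka-specific:

```java
import java.util.Arrays;
import java.util.Random;

// A fixed seed reproduces the same sequence every run; a time-based
// seed (as gettimeofday/currentTimeMillis would give) does not.
class SeedDemo {
    static int[] draw(long seed, int n) {
        Random r = new Random(seed);
        int[] out = new int[n];
        for (int i = 0; i < n; i++) out[i] = r.nextInt(100);
        return out;
    }

    public static void main(String[] args) {
        int[] a = draw(42L, 5);
        int[] b = draw(42L, 5);                        // identical to a
        int[] c = draw(System.currentTimeMillis(), 5); // differs run to run
        System.out.println(Arrays.equals(a, b));       // true
    }
}
```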
I'm a bit confused concerning the major differences between the CLI and
the GUI version regarding parameter selection. In the GUI version I can
specify which Attributes to hide or ignore for a SimpleKMeans run,
however I'm not sure how to specify this in the CLI run. I have learned
about the -p option, but the explanation in the help text confused me
more than it helped. So if I have 3 attributes and I would like to ignore
the first one, do I have to specify -p 2,3? The first attribute is a
The GUI version unfortunately doesn't show me the real command that
gets executed (showing it would be a useful feature, IMHO), and the CLI
version outputs completely different data when specifying -p 2,3.
Any help or suggestions appreciated. Best regards,
Roberto Nibali, ratz
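Not an answer from the thread, but a common workaround sketch for ignoring attributes on the command line: remove the attribute with Weka's Remove filter first, then cluster on the result. The class names below are real Weka 3.4 classes; the file names and parameter values are placeholders:

```shell
# Drop attribute 1, then cluster on the remaining attributes.
java weka.filters.unsupervised.attribute.Remove -R 1 -i data.arff -o reduced.arff
java weka.clusterers.SimpleKMeans -t reduced.arff -N 3 -S 42
```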
This might be of interest: we have just re-worked the
weka.classifiers.functions.RBFNetwork a little bit (because so far
performance has been pretty poor). It appears to do a more reasonable
job now. The new version is in CVS (note that some supporting classes
have also changed).
The new version implements (what appears to be called) a "normalized"
Gaussian RBF network, and applies k-means separately to each class to
find the basis functions. It also now standardizes the data prior to
training.
For more information on RBF networks I recommend
C.M. Bishop's book on neural networks
(http://research.microsoft.com/~cmbishop/nnpr.htm) and the NN FAQ.
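A normalized Gaussian RBF layer can be sketched as follows. This is my reading of "normalized" (each Gaussian activation divided by the sum of all activations, forming a partition of unity), not necessarily the exact code in CVS; the centers here are hard-coded placeholders where the per-class k-means results would go:

```java
// Sketch of normalized Gaussian RBF activations: phi_i(x) is the
// Gaussian kernel at center c_i, divided by the sum over all centers,
// so the outputs sum to one (a partition of unity).
class NormalizedRBF {
    static double[] activations(double[] x, double[][] centers, double sigma) {
        double[] phi = new double[centers.length];
        double sum = 0.0;
        for (int i = 0; i < centers.length; i++) {
            double d2 = 0.0;  // squared Euclidean distance to center i
            for (int j = 0; j < x.length; j++) {
                double diff = x[j] - centers[i][j];
                d2 += diff * diff;
            }
            phi[i] = Math.exp(-d2 / (2.0 * sigma * sigma));
            sum += phi[i];
        }
        for (int i = 0; i < phi.length; i++) phi[i] /= sum;  // normalization
        return phi;
    }

    public static void main(String[] args) {
        double[][] centers = {{0.0, 0.0}, {1.0, 1.0}};  // e.g. from k-means
        double[] phi = activations(new double[]{0.0, 0.0}, centers, 1.0);
        System.out.println(phi[0] + phi[1]);  // activations sum to ~1.0
    }
}
```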
On Nov 16, 2004, at 12:51 PM, wekalist-request(a)list.scms.waikato.ac.nz wrote:
> From: Roberto Nibali <ratz(a)drugphish.ch>
> Date: November 16, 2004 8:20:17 AM GMT+13:00
> To: lmattua(a)terra.com.br
> Cc: wekalist(a)list.scms.waikato.ac.nz
> Subject: Re: [Wekalist] Weka and RBF
>> I'm starting work to implement an RBF (radial basis function)
>> algorithm in the Weka package in order to get my MSc degree. Until this
>> time I have not been able to find anything related to this topic on the
>> net. Matlab is the nearest thing I found. Anyone with this expertise?
>> Suggestions? Book names?
> Just when I finished my initial reply to you, I discovered
> weka/classifiers/functions/RBFNetwork.java and
> Maybe this helps. Have a nice one,
> Roberto Nibali, ratz
Dear Weka People
First of all I would like to acknowledge your hard work behind this
wonderful machine learning tool!
You made this very hard and difficult subject a fun game!
O.K., coming to my question: surprisingly, the RandomTree algorithm in
the tree section has outperformed many famous algorithms like
AdaBoost.M1 and Random Forests on my classification problem.
I would like to know where to find some more technical details on this.
Thanking you and waiting for your reply
I'm starting work to implement an RBF (radial basis function) algorithm
in the Weka package in order to get my MSc degree. Until this time
I have not been able to find anything related to this topic on the net.
Matlab is the nearest thing I found. Anyone with this expertise?
Suggestions? Book names?
You must be using JDK 1.5. You can do one of two things: either use
JDK 1.4.x, or modify the file weka/classifiers/bayes/BayesNet.java.
In BayesNet.java, the toXMLBIF03() method writes the XML declaration
with version "0.1" (which is exactly what the parser complains about);
change that version string to "1.0".
Recompile, run under JDK 1.5, and it should be fine.
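For the curious, the underlying failure is easy to reproduce with nothing but the JDK's own XML parser: a declaration with any version other than "1.0" is a fatal error under the XML 1.0 grammar, which is why JDK 1.5's stricter parser rejects the file. A small self-contained illustration:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;

// The JDK's XML parser rejects declarations whose version is not "1.0";
// this mirrors the SAXParseException reported in the bug below.
class XmlVersionDemo {
    static boolean parses(String xml) {
        try {
            DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
            return true;
        } catch (Exception e) {  // SAXParseException for version "0.1"
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(parses("<?xml version=\"1.0\"?><net/>")); // true
        System.out.println(parses("<?xml version=\"0.1\"?><net/>")); // false
    }
}
```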
Date: Sat, 13 Nov 2004 07:37:40 -0800 (PST)
From: Amira Djebbari <amira_djebbari(a)yahoo.com>
Subject: [Wekalist] Error with visualizing graph from Bayes net
I'm getting an error when visualizing the graph after building a
Bayes net.
Here are the steps I took: I loaded weather.nominal.arff from the data
directory, then in Classify chose BayesNet. The analysis completes, but
when I try to visualize the graph the window gets stuck
on "removing gaps by adding dummy vertices".
Below is the error I see on the terminal:
[Fatal Error] :1:20: XML version "0.1" is not
supported, only XML 1.0 is supported.
org.xml.sax.SAXParseException: XML version "0.1" is
not supported, only XML 1.0 is supported.
Please let me know if you have any ideas. Am I doing
something wrong? How can I fix it?
Thanks for your advice.
I wish to get the predicted class value of an instance using
classifyInstance. An illustration is shown below:
My reference dataset is the simple weather.arff
Instance testInstance = sunny, 67, 87, FALSE, ?
double prediction = ibk.classifyInstance(testInstance);
I also tried
double prediction = classifier.classifyInstance(testInstance);
I am passing the testInstance as an argument to the classifyInstance method.
Each time I run this, it outputs the nearest neighbours to the test
instance, after which it throws this exception:
"UnassignedClassException: Class index is negative (not set)".
I am actually confused about how to go about it, because I don't wish to
set a value for the class attribute, since I want my IBk to predict the
class attribute (yes or no).
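An untested sketch of the usual fix, written against the Weka API (it assumes weka.jar on the classpath and a weather.arff file, so it is not runnable standalone): the class attribute must exist in the header and be declared via setClassIndex(), but its value on the test instance may stay missing, and IBk will still predict it.

```java
import java.io.BufferedReader;
import java.io.FileReader;
import weka.classifiers.lazy.IBk;
import weka.core.Instance;
import weka.core.Instances;

public class PredictPlay {
    public static void main(String[] args) throws Exception {
        Instances data = new Instances(
            new BufferedReader(new FileReader("weather.arff")));
        // The crucial step: tell Weka which attribute is the class.
        // Without it, classifyInstance throws UnassignedClassException.
        data.setClassIndex(data.numAttributes() - 1);

        IBk ibk = new IBk();
        ibk.buildClassifier(data);

        // Test instance with the class VALUE left missing ("?").
        Instance test = (Instance) data.instance(0).copy();
        test.setDataset(data);
        test.setClassMissing();

        double pred = ibk.classifyInstance(test); // index into {yes, no}
        System.out.println(data.classAttribute().value((int) pred));
    }
}
```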