I can run it on Win2k Client.
I used the following command:
C:\DJFHQ\Info_Extraction\Weka>java -classpath weka-3-2-3\weka-3-2-3\weka.jar
I have WekaMetal.jar in the CLASSPATH. I also used WinZip to "unzip" the JAR
files for Weka 3.2.3 and WekaMetal after downloading them.
My version of Java is the following:
java version "1.3.0_02"
Java(TM) 2 Runtime Environment, Standard Edition (build 1
Java HotSpot(TM) Client VM (build 1.3.0_02, mixed mode)
PS: I also noticed a possible typo in your quoted string, which should maybe be
"G:\Program Files\Weka-3-2-3\weka.jar", i.e. the "-3" is missing.
From: Chris Bacon [mailto:firstname.lastname@example.org]
Sent: Saturday, 17 August 2002 10:29 AM
Subject: [Wekalist] WekaMetal and Windows?
Has anyone been able to run WekaMetal on Windows? I've tried on Win2k
Server from a command prompt, where I get this:
G:\Program Files\WekaMetal>java -jar WekaMetal.jar:"G:\Program
Exception in thread "main" java.util.zip.ZipException: The filename,
directory name, or volume label syntax is incorrect
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(Unknown Source)
at java.util.jar.JarFile.<init>(Unknown Source)
at java.util.jar.JarFile.<init>(Unknown Source)
I've played around with the classpath, but it didn't help. I've also tried
running it from within IBM WebSphere Application Developer, where it gets an
IO error because it can't find the cache files (ap.cache and dc.cache), even
though they're both in the same directory as the WekaMetal.jar file.
Any help would be appreciated.
Wekalist mailing list
I am applying ID3 to a medical data set and was wondering if there is a way to
get Weka to output the results of the ID3 algorithm graphically so that the
Specialist can interpret the results easily?
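As far as I can tell, Id3 (unlike J48) does not implement Weka's Drawable interface, so there is no built-in graph output for it; if J48 is an acceptable substitute, its tree can be visualized directly in the GUI. Failing that, one workaround is to convert the classifier's toString() text tree into Graphviz dot and render it as a picture for the specialist. A rough sketch; the indent format is an assumption (Weka marks each level with a leading '|', but the exact spacing varies between versions):

```java
import java.util.ArrayList;
import java.util.List;

// Convert a Weka-style indented text tree (the output of a tree
// classifier's toString()) into Graphviz dot, so it can be rendered
// as a picture with `dot -Tpng`.
// Assumption: depth increases one level at a time and each level is
// marked by a leading '|'; only the '|' characters are counted, since
// the spacing after them varies between Weka versions.
public class TreeToDot {

    public static String toDot(String tree) {
        StringBuilder dot = new StringBuilder("digraph Tree {\n");
        dot.append("  n0 [label=\"root\"];\n");
        // lastAtDepth.get(d) = id of the most recent node that can be
        // the parent of a line at depth d (index 0 holds the root).
        List<Integer> lastAtDepth = new ArrayList<>();
        lastAtDepth.add(0);
        int nextId = 1;
        for (String line : tree.split("\n")) {
            if (line.trim().isEmpty()) continue;
            int depth = 0;
            for (char ch : line.toCharArray()) {
                if (ch == '|') depth++;
                else if (ch != ' ') break;
            }
            String label = line.replaceAll("^[| ]+", "").trim();
            int id = nextId++;
            dot.append("  n").append(id).append(" [label=\"")
               .append(label.replace("\"", "\\\"")).append("\"];\n");
            int parent = lastAtDepth.get(Math.min(depth, lastAtDepth.size() - 1));
            dot.append("  n").append(parent).append(" -> n").append(id).append(";\n");
            if (lastAtDepth.size() > depth + 1) lastAtDepth.set(depth + 1, id);
            else lastAtDepth.add(id);
        }
        return dot.append("}\n").toString();
    }

    public static void main(String[] args) {
        String sample = "outlook = sunny\n"
                      + "|  humidity = high: no\n"
                      + "|  humidity = normal: yes\n"
                      + "outlook = overcast: yes\n";
        System.out.println(toDot(sample));
    }
}
```

Feeding the resulting file to Graphviz (`dot -Tpng tree.dot -o tree.png`) gives a diagram that is much easier for a non-technical reader to follow than the raw text.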
I have written a classification scheme in Java using the Weka-library.
Now, I want to use the 'Experimenter' for doing some experiments
with this new classification scheme.
(Up till now I have written my own experiments in Java, but the
Experimenter seems like a better option.)
In the README file it is indicated that the file
GenericObjectEditor.props is the place to be. I thus copied this
file to my home directory and I added ...
monotone.MinMaxExtension,\ #line added
monotone.OSDL #line added
This works, in the sense that in the Experimenter GUI I can now choose
'MinMaxExtension' and 'OSDL', but upon choosing one of these I get the
error message 'could not create an example of weka.MinMaxExtension from
the current classpath' ...
I have to say that
1) I find it strange that the 'monotone' part is cut from the classname,
and that 'weka' is prefixed ...
2) the class 'MinMaxExtension' is part of the package 'monotone'.
I.e. the first line of 'MinMaxExtension.java' is 'package monotone;'
3) the file 'MinMaxExtension.class' is in a directory called
$SOMETHING/Source/monotone and this directory is part of my
classpath; in my .bashrc I have
Any hints on how to proceed are welcome. If possible I would like to
keep these files in the current package ....
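One way to narrow this down is to check, from a plain Java program started with exactly the same classpath the Experimenter uses, whether the class can be loaded by its fully qualified name at all. A minimal sketch ("monotone.MinMaxExtension" is the poster's class; substitute any name):

```java
// A quick classpath diagnostic: run this with exactly the same
// classpath you use for the Experimenter. If loading fails here too,
// the problem is the classpath, not the GenericObjectEditor.props entry.
public class ClasspathCheck {
    public static void main(String[] args) {
        // Default to the poster's class; pass any other name as an argument.
        String name = args.length > 0 ? args[0] : "monotone.MinMaxExtension";
        System.out.println("java.class.path = " + System.getProperty("java.class.path"));
        try {
            Class c = Class.forName(name);
            System.out.println("OK: loaded " + c.getName());
        } catch (ClassNotFoundException e) {
            System.out.println("NOT FOUND: " + name + " is not visible on this classpath");
        }
    }
}
```

Note that the JVM resolves `monotone.MinMaxExtension` relative to classpath roots, so the directory on the classpath must be the one *containing* the `monotone` directory, not the `monotone` directory itself.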
I would like to know whether someone has written/developed an independent component analysis (ICA) module for Weka.
On Oct 15, 2004, at 12:18 PM, wekalist-request(a)list.scms.waikato.ac.nz wrote:
> I compared the results of SMO with LIBSVM
> (http://www.csie.ntu.edu.tw/~cjlin/libsvm/), which is a simplification
> of both SMO (Platt) and SVMLIGHT (Joachims), using the RBF kernel with
> the same C, G, and tolerance parameters, and 10-fold cross-validation.
> With LIBSVM I got 81.44% accuracy and with SMO I got 67.14%. Why are
> they so different?
There can be multiple reasons for this (note that I don't know anything
a) SMO normalizes the attributes (but you can turn that off)
b) The dataset is small, and if you repeat the cross-validation with a
different random number seed you get a very different result (try
changing the seed). It is very unlikely that both LIBSVM and Weka
happen to shuffle the data so that the cross-validation folds are identical.
c) There is a bug somewhere in Weka's SMO (seems unlikely).
> as the value of C increases, the algorithm should try to classify the
> data more accurately.
This is incorrect (if you are referring to cross-validation). As you
increase C (and allow the algorithm to fit the training data more
closely) the accuracy on the TRAINING DATA normally goes up. However,
this might lead to overfitting, which would mean the cross-validated
error would go up.
> But that's not happening in SMO. After 1000, it is
> decreasing!! Why? I ran SMO with different data, but I am always
> getting the optimal value for C within the 100-500 range.
I would like to know the number of support vectors selected by an SVM
after training on a given dataset. I didn't find any method to get this
information. Is it available? Would it be possible to add it easily?
Dear Weka users,
I am working on a paper on speaker recognition. I used a different tool
for extracting the speech features, and I wrote a small Java program to
parse those features into feature vectors and create a Weka ARFF file.
Each vector has seventeen elements: sixteen are speaker-dependent
features (coefficients) and one is the class (the person, i.e. speaker
identification). There were ten persons' speech samples, hence ten
classes, and there are about nine hundred feature vectors per class
(person) for the testing set; there are about ten times more for
training. The feature vectors for training and testing were separate. I
used almost all of the classification algorithms, but the result of the
multilayer perceptron is as follows (I am just posting the confusion
matrix and accuracy %).
Correctly Classified Instances 4796
Incorrectly Classified Instances 6138
Kappa statistic 0.3743
Mean absolute error 0.1262
Root mean squared error 0.2774
Relative absolute error 70.0188 %
Root relative squared error 91.7837 %
Total Number of Instances 10934
=== Detailed Accuracy By Class ===
TP Rate FP Rate Precision Recall F-Measure
0.74 0.044 0.629 0.74 0.68
0.333 0.078 0.3 0.333 0.316
0.304 0.044 0.409 0.304 0.349
0.671 0.191 0.26 0.671 0.374
0.058 0.057 0.092 0.058 0.071
0.397 0.043 0.482 0.397 0.436
0.466 0.057 0.644 0.466 0.54
0.337 0.059 0.362 0.337 0.349
0.699 0.013 0.848 0.699 0.766
0.353 0.037 0.488 0.353 0.41
=== Confusion Matrix ===
a b c d e f g h i j <-- classified as
736 38 13 21 4 33 80 59 1 9 | a = 00001
21 331 79 198 74 25 190 41 17 18 | b = 00002
19 98 302 370 40 22 48 30 0 65 | c = 00003
20 31 54 667 36 39 82 40 0 25 | d = 00004
17 76 144 608 58 27 26 17 2 19 | e = 00005
55 107 9 94 61 395 16 219 21 17 | f = 00006
48 126 96 382 74 104 926 122 34 76 | g = 00007
93 151 21 81 131 34 8 335 6 134 | h = 00008
85 34 1 19 12 91 46 6 695 5 | i = 00009
77 112 19 129 139 50 17 56 44 351 | j = 00010
My question: the correctly classified instances are only 43%, but what
we can see in the confusion matrix, for most of the classes, is that
more than 50% of the instances belonging to a class (say, for instance,
class "a = 00001") are classified as that class. So can I say in the
conclusion that, given enough data for training and testing, the
classifier was able to correctly classify the instances (persons)?
I hope someone on this list will be able to guide me through this
confusion, or suggest another way to measure the classification
performance.
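For what it's worth, the per-class numbers can be recomputed directly from the posted matrix: the diagonal entry of each row divided by the row sum gives the per-class TP rate (recall), and the diagonal sum over the grand total gives the overall accuracy. A small sketch that reproduces the reported figures (4796 correct out of 10934, i.e. about 43.9%, and e.g. 0.740 recall for class a):

```java
public class ConfusionStats {
    // Confusion matrix as posted: rows = actual class, columns = predicted.
    static final int[][] M = {
        {736, 38, 13, 21, 4, 33, 80, 59, 1, 9},
        {21, 331, 79, 198, 74, 25, 190, 41, 17, 18},
        {19, 98, 302, 370, 40, 22, 48, 30, 0, 65},
        {20, 31, 54, 667, 36, 39, 82, 40, 0, 25},
        {17, 76, 144, 608, 58, 27, 26, 17, 2, 19},
        {55, 107, 9, 94, 61, 395, 16, 219, 21, 17},
        {48, 126, 96, 382, 74, 104, 926, 122, 34, 76},
        {93, 151, 21, 81, 131, 34, 8, 335, 6, 134},
        {85, 34, 1, 19, 12, 91, 46, 6, 695, 5},
        {77, 112, 19, 129, 139, 50, 17, 56, 44, 351}
    };

    static int total(int[][] m) {
        int s = 0;
        for (int[] row : m) for (int v : row) s += v;
        return s;
    }

    static int correct(int[][] m) {
        int s = 0;
        for (int i = 0; i < m.length; i++) s += m[i][i];
        return s;
    }

    // Recall (TP rate) for class i: diagonal entry over row sum.
    static double recall(int[][] m, int i) {
        int rowSum = 0;
        for (int v : m[i]) rowSum += v;
        return m[i][i] / (double) rowSum;
    }

    public static void main(String[] args) {
        System.out.printf("overall accuracy = %.3f%n", correct(M) / (double) total(M)); // prints 0.439
        for (int i = 0; i < M.length; i++)
            System.out.printf("recall class %d = %.3f%n", i, recall(M, i));
    }
}
```

This makes the asymmetry visible: per-class recall ranges from about 0.06 (class e) to 0.74 (class a), so the diagonal dominates for some classes but far from all of them, which is worth keeping in mind before concluding the classifier works "for most of the classes".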
I have attributes with an extremely skewed distribution, something like
the Pareto distribution (or a power law). Is it still a good idea to
normalize them? If so, what might be a better way to normalize them?
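For heavy-tailed attributes like these, plain min-max normalization lets a few extreme values compress everything else toward zero; a common alternative is to log-transform first (log1p, to tolerate zeros) and then normalize. A sketch contrasting the two, not Weka-specific, and assuming non-negative attribute values:

```java
public class SkewNormalize {

    // Plain min-max scaling to [0, 1].
    static double[] minMax(double[] x) {
        double lo = Double.POSITIVE_INFINITY, hi = Double.NEGATIVE_INFINITY;
        for (double v : x) { lo = Math.min(lo, v); hi = Math.max(hi, v); }
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++)
            out[i] = hi == lo ? 0.0 : (x[i] - lo) / (hi - lo);
        return out;
    }

    // log1p first, then min-max: compresses the heavy tail, so the bulk
    // of the distribution is no longer squashed near zero.
    // Assumes x >= 0 (log1p(0) = 0 is well-defined).
    static double[] logMinMax(double[] x) {
        double[] logged = new double[x.length];
        for (int i = 0; i < x.length; i++) logged[i] = Math.log1p(x[i]);
        return minMax(logged);
    }

    public static void main(String[] args) {
        double[] x = {1, 2, 3, 5, 8, 13, 10000}; // one extreme value, Pareto-style
        System.out.println(java.util.Arrays.toString(minMax(x)));
        System.out.println(java.util.Arrays.toString(logMinMax(x)));
    }
}
```

With the plain version, every value except the outlier lands below 0.002; after the log transform the moderate values are spread out again, which tends to suit distance-based and gradient-trained learners better.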
Does anybody know if there is an implementation of a Pairwise Coupling (PC)
meta-learner that can be applied to binary learners for multi-class
problems? I know PC is embedded in the SMO learner, and that there is a
MultiClassClassifier that uses coding methods, but I need to be able to
apply PC to all of WEKA's probabilistic classifiers.
Any help or advice would be greatly appreciated.
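In case it helps as a starting point: the coupling step itself (Hastie and Tibshirani's pairwise coupling) is independent of the base learner. Given pairwise estimates r[i][j] ~ P(class i | class i or j) from the one-vs-one binary classifiers, an iterative scaling update recovers a single probability vector over all classes. A minimal sketch with equal pair weights; the r values in main are made up for illustration:

```java
// Minimal pairwise coupling (Hastie & Tibshirani): given pairwise
// probabilities r[i][j] ~ P(class i | class i or j), find a class
// probability vector p by the iterative update
//   p_i <- p_i * (sum_{j != i} r[i][j]) / (sum_{j != i} p_i / (p_i + p_j))
// followed by renormalization. Pair weights n_ij are taken as equal.
public class PairwiseCoupling {

    static double[] couple(double[][] r, int iters) {
        int k = r.length;
        double[] p = new double[k];
        java.util.Arrays.fill(p, 1.0 / k); // start from the uniform distribution
        for (int t = 0; t < iters; t++) {
            for (int i = 0; i < k; i++) {
                double num = 0, den = 0;
                for (int j = 0; j < k; j++) {
                    if (j == i) continue;
                    num += r[i][j];              // evidence for class i
                    den += p[i] / (p[i] + p[j]); // current model's prediction
                }
                p[i] *= num / den;
            }
            // Renormalize so p stays a probability distribution.
            double s = 0;
            for (double v : p) s += v;
            for (int i = 0; i < k; i++) p[i] /= s;
        }
        return p;
    }

    public static void main(String[] args) {
        // Hypothetical pairwise outputs for 3 classes; r[j][i] = 1 - r[i][j].
        double[][] r = {
            {0,   0.9, 0.8},
            {0.1, 0,   0.6},
            {0.2, 0.4, 0  }
        };
        System.out.println(java.util.Arrays.toString(couple(r, 100)));
    }
}
```

To turn this into a meta-learner, you would train k(k-1)/2 binary classifiers on the class pairs, fill r from their probability outputs at prediction time, and call couple() per instance; the coupling handles the (usually inconsistent) pairwise estimates gracefully.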