I can run it on Win2k Client.
I used the following command:
C:\DJFHQ\Info_Extraction\Weka>java -classpath weka-3-2-3\weka-3-2-3\weka.jar
I have WekaMetal.jar in the CLASSPATH. I also used WinZip to "unzip" the JAR
files for Weka 3.2.3 and WekaMetal after downloading them.
My version of Java is the following:
java version "1.3.0_02"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.3.0_02)
Java HotSpot(TM) Client VM (build 1.3.0_02, mixed mode)
PS: I also noticed a possible typo in your quoted string, which maybe should be
"G:\Program Files\Weka-3-2-3\weka.jar", i.e. the "-3" is missing.
From: Chris Bacon [mailto:firstname.lastname@example.org]
Sent: Saturday, 17 August 2002 10:29 AM
Subject: [Wekalist] WekaMetal and Windows?
Has anyone been able to run WekaMetal on Windows? I've tried on
Server from a command prompt where I get this:
G:\Program Files\WekaMetal>java -jar WekaMetal.jar:"G:\Program
Exception in thread "main" java.util.zip.ZipException: The filename, directory
name, or volume label syntax is incorrect
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(Unknown Source)
at java.util.jar.JarFile.<init>(Unknown Source)
at java.util.jar.JarFile.<init>(Unknown Source)
I've played around with the classpath, but it didn't help. I've also tried it
from within IBM WebSphere Application Developer, where it gets an IO error
because it can't find the cache files (ap.cache and dc.cache), even though
they're both in the same directory as the WekaMetal.jar file.
Any help would be appreciated.
I am applying ID3 to a medical data set. Is there a way to get Weka to output
the results of the ID3 algorithm graphically, so that the specialist can
interpret the results easily?
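One route that may help: Id3 only prints its tree as text (as far as I can tell
it does not implement the Drawable interface, so the tree visualizer is not
offered for it), but J48 does, and its graph() output can be displayed with
Weka's TreeVisualizer. A rough sketch, assuming 3.4-style package names
(weka.classifiers.trees.J48 and weka.gui.treevisualizer.*; older releases have
J48 under weka.classifiers.j48):

    import java.awt.BorderLayout;
    import java.io.FileReader;
    import javax.swing.JFrame;
    import weka.core.Instances;
    import weka.classifiers.trees.J48;
    import weka.gui.treevisualizer.PlaceNode2;
    import weka.gui.treevisualizer.TreeVisualizer;

    public class ShowTree {
        public static void main(String[] args) throws Exception {
            Instances data = new Instances(new FileReader(args[0])); // ARFF file
            data.setClassIndex(data.numAttributes() - 1);
            J48 tree = new J48();              // a decision tree that is Drawable
            tree.buildClassifier(data);
            TreeVisualizer tv =
                new TreeVisualizer(null, tree.graph(), new PlaceNode2());
            JFrame jf = new JFrame("Decision tree");
            jf.getContentPane().add(tv, BorderLayout.CENTER);
            jf.setSize(800, 600);
            jf.setVisible(true);
            tv.fitToScreen();
        }
    }

In the Explorer the same thing is available by right-clicking a J48 result and
choosing "Visualize tree".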
I'm running Naive Bayes and I'd like to know whether it is possible to inspect
the probability tables somehow. When I right-click the Naive Bayes entry in the
result list of the Explorer, I am not allowed to choose the "Visualize tree" option.
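"Visualize tree" is only offered for classifiers that implement the Drawable
interface, which Naive Bayes does not, but the model's text output (the same
text shown in the Classifier output panel) already contains the probability
tables. A small sketch of getting at it from code, assuming the 3.4-style
package weka.classifiers.bayes.NaiveBayes (older releases have
weka.classifiers.NaiveBayes):

    import java.io.FileReader;
    import weka.core.Instances;
    import weka.classifiers.bayes.NaiveBayes;

    public class DumpNaiveBayes {
        public static void main(String[] args) throws Exception {
            Instances data = new Instances(new FileReader(args[0])); // ARFF file
            data.setClassIndex(data.numAttributes() - 1);
            NaiveBayes nb = new NaiveBayes();
            nb.buildClassifier(data);
            System.out.println(nb);  // prints the class priors and the
                                     // per-attribute conditional distributions
        }
    }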
I have written a classification scheme in Java using the Weka-library.
Now, I want to use the 'Experimenter' for doing some experiments
with this new classification scheme.
(Up till now I have written my own experiments in Java, but the
Experimenter seems like a better option.)
In the README file it is indicated that the file
GenericObjectEditor.props is the place to be. I thus copied this
file to my home directory and I added ...
monotone.MinMaxExtension,\ #line added
monotone.OSDL #line added
This works, in the sense that in the Experimenter GUI
I can now choose 'MinMaxExtension' and 'OSDL',
but upon choosing one of these I get the error message
'could not create an example of weka.MinMaxExtension from the
current classpath' ...
I have to say that
1) I find it strange that the 'monotone' part is cut from the class name,
and that 'weka' is prefixed ...
2) the class 'MinMaxExtension' is part of the package 'monotone',
i.e. the first line of 'MinMaxExtension.java' is 'package monotone;'
3) the file 'MinMaxExtension.class' is in a directory called
$SOMETHING/Source/monotone, and this directory is part of my
classpath (set in my .bashrc).
Any hints on how to proceed are welcome. If possible I would like to
keep these files in the current package ....
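For comparison, a minimal configuration along the following lines should let the
GenericObjectEditor build a package-qualified class, assuming the classifiers
are listed under the weka.classifiers.Classifier key as in the stock props file.
The important detail is that the classpath entry has to be the directory that
contains monotone/ (here $SOMETHING/Source), not the monotone directory itself,
because the class is declared as 'package monotone;':

    # ~/GenericObjectEditor.props (excerpt; the placeholder line stands for
    # whatever entries were already there)
    weka.classifiers.Classifier=\
     <existing entries>,\
     monotone.MinMaxExtension,\
     monotone.OSDL

    # .bashrc
    export CLASSPATH=$SOMETHING/Source:$CLASSPATH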
I want to generate an artificial instance in the following way:
1. The attributes in the given dataset are assumed independent.
2. For a nominal attribute, I compute the probability of cccurrence of
each distinct value in its domain and generate values for the new
instance based on this distribution.
Does Weka have such a class to generate instances?
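As far as I know there is no stock Weka class that resamples an existing dataset
attribute-by-attribute like this (the weka.datagenerators classes generate data
from synthetic models instead), but it only takes a few lines. A sketch,
assuming the pre-3.6 API where weka.core.Instance is a concrete class; the
class name IndependentSampler is just made up:

    import java.util.Random;
    import weka.core.Attribute;
    import weka.core.Instance;
    import weka.core.Instances;

    public class IndependentSampler {
        // Draws one artificial instance, treating every nominal attribute as
        // independent and sampling each value from its empirical distribution
        // in 'data'. Non-nominal attributes are left as missing values.
        public static Instance sample(Instances data, Random rand) {
            Instance inst = new Instance(data.numAttributes()); // all missing
            inst.setDataset(data);
            for (int i = 0; i < data.numAttributes(); i++) {
                Attribute att = data.attribute(i);
                if (!att.isNominal()) {
                    continue;
                }
                int[] counts = data.attributeStats(i).nominalCounts;
                int total = 0;
                for (int v = 0; v < counts.length; v++) {
                    total += counts[v];
                }
                double r = rand.nextDouble() * total; // point in [0, total)
                int value = counts.length - 1;
                double cum = 0;
                for (int v = 0; v < counts.length; v++) {
                    cum += counts[v];
                    if (r < cum) {
                        value = v;
                        break;
                    }
                }
                inst.setValue(i, (double) value);
            }
            return inst;
        }
    }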
Btw, thanks for your reply!
I have a rather big matrix in CSV format that I want to load into WEKA,
but the software never seems to finish reading the file....
The file is 780 MB and represents a square matrix of distances of
Will it finally load if I wait enough?
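It may be running out of memory rather than still parsing: the Explorer keeps
the whole dataset in main memory, and the JVM's default heap is nowhere near
780 MB. Starting the JVM with a larger heap via -Xmx sometimes makes the
difference (the 1200m below is only an example value):

    java -Xmx1200m -classpath weka.jar weka.gui.explorer.Explorer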
I am a newbie here. I would like to ask a few questions. Could anyone please give me some hints?
1. What is the difference between a test set and a validation set? I know the
test set is unlabeled, but if so, how can we test against the true values?
2. Normally, we have one big dataset. We divide it into two sets: 66% for
training and the rest for testing.
However, in this KDD 99 dataset (network intrusion detection data), we are given
separate sets, and the test set is ABSOLUTELY INDEPENDENT from the training set
(it contains new attacks that do not appear in the training set, and also
different frequencies of the attacks):
+ Set 1: 5 million labeled records: they call it the "training set"
+ Set 2: 2 million unlabeled records: they call it the "test set unlabeled"
+ Set 3: 400,000 labeled records: they call it the "test set labeled"
So what I will do with this arrangement is:
+ Supervised learning:
get 50,000 training records from Set 1 (keep the labels to train the model),
and 10,000 testing records from Set 3 (first I hide the real labels, try
to predict the labels using the model, then compare with the real labels).
+ Unsupervised learning:
get 50,000 training records from Set 1 (ignore the labels and train the
model), and 10,000 testing records from Set 3 (first I hide the real
labels, try to predict the labels using the model, then compare with
the real labels).
So if I want to use the above two approaches, is this the right way?
3. If I build 10 decision trees for a dataset and they do not agree 100% with each other, how do we decide on the output? Say 4 of them classify an input as A, 3 of them say B, and 2 of them say C. Would the final answer be A, B, or C?
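On question 3, the usual rule is a plain majority (plurality) vote, so with 4
votes for A, 3 for B and 2 for C the combined answer would be A. A tiny tallying
sketch (the class name is just illustrative; newer Weka versions also ship a
Vote meta-classifier that does this kind of combination, though I am not sure
which releases include it):

    public class MajorityVote {
        // predictions[i] is the class index predicted by tree i; returns the
        // class index with the most votes (ties broken by the lower index).
        public static int vote(int[] predictions, int numClasses) {
            int[] tally = new int[numClasses];
            for (int i = 0; i < predictions.length; i++) {
                tally[predictions[i]]++;
            }
            int best = 0;
            for (int c = 1; c < numClasses; c++) {
                if (tally[c] > tally[best]) {
                    best = c;
                }
            }
            return best;
        }
    }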