I can run it on Win2k Client.
I used the following command:
C:\DJFHQ\Info_Extraction\Weka>java -classpath weka-3-2-3\weka-3-2-3\weka.jar
I have WekaMetal.jar in the CLASSPATH. I also used WinZip to "unzip" the JAR
files for Weka 3.2.3 and WekaMetal after downloading them.
My version of Java is the following:
java version "1.3.0_02"
Java(TM) 2 Runtime Environment, Standard Edition (build 1
Java HotSpot(TM) Client VM (build 1.3.0_02, mixed mode)
PS I also noticed a possible typo in your quoted string, which should maybe be
"G:\Program Files\Weka-3-2-3\weka.jar", i.e. the "-3" is missing.
From: Chris Bacon [mailto:email@example.com]
Sent: Saturday, 17 August 2002 10:29 AM
Subject: [Wekalist] WekaMetal and Windows?
Has anyone been able to run WekaMetal on Windows? I've tried on
Server from a command prompt where I get this:
G:\Program Files\WekaMetal>java -jar WekaMetal.jar:"G:\Program
Exception in thread "main" java.util.zip.ZipException: The filename,
directory name, or volume label syntax is incorrect
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(Unknown Source)
at java.util.jar.JarFile.<init>(Unknown Source)
at java.util.jar.JarFile.<init>(Unknown Source)
I've played around with the CLASSPATH, but it didn't help. I've also tried
from within IBM WebSphere Application Developer, where it gets an IO error
because it can't find the cache files (ap.cache and dc.cache), even though
they're both in the same directory as the WekaMetal.jar file.
Any help would be appreciated.
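For what it's worth, `java -jar` takes exactly one jar file, so everything after the colon is treated as part of the file name (and any classpath settings are ignored); on Windows the classpath separator is ";" rather than ":". A minimal sketch of building a platform-correct classpath string, using the paths from this thread as examples (the main class to run is whatever WekaMetal's documentation names, not shown here):

```java
import java.io.File;

public class ClasspathDemo {

    // Join classpath entries with the platform's separator:
    // ";" on Windows, ":" on Unix. Hard-coding ":" on Windows
    // produces exactly the kind of malformed path the JVM rejects.
    static String joinClasspath(String[] entries) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < entries.length; i++) {
            if (i > 0) sb.append(File.pathSeparator);
            sb.append(entries[i]);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String cp = joinClasspath(new String[] {
            "WekaMetal.jar",
            "G:\\Program Files\\Weka-3-2-3\\weka.jar"  // example path
        });
        // Then run: java -classpath "<cp>" <MainClass>
        System.out.println(cp);
    }
}
```

You would then pass the joined string to `java -classpath` instead of using `-jar`.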
I am applying ID3 to a medical data set and was wondering if there is a way to
get Weka to output the results of the ID3 algorithm graphically, so that the
specialist can interpret the results easily?
I'm running Naive Bayes and I'd like to know: is it possible to inspect the
probability tables somehow? When I right-click the Naive Bayes entry in the
result list of the Explorer, I am not allowed to choose the "visualize tree" option.
I have written a classification scheme in Java using the Weka-library.
Now, I want to use the 'Experimenter' for doing some experiments
with this new classification scheme.
(Up till now I wrote my own experiments in Java, but the
Experimenter seems like a better option.)
In the README file it is indicated that the file
GenericObjectEditor.props is the place to be. I thus copied this
file to my home directory and I added ...
monotone.MinMaxExtension,\ #line added
monotone.OSDL #line added
This works, in the sense that in the Experimenter GUI
I can now choose 'MinMaxExtension' and 'OSDL',
but upon choosing one of these I get the error message
'could not create an example of weka.MinMaxExtension from the
current classpath' ...
I have to say that
1) I find it strange that the 'monotone' part is cut from the classname,
and that 'weka' is prefixed ...
2) the class 'MinMaxExtension' is part of the package 'monotone'.
I.e. the first line of 'MinMaxExtension.java' is 'package monotone;'
3) the file 'MinMaxExtension.class' is in a directory called
$SOMETHING/Source/monotone and this directory is part of my
classpath; in my .bashrc I have
Any hints on how to proceed are welcome. If possible I would like to
keep these files in the current package ....
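Two things worth checking, offered as guesses rather than a diagnosis. First, in Java `.properties` files a "#" starts a comment only at the beginning of a line, so the inline "#line added" text becomes part of the property value and can mangle the parsed class list. Second, a classpath entry for a class in package `monotone` must point at the package root (i.e. $SOMETHING/Source), not at $SOMETHING/Source/monotone itself. A cleaned-up fragment might look like the following (the `weka.classifiers.Classifier` key is a guess at the relevant list; check the keys in your copy of GenericObjectEditor.props, and "..." stands for the entries already there):

```properties
# GenericObjectEditor.props -- keep added entries free of inline comments;
# "#" only starts a comment at the very beginning of a line.
weka.classifiers.Classifier=\
 ...,\
 monotone.MinMaxExtension,\
 monotone.OSDL
```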
I'm using the Weka Explorer, and I want to discretize the class attribute (it is numeric). I'm applying the unsupervised Discretize filter, and it works fine on all the attributes except the class attribute. When applied to the class attribute it doesn't make any changes, leaving the class numeric, and after saving, the file doesn't change either.
Is there any way to discretize class attribute?
Hi all, I need some help with the dataset output.
After discretization, I need to output the discretized features. For example, a feature ranging from 1 to 100 could be divided into 1-15, 16-28 and 29-100. Can I output the features as 1, 2 and 3?
Stanley, Yemin Shi
I don't quite understand the differences between IBk and IB1. I have
found this in the wekalist:
"There are a couple of differences. The biggest one is probably that IBk
will keep extending the list of neighbours when several instances are
equally far away. This happens quite frequently on datasets with
nominal attributes (e.g. more than one neighbour might be used for
prediction if k=1). IB1 doesn't do this and uses the first nearest
neighbour it finds."
Thinking about nominal attributes, I understand that, in IBk, you choose
"several instances if they are equally far away" and then take the
majority class of those instances, while, in the same case, IB1 only chooses
"the first nearest neighbour it finds"... but if you had three instances
equally far away, two of them belonging to class A and the other belonging
to class B, which instance would you choose? All three instances are
equally far away, so I suppose that IBk would choose all three and the class
assigned would be class A, but in IB1... is the choice of the first nearest
neighbour a random choice? Is this the difference?
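The tie case in the question can be sketched directly. This is a toy illustration of the two policies described in the quoted passage, not Weka's actual code; note that under the "first found" policy the result is determined by the order of the training instances, not by a random draw:

```java
public class TieDemo {

    // IB1-style policy: keep the FIRST instance found at minimum
    // distance. The strict "<" never replaces an earlier tie, so the
    // outcome depends on instance order, not on randomness.
    static char firstNearest(double[] dist, char[] cls) {
        int best = 0;
        for (int i = 1; i < dist.length; i++) {
            if (dist[i] < dist[best]) best = i;
        }
        return cls[best];
    }

    // IBk-style policy (k=1): extend the neighbour list to ALL
    // instances at minimum distance and take the majority class
    // (only classes 'A' and 'B' in this toy example).
    static char majorityOfTies(double[] dist, char[] cls) {
        double min = dist[0];
        for (int i = 1; i < dist.length; i++) {
            if (dist[i] < min) min = dist[i];
        }
        int a = 0, b = 0;
        for (int i = 0; i < dist.length; i++) {
            if (dist[i] == min) {
                if (cls[i] == 'A') a++; else b++;
            }
        }
        return a >= b ? 'A' : 'B';
    }

    public static void main(String[] args) {
        // Three equidistant instances: B stored first, then two As.
        double[] dist = {1.0, 1.0, 1.0};
        char[] cls = {'B', 'A', 'A'};
        System.out.println(firstNearest(dist, cls));   // -> B
        System.out.println(majorityOfTies(dist, cls)); // -> A
    }
}
```

So in this sketch the two policies really can disagree on the same query: the first-found rule answers B (because B happens to be stored first), while the extend-on-ties rule answers A (the majority among the three tied neighbours).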