Well, I'm calling the SMOTE filter (Weka) in my algorithm in MOA. It works fine
for some datasets, but for some specific datasets it throws an exception:
java.lang.IllegalArgumentException: Comparison method violates its general contract!
at java.util.TimSort.mergeLo(Unknown Source)
at java.util.TimSort.mergeAt(Unknown Source)
at java.util.TimSort.mergeForceCollapse(Unknown Source)
at java.util.TimSort.sort(Unknown Source)
at java.util.TimSort.sort(Unknown Source)
at java.util.Arrays.sort(Unknown Source)
at java.util.Collections.sort(Unknown Source)
I went through some Java forums, which gave me some pointers,
but I don't know how to work around this problem. It would be great if
someone could help me out.
----- Abhijeet Godase.
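For what it's worth, this exception usually means a Comparator that breaks the total-order contract TimSort (Java 7+) enforces: sign symmetry and transitivity must hold for every pair of inputs. With a distance-based filter like SMOTE, a plausible culprit is a distance comparison that misbehaves on NaN (e.g. distances computed from missing values). A minimal, self-contained sketch of the failure mode, in plain Java rather than the actual SMOTE/Weka code:

```java
// Sketch of a comparator that violates the general contract on NaN,
// versus the safe Double.compare. (Plain Java; not the SMOTE source.)
public class ComparatorContract {

    // Broken: NaN makes every comparison fall through to "equal", so
    // broken(NaN, 1) == 0 and broken(NaN, 2) == 0 while broken(1, 2) < 0.
    // That is intransitive -- exactly what TimSort rejects at runtime.
    static int broken(double a, double b) {
        if (a < b) return -1;
        if (a > b) return 1;
        return 0;
    }

    // Safe: Double.compare imposes a total order (NaN sorts last).
    static int safe(double a, double b) {
        return Double.compare(a, b);
    }

    public static void main(String[] args) {
        System.out.println(broken(Double.NaN, 1.0)); // 0 (claims "equal")
        System.out.println(safe(Double.NaN, 1.0));   // positive (NaN sorts last)
    }
}
```

If the comparator lives in third-party code you cannot patch, running the JVM with `-Djava.util.Arrays.useLegacyMergeSort=true` restores the pre-Java-7 merge sort, which tolerated such comparators; cleaning NaN/missing values out of the data before filtering is the cleaner fix.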
The number of support vectors is not available in the LibSVM package in Weka. Please help me: how can I get the number of support vectors for the different SVM types, such as one-class SVM, C-SVC, and so on?
In the following code I initialize my classifier with K2, but I can't find how
to set the estimator in the code using setScoreType(SelectedTag).
Can you provide me the details to specify it?
public static void main(String[] args)
Instances train =
Instances test =
train.setClassIndex(train.numAttributes() - 1);
test.setClassIndex(test.numAttributes() - 1);
BayesNet myBayes = new BayesNet();
K2 myK2 = new K2();
myK2.setScoreType( );   // <-- How to declare it?
Evaluation eval = new Evaluation(train);
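Not an authoritative answer, but in Weka a SelectedTag pairs an int constant with the option's array of legal tags; if memory serves, for the Bayes-net search algorithms it is something along the lines of `myK2.setScoreType(new SelectedTag(Scoreable.BAYES, K2.TAGS_SCORE_TYPE))` — do check the constant names against your Weka version's Javadoc. The mechanism itself, sketched with mock names rather than weka.core classes:

```java
// Mock of the Tag/SelectedTag idea (NOT weka.core): an int id is
// validated against a fixed list of legal tags and mapped to its name.
public class SelectedTagSketch {

    // Hypothetical score-type names, mirroring the usual Bayes-net ones.
    static final String[] TAGS_SCORE_TYPE = {"BAYES", "BDeu", "MDL", "ENTROPY", "AIC"};

    // Like constructing a SelectedTag: reject ids outside the tag list.
    static String select(int id, String[] tags) {
        if (id < 0 || id >= tags.length) {
            throw new IllegalArgumentException("unknown tag id " + id);
        }
        return tags[id];
    }

    public static void main(String[] args) {
        System.out.println(select(0, TAGS_SCORE_TYPE)); // BAYES
    }
}
```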
From: Noushin Rezapour Asheghi
Sent: 12 July 2012 12:52
Subject: liblinear feature weights
I have used liblinear in Weka for classification, and now I want to see the feature weights in the output too. The reason is that I want to know which features are important for which classes. Please help me with this problem.
Many thanks in advance,
Thanks, Thomas, for the reply. I was reading the paper, Instance-based
learning algorithms by Aha and Kibler (1991), which the Weka IB1 and IBk
classifiers implement. My impression was that in the paper, IB1, IB2, and
IB3 refer to three different instance-based algorithms, and my guess was
that I could specify the number of neighbors, k, for each of these algorithms.
In other words, the numbers in the names IB1, IB2, and IB3 in the paper
do not seem to correspond to the number of neighbors I choose, but rather
denote the three variations. So, which algorithm could the Weka IBk
implementation be? Maybe I should look at the code long enough to figure it out.
> you can specify the number of Nearest Neighbours, which choice exactly
> makes you use IB1, IB2 etc.
> 2009/5/13 Li Yang <lyshane(a)umich.edu>
>> Dear Weka experts,
>> I was just wondering whether the IBk classifier implements the IB1, IB2, or
>> IB3 algorithm in Aha and Kibler's article, Instance-based learning algorithm
>> Thank you in advance for your help.
>> Wekalist mailing list
>> Send posts to: Wekalist(a)list.scms.waikato.ac.nz
> Department of Knowledge Engineering
> Faculty of Humanities & Sciences
> Maastricht University
I am using the Weka 3.6 API programmatically for association rule mining with Apriori. Essentially, I am doing market
basket analysis for an electronics store. I have 7 attributes, each with the value "Y" or "N" depending
on whether an item is present in a transaction.
An example instance is
"Y", "Y", "Y", "N","N", "N", "N"
My problem is that rules seem to be dominated by the negatively correlated attributes, which is understandable.
Here are the top 5 rules I got
1. hdtv=N tv_stand=N 9 ==> lap_top=Y 9 conf:(1)
2. lap_top=Y hdtv=N 9 ==> tv_stand=N 9 conf:(1)
3. anti_virus_software=Y connector_cable=N 8 ==> lap_top=Y 8 conf:(1)
4. anti_virus_software=Y tv_stand=N 8 ==> lap_top=Y 8 conf:(1)
5. hdtv=N connector_cable=N 8 ==> lap_top=Y 8 conf:(1)
I am more interested in rules based on positive correlation. In the result above, I am not interested in rules 1, 2, and 5.
How do I get rid of them? Would it help to replace "N" with a missing value? And how do I specify missing values when I create instances?
I would appreciate any help.
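One common way to suppress negatively correlated rules is to rank or filter by lift rather than confidence (Weka's Apriori exposes a metric-type option for this; check the Javadoc for your version). A self-contained sketch of the two metrics, using hypothetical counts patterned on rule 1 above:

```java
// Confidence vs. lift for an association rule A -> B, computed from
// plain transaction counts (no Weka dependency).
public class RuleMetrics {

    // confidence(A -> B) = support(A and B) / support(A)
    static double confidence(int supAB, int supA) {
        return (double) supAB / supA;
    }

    // lift(A -> B) = confidence(A -> B) / P(B); values near 1 mean the
    // antecedent adds almost nothing beyond B's base rate.
    static double lift(int supAB, int supA, int supB, int n) {
        return confidence(supAB, supA) / ((double) supB / n);
    }

    public static void main(String[] args) {
        // Hypothetical counts for rule 1: hdtv=N tv_stand=N (9) ==> lap_top=Y (9),
        // assuming lap_top=Y appears in 9 of, say, 10 transactions overall.
        System.out.println(confidence(9, 9));  // 1.0
        System.out.println(lift(9, 9, 9, 10)); // ~1.11: confidence 1, yet barely above chance
    }
}
```

With these (made-up) counts the rule has perfect confidence but a lift barely above 1, which is exactly why confidence-ranked output gets swamped by the frequent "N" values.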
I can see that the weka.core.ContingencyTables class calculates various
statistics about a dataset (e.g. the conditional entropy of row=feature given a
column=class).
Is there a class that would use ContingencyTables to just output
these statistics without running any classifier algorithm (e.g. J48, which
uses conditional entropies to calculate information gain)?
I am a Java rookie, so I would be grateful for any help, big or little!
Thanks, Harri S
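I cannot point to a ready-made Weka wrapper class, but the quantity itself is easy to compute directly from a count matrix. The sketch below mirrors, in base 2, what a conditional entropy over the rows of a contingency table gives, with no classifier involved; the method and the example matrix are illustrative, not Weka API:

```java
// Conditional entropy H(columns | rows) from a raw count matrix,
// in bits (base-2 logs). Plain Java, no Weka dependency.
public class ContingencySketch {

    static double entropyConditionedOnRows(double[][] counts) {
        double total = 0, h = 0;
        for (double[] row : counts) {
            for (double c : row) total += c;
        }
        for (double[] row : counts) {
            double rowSum = 0;
            for (double c : row) rowSum += c;
            for (double c : row) {
                // Each cell contributes P(cell) * -log2 P(col | row).
                if (c > 0) h -= c / total * log2(c / rowSum);
            }
        }
        return h;
    }

    static double log2(double x) {
        return Math.log(x) / Math.log(2);
    }

    public static void main(String[] args) {
        // Feature values as rows, classes as columns; a perfectly
        // predictive feature gives conditional entropy 0.
        System.out.println(entropyConditionedOnRows(new double[][] {{5, 0}, {0, 5}}));
    }
}
```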
I'm reading the learned model from an
ObjectInputStream and creating a Classifier
from it. Then I'm creating an Instance with a, b, c as
variables and setting y=0, which is the class attribute.
Then I'm running:
clsLabel = aClassifier.classifyInstance(anInstance);
If I use LinearRegression, this works fine. But
if I use a MultilayerPerceptron classifier, I
get an error:
Exception in thread "main"
weka.core.UnassignedDatasetException: Instance
doesn't have access to a dataset!
Do I need to do something different for MultilayerPerceptron?
Can anyone tell me where I'm going wrong?
I am a postgraduate student, and my research focuses on modeling for chemical property prediction. I built a model in Weka using the Functional Trees (FT) algorithm, but I cannot understand how it predicts an unclassified example.
Here is my FT model
FT Inner tree
N0#1 <= 0.38772: Class=1
N0#1 > 0.38772: Class=0
Class 0 :
-0.43 + [nCIC]*0.44 + [nN]*0.36 + [nS]*0.34 + [nX]*0.18 + [SRW10]*0 + [ATS3p]*1.7 - [MATS3m]*1.65 + [GATS3m]*25.69 + [C-001]*0.09 - [C-007]*0.57 - [C-040]*0.19 + [O-061]*0.25 + [Cl-089]*0.19
Class 1 :
0.43 - [nCIC]*0.44 - [nN]*0.36 - [nS]*0.34 - [nX]*0.18 - [SRW10]*0 - [ATS3p]*1.7 + [MATS3m]*1.65 - [GATS3m]*25.69 - [C-001]*0.09 + [C-007]*0.57 + [C-040]*0.19 - [O-061]*0.25 - [Cl-089]*0.19
One thing I am confused about is how to use the split value 0.38772. Until now I have taken it as a number related to a probability; however, there are two linear functions for the two classes, and I really do not know how to predict my examples with this functional tree model.
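My (hedged) reading of FT output like this: the inner-tree test N0#1 <= 0.38772 is a threshold on the attribute N0#1 itself, not on a probability; it only routes the example down one branch. The two linear functions are per-class score functions, and LogitBoost-style functional trees typically turn such scores into class probabilities with the multiclass logistic (softmax). Since the two printed functions here are exact negatives of each other, that reduces to a plain logistic of one score. A sketch with a hypothetical score value:

```java
// Turning two FT leaf scores into class probabilities via softmax.
// f0 below is hypothetical; in the model above the scores would come
// from the "Class 0" and "Class 1" linear functions evaluated on the
// compound's descriptor values.
public class FtLeafSketch {

    static double[] softmax(double f0, double f1) {
        double e0 = Math.exp(f0), e1 = Math.exp(f1);
        return new double[] { e0 / (e0 + e1), e1 / (e0 + e1) };
    }

    public static void main(String[] args) {
        // The two printed functions are exact negatives, so f1 == -f0 and
        // P(class 0) simplifies to 1 / (1 + exp(-2 * f0)).
        double f0 = 0.8; // hypothetical Class 0 score
        double[] p = softmax(f0, -f0);
        System.out.printf("P(class0)=%.3f P(class1)=%.3f%n", p[0], p[1]);
    }
}
```

The predicted class is then the one with the larger probability; how FT combines the inner-tree constant prediction with the leaf functions is worth confirming against the FT paper (Gama, 2004) or the Weka source.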