Monday, October 25, 2010
Monday, October 11, 2010
Changing the output file name in C++
#include <cstdio>
#include <string>

int setindex;   // set by the surrounding cross-validation loop
int caseindex;
int C;          // SVM cost parameter
int maxIter;    // iteration cap
FILE * pFile;
std::string part1 = "iccv_CV_set";
std::string part2 = "_case";
std::string part3 = "_C";
std::string part4 = "_iter";
std::string part5 = ".txt";
std::string result;
char sset[40]; // enough to hold all numbers up to 64 bits
char scase[40];
char sc[40];
char siter[40];
sprintf(sset, "%i", setindex);
sprintf(scase, "%i", caseindex);
sprintf(sc, "%i", C);
sprintf(siter, "%i", maxIter);
result = part1 + sset + part2 + scase + part3 + sc + part4 + siter + part5;
pFile = fopen(result.c_str(), "w");
if (pFile != NULL) {     // fopen can fail; writing through a null FILE* is undefined
    fprintf(pFile, "%i\n", maxIter);
    fclose(pFile);
}
.
.
.
Submitting a job with multiple inputs
The following job script runs the algorithm 5 times, varying one input parameter over the values 1-5:
#!/bin/sh -login
#PBS -l walltime=00:10:00,nodes=1,mem=2gb
#PBS -M bucakser@msu.edu
#PBS -m abe
#PBS -t 1-5
#PBS -j oe
#PBS -V
cd ${PBS_O_WORKDIR}
./partial kernel.txt labels.txt alphas_iccv_500.txt 100 ${PBS_ARRAYID} 5
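The #PBS -t 1-5 directive makes this a job array: each of the five tasks gets a distinct ${PBS_ARRAYID}. You can preview the five commands the array would run with a plain shell loop (no PBS needed; the loop below just echoes them):

```shell
#!/bin/sh
# Simulate the PBS_ARRAYID expansion of "#PBS -t 1-5" locally:
# each iteration prints the command one array task would execute.
for id in 1 2 3 4 5; do
    echo "./partial kernel.txt labels.txt alphas_iccv_500.txt 100 $id 5"
done
```

Each array task also gets its own output file and job-id suffix, so the runs do not clobber each other.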
Thursday, September 23, 2010
Wednesday, May 26, 2010
How does scaling the kernel matrix affect the SVM outputs?
I am writing this down so I don't forget; I'll translate it at some point:
"Hi Pavan,
I was talking you about this baseline which performs different when the kernels were scaled. I found out that the problem was based on the SVM part. When the kernels are scaled so that the trace=1, the number of support vectors returned decreases. However when I increase C parameter, the output matches the old output again. So, the conclusion is that scaling the kernels corresponds to (proportionally) scaling C.
I think the problem was using a small C (C=10). In this case SVM just optimizes the first component (regularizer on w or f(w)). So kernel selection does not play a role in the cost function. This is also reflected to MKL formulation. Since they optimize error + regularizer on p, and changing the weights do not make any change on the error, MKL chooses p in a way that sum_of_p is decreased. that is why p approaches 0.
JFI
YGZ
"
"Hi Pavan,
I was talking you about this baseline which performs different when the kernels were scaled. I found out that the problem was based on the SVM part. When the kernels are scaled so that the trace=1, the number of support vectors returned decreases. However when I increase C parameter, the output matches the old output again. So, the conclusion is that scaling the kernels corresponds to (proportionally) scaling C.
I think the problem was using a small C (C=10). In this case SVM just optimizes the first component (regularizer on w or f(w)). So kernel selection does not play a role in the cost function. This is also reflected to MKL formulation. Since they optimize error + regularizer on p, and changing the weights do not make any change on the error, MKL chooses p in a way that sum_of_p is decreased. that is why p approaches 0.
JFI
YGZ
"