For an SVM with a precomputed Gram matrix, do the kernels need to be normalized to between 0 and 1?


I'm using scikit-learn for text classification in Python. My classifier is currently predicting the negative class for everything (which fooled me for a while, because it reported "75% accuracy" when 75% of the labels were negative), so I'm trying to figure out what's wrong.

Currently, I'm doing SVC(kernel='precomputed') and computing the Gram matrix manually before passing it to fit() and predict(). The entry $G_{ij}$ of the Gram matrix is the kernel $K(d_i, d_j)$, where $K$ denotes the kernel function and $d_i$ is the $i$th document.
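For context, here is a minimal sketch of that workflow. The data and the kernel (a plain dot product) are hypothetical stand-ins; the point is the shapes: fit() takes the square train-vs-train Gram matrix, while predict() takes a test-vs-train matrix.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical toy data: rows are document feature vectors (e.g. term counts).
X_train = np.array([[2.0, 0.0, 1.0],
                    [0.0, 3.0, 1.0],
                    [1.0, 1.0, 0.0],
                    [0.0, 0.0, 2.0]])
y_train = np.array([0, 1, 0, 1])
X_test = np.array([[1.0, 0.0, 1.0]])

def K(A, B):
    # Example kernel: plain dot product (linear kernel).
    return A @ B.T

# Train-time Gram matrix: G[i, j] = K(d_i, d_j) over training documents.
G_train = K(X_train, X_train)   # shape (n_train, n_train)
clf = SVC(kernel='precomputed')
clf.fit(G_train, y_train)

# Predict-time matrix: kernels between TEST and TRAIN documents.
G_test = K(X_test, X_train)     # shape (n_test, n_train)
pred = clf.predict(G_test)
```

A common source of all-one-class predictions with precomputed kernels is passing the wrong matrix at predict time (it must be test-rows by train-columns, not test-by-test).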

For my kernel function, the Gram matrix entries are not normalized, i.e. some are greater than 1. Do I need to apply kernel normalization

$$ K'(d_i, d_j) = \frac{K(d_i, d_j)}{\sqrt{K(d_i, d_i) \times K(d_j, d_j)}} $$

to get it between 0 and 1? Or do SVMs not care?
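That normalization (cosine normalization, which makes every diagonal entry exactly 1) can be applied directly to a square Gram matrix. A minimal sketch, with a made-up 2x2 matrix for illustration:

```python
import numpy as np

def normalize_gram(G):
    """Cosine-normalize a square Gram matrix:
    K'(i, j) = K(i, j) / sqrt(K(i, i) * K(j, j)),
    so that every diagonal entry becomes 1."""
    d = np.sqrt(np.diag(G))
    return G / np.outer(d, d)

# Illustrative Gram matrix: self-similarities 4 and 9, cross-similarity 2.
G = np.array([[4.0, 2.0],
              [2.0, 9.0]])
G_norm = normalize_gram(G)
# Off-diagonal becomes 2 / (sqrt(4) * sqrt(9)) = 1/3.
```

Note that for the predict-time test-vs-train matrix the same formula needs the self-similarities of both sets: divide entry $(i, j)$ by $\sqrt{K(t_i, t_i) \times K(d_j, d_j)}$, where $t_i$ is a test document and $d_j$ a training document.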

asked 56 secs ago
James Ko
