
Assuming you mean Support Vector Machines by "SVM", you may be idealizing them a bit.

SVMs have been around for almost two decades now, which is an eternity in the ML world, rather than infancy.

SVMs don't require the problem set to be linearly separable.

Please note that there's a myriad of robust, scalable SVM implementations -- SVMlight, HeroSVM, LIBSVM, liblinear... (the latter two also have wrappers in scikit-learn, a Python library mentioned in the OP).
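To make the point about those wrappers concrete, here is a minimal sketch of training an SVM through scikit-learn's SVC class (which wraps LIBSVM). The XOR-style toy data is hypothetical, chosen only because it is not linearly separable; the gamma and C values are illustrative, not tuned:

```python
import numpy as np
from sklearn.svm import SVC

# XOR pattern: famously not linearly separable in the input space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])

# An RBF kernel lets the LIBSVM-backed SVC fit it anyway
# (gamma/C are arbitrary illustrative values)
clf = SVC(kernel="rbf", gamma=1.0, C=10.0)
clf.fit(X, y)
acc = clf.score(X, y)
```

Swapping `SVC` for `LinearSVC` (the liblinear wrapper) gives the same interface for the linear case.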




Would only add that unless you're writing a PhD thesis on SVMs, don't write your own implementation. As Radim wrote, there are quite a few to choose from.


Unfortunately, the vast majority of scalable SVM implementations are licensed for non-commercial use only.


That's exactly my point. Most real-world problems (spam filtering, for example) are NOT linearly separable. Choosing a suitably expressive kernel should yield better results than ANNs, Bayesian methods, etc.
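A quick sketch of that kernel point, assuming scikit-learn: on a synthetic problem that is not linearly separable (concentric circles from `make_circles`, a stand-in for real data like spam features), a linear kernel plateaus near chance while an RBF kernel fits it well. The gamma value is an illustrative choice, not a tuned one:

```python
from sklearn.datasets import make_circles
from sklearn.svm import SVC

# Two concentric rings: no straight line separates the classes
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

# Same SVM machinery, two kernels; only the kernel choice differs
linear_acc = SVC(kernel="linear").fit(X, y).score(X, y)
rbf_acc = SVC(kernel="rbf", gamma=2.0).fit(X, y).score(X, y)

print(f"linear: {linear_acc:.2f}  rbf: {rbf_acc:.2f}")
```

The gap between the two accuracies is the kernel trick doing the work: the RBF kernel implicitly maps the rings into a space where they are separable.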

Like I said, I haven't worked (much) with SVMs but they really do seem like the future. Unfortunately, they are difficult to work with.

And finally, the first workable SVM algorithm was proposed in 1995 (not nearly the two decades you claim) and implemented several years later. SVMs are very much still in their infancy -- especially considering that not much work is being done on SVMs since ANNs are much (much) easier to work with.



