Dimensionality Reduction with the PCA Algorithm

There are a lot of good articles describing the theory of dimensionality reduction with various algorithms such as PCA, and some of them have really good examples (for instance this one: http://blog.yhathq.com/posts/image-classification-in-Python.html).
However, in order to apply it I wanted to develop an intuition: what does it mean, from a mathematical/machine standpoint, to reduce a 132342-dimensional space to, say, 2D? After several hours of playing around with the sklearn PCA implementation I came up with the following representation, which shows the 1st component of the 2-dimensional space:

This is how the machine sees the data. On the left are two untransformed input data samples. On the right are the same samples projected into 2D and then represented back in the 132342-D space using the 1st component only: simply the 1st element of the 2-element array multiplied by the 1st column of the so-called U matrix, which has 132342 elements in it.
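
Here is a minimal sketch of those two operations, assuming random stand-in data instead of the original samples (which are not included here): project with sklearn's PCA, then map a sample back to the original space from its 1st component alone.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
X = rng.rand(10, 132342)          # 10 samples in a 132342-dimensional space

pca = PCA(n_components=2)
Z = pca.fit_transform(X)          # each sample becomes a 2-element array

# "Represent back" using the 1st component only: the 1st element of each
# 2-element array times the 1st row of components_ (the U matrix), plus
# the mean that PCA subtracted before projecting.
X_back = np.outer(Z[:, 0], pca.components_[0]) + pca.mean_

print(Z.shape, X_back.shape)      # (10, 2) (10, 132342)
```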

As you can see, after the data points are projected into 2D there is a clear separation between the different data point types, which can then be used by a logistic regression algorithm.
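
A minimal sketch of that follow-up step, assuming synthetic labeled data (the two toy groups and their labels below are made up for illustration): the 2D PCA projection feeds straight into a logistic regression classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

rng = np.random.RandomState(0)
# two toy groups, offset so they separate after projection
X = np.vstack([rng.rand(20, 500), rng.rand(20, 500) + 1.0])
y = np.array([0] * 20 + [1] * 20)

model = make_pipeline(PCA(n_components=2), LogisticRegression())
model.fit(X, y)
print(model.score(X, y))   # separable toy data, so close to 1.0
```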

Modern Achievements and Prospects of Robotics and Artificial Intelligence

Andrew Ng, director of the Stanford Artificial Intelligence Laboratory, talks about modern achievements in and prospects of Artificial Intelligence. He devotes a significant part of the talk to artificial neural networks.

AI: Planning

AI is the process of finding appropriate actions for an agent. Therefore planning is, in some sense, the core of AI.

Problem solving is search over a state space. Given a state space and a problem description, it can find a path to a goal. These approaches work for a variety of environments, but only when the environment is deterministic and fully observable. In this approach, all planning is done ahead of time.
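
As an illustration (this example is mine, not from the lecture), planning ahead in a deterministic, fully observable world can be as simple as a breadth-first search over the state space:

```python
from collections import deque

def bfs_plan(start, goal, successors):
    """Return a list of actions leading from start to goal, or None."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan
        for action, next_state in successors(state):
            if next_state not in visited:
                visited.add(next_state)
                frontier.append((next_state, plan + [action]))
    return None

# Toy deterministic, fully observable world: states 0..4 on a line.
moves = lambda s: [("right", s + 1), ("left", s - 1)] if 0 <= s <= 4 else []
print(bfs_plan(0, 3, moves))   # ['right', 'right', 'right']
```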


AI: Representation and Logic

Representation is the agent's model of the world (it can be increasingly complex). Representation, built with the tools of logic, can be used by an agent to better model the world.

Propositional Logic

B (burglary occurring), E (earthquake), A (alarm), M (Mary calls police), J (John calls police)

Each symbol takes the value True or False.

(E v B) => A (Alarm is True whenever either Earthquake or Burglary is True)

A => (J ^ M) (when Alarm is True, both John and Mary are True)

J <=> M (biconditional: John calls when and only when Mary calls; John is equivalent to Mary)

J <=> !M (John is equivalent to not Mary)

A propositional logic sentence is either True or False with respect to a model of the world. A model is just a set of true/false values for all the propositional symbols.

Example: {B:True, E:False, …}

P => Q (P implies Q)
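
These sentences can be checked mechanically against a model. A small sketch of my own follows; the values for A, J, and M are assumed in order to complete the example model {B: True, E: False, ...}.

```python
def implies(p, q):   # P => Q is False only when P is True and Q is False
    return (not p) or q

def iff(p, q):       # P <=> Q holds when both sides have the same value
    return p == q

model = {"B": True, "E": False, "A": True, "J": True, "M": True}
B, E, A, J, M = (model[s] for s in "BEAJM")

print(implies(E or B, A))    # (E v B) => A  -> True in this model
print(implies(A, J and M))   # A => (J ^ M)  -> True
print(iff(J, M))             # J <=> M       -> True
print(iff(J, not M))         # J <=> !M      -> False
```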


Machine Learning Algorithms

Supervised Learning

Linear Regression (Hypothesis, Univariate, Cost function, Gradient Descent, Multivariate, Feature Scaling, Mean Normalization); a gradient descent sketch follows this list

Polynomial Regression

Normal Equation (Algorithm, Comparison with Gradient Descent, Case of a non-invertible matrix)

Logistic Regression for Classification (Hypothesis, Non-linear decision boundaries, Cost-function, Simplified cost-function and Gradient Descent)

Advanced Optimization

Advanced Algorithms (Conjugate Gradient, BFGS, L-BFGS)

Multi-class classification: One-vs-all

Regularization (Over-fitting problem, Cost function, Linear Regression, Normal Equation, Logistic Regression, Advanced Algorithms)
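
To make the Linear Regression entry above concrete, here is a minimal sketch of the univariate case, assuming the standard setup: the hypothesis h(x) = theta0 + theta1 * x is fitted by batch gradient descent on the mean-squared-error cost function. The data and learning rate are made up for illustration.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iters=1000):
    """Fit h(x) = theta0 + theta1 * x by minimizing mean squared error."""
    theta0, theta1 = 0.0, 0.0
    m = len(x)
    for _ in range(iters):
        h = theta0 + theta1 * x                    # hypothesis on all samples
        theta0 -= alpha * (h - y).sum() / m        # both updates use the same h,
        theta1 -= alpha * ((h - y) * x).sum() / m  # i.e. simultaneous update
    return theta0, theta1

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1                     # points on the line theta = (1, 2)
print(gradient_descent(x, y))     # approximately (1.0, 2.0)
```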

Take away from AI-Class

Applications

Terminology

Supervised Learning

Occam's Razor

Spam Detection (with Naive Bayes; a sketch follows the spam-filtering entries below)

Maximum Likelihood

Laplace Smoothing

Advanced spam filtering
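
A minimal sketch of the spam-detection idea from the entries above, assuming a tiny toy corpus (the messages below are stand-ins in the spirit of the class examples, not real data): Naive Bayes with Laplace smoothing, k = 1.

```python
from collections import Counter

spam = ["offer is secret", "click secret link", "secret sports link"]
ham  = ["play sports today", "went play sports", "secret sports event",
        "sports is today", "sports costs money"]

def word_counts(msgs):
    return Counter(w for m in msgs for w in m.split())

def classify(msg, k=1):
    vocab = set(word_counts(spam)) | set(word_counts(ham))
    scores = {}
    for label, msgs in (("spam", spam), ("ham", ham)):
        counts = word_counts(msgs)
        total = sum(len(m.split()) for m in msgs)
        # class prior, smoothed over the two classes
        p = (len(msgs) + k) / (len(spam) + len(ham) + 2 * k)
        for w in msg.split():
            # P(word | class) with Laplace smoothing
            p *= (counts[w] + k) / (total + k * len(vocab))
        scores[label] = p
    return max(scores, key=scores.get)

print(classify("secret secret link"))   # -> 'spam'
```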

Handwriting classification

Overfitting prevention

Linear Regression

Regularization

Perceptron Algorithm

Maximum Margin Algorithms (SVM, Boosting)

K-nearest neighbors (Algorithm, Problems); see the sketch below
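
A minimal k-nearest-neighbors sketch, assuming toy 2D points (all data below is made up): classify a point by majority vote among its k closest labeled neighbors.

```python
from collections import Counter

def knn_predict(train, point, k=3):
    """train: list of ((x, y), label); point: (x, y) to classify."""
    dist = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    nearest = sorted(train, key=lambda item: dist(item[0], point))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (1, 1)))   # -> 'a'
```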

Unsupervised Learning

Density estimation, Dimensionality Reduction, Blind signal separation, Factor Analysis

Clustering

k-means (algorithm, problems; a sketch follows the clustering entries below)

Expectation Maximization

Gaussians and Normal Distribution

Gaussian Learning

Algorithm

# of clusters calculation
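
A compact k-means sketch covering the algorithm entries above, assuming synthetic 2D blobs: alternate between assigning points to the nearest centroid and recomputing each centroid as the mean of its points.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.RandomState(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assignment step: index of the closest centroid for each point
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        # update step: each centroid moves to the mean of its points
        # (kept in place if a cluster happens to come up empty)
        centroids = np.array([X[labels == j].mean(0) if (labels == j).any()
                              else centroids[j] for j in range(k)])
    return centroids, labels

rng = np.random.RandomState(1)
X = np.vstack([rng.randn(20, 2), rng.randn(20, 2) + 5])  # two blobs
print(kmeans(X, 2)[0])   # centroids near (0, 0) and (5, 5)
```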

Dimensionality Reduction

Spectral Clustering