Dimensionality Reduction with the PCA Algorithm

There are a lot of good articles that describe the theory of dimensionality reduction with various algorithms such as PCA. Some of them have really good examples (for instance this one: http://blog.yhathq.com/posts/image-classification-in-Python.html).
However, in order to actually apply and use it, I wanted to develop some intuition: what does it mean, from a mathematical/machine standpoint, to reduce a 132,342-dimensional space to, let's say, 2D? After several hours of playing around with the sklearn PCA implementation, I've come up with the representation below, which shows the 1st component of the 2-dimensional space.
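Concretely, here is a minimal sketch of that projection step in sklearn. The data here is random and purely illustrative (the post's actual dataset isn't shown); only the 132,342-dimensional shape matches what I describe:

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in data: 10 samples, each a flattened
# 132,342-dimensional feature vector (e.g. a flattened image).
rng = np.random.default_rng(0)
X = rng.random((10, 132342))

# Project the samples down to 2 principal components.
pca = PCA(n_components=2)
Z = pca.fit_transform(X)

print(Z.shape)  # (10, 2): every sample is now just a 2D point
```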
(figure: the two input samples before transformation, left, and their reconstructions from the 1st component, right)
This is how a machine sees the data. On the left are the two untransformed input data samples; on the right are the same samples projected into 2D and then mapped back into the 132,342-dimensional space using only the 1st component. Put simply, the 1st element of each 2-element array is multiplied by the 1st column of the so-called U matrix, a column with 132,342 elements in it.
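In sklearn terms, `pca.components_` plays the role of the transposed U matrix (each row is one 132,342-dimensional principal axis), so a sketch of that reconstruction, continuing the snippet above, could look like this:

```python
# Reconstruct each sample from the 1st component only: the 1st element
# of its 2-element projection times the 1st principal axis. sklearn
# centers the data internally, so the mean is added back at the end.
first_axis = pca.components_[:1, :]          # shape (1, 132342)
X_back = Z[:, :1] @ first_axis + pca.mean_   # shape (10, 132342)

# Equivalent route: zero out the 2nd component, then inverse_transform.
Z1 = Z.copy()
Z1[:, 1] = 0
X_back_alt = pca.inverse_transform(Z1)

print(np.allclose(X_back, X_back_alt))  # True
```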

As you can see, once the data points are projected into 2D there is a clear separation between the different data point types, which can then be used by a downstream logistic regression algorithm.
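As a sketch of that last step, with made-up binary labels for the illustrative samples above:

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical labels: the first 5 samples are one class, the rest another.
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# Train the classifier on the 2D projections, not the raw 132,342-D data.
clf = LogisticRegression()
clf.fit(Z, y)

# New samples must be pushed through the same PCA before predicting.
print(clf.predict(pca.transform(X[:2])))
```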