This is the first real-time 2D-to-3D video conversion in silicon, presented by Dialog Semiconductor at Mobile World Congress 2011. It could be useful for the upcoming wave of glasses-free 3D screen devices. I am not sure, though, how they analyse the video to “guess” the 3D features in the images: do they analyse each frame as a static image, or do they process the frames before and after to infer depth from the motion?
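Dialog hasn't published the details as far as I can tell, so here is just a minimal sketch of the second idea, depth from motion: use optical-flow magnitude between consecutive frames as a crude depth proxy (under camera translation, closer objects shift more), then synthesize a stereo pair by shifting pixels horizontally in proportion to that depth (depth-image-based rendering). The function names, the flow heuristic, and the disparity limit are all my own assumptions for illustration, not anything Dialog has disclosed.

```python
# Toy sketch of the motion-parallax hypothesis, NOT Dialog's actual method.
import cv2
import numpy as np

def depth_from_motion(prev_bgr, next_bgr):
    """Use dense optical-flow magnitude as a crude per-pixel depth proxy."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    next_gray = cv2.cvtColor(next_bgr, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Normalize to [0, 1]; larger motion -> assumed closer to the camera.
    # (Only valid for a translating camera; object motion breaks this.)
    return cv2.normalize(mag, None, 0.0, 1.0, cv2.NORM_MINMAX)

def synthesize_stereo(frame_bgr, depth, max_disparity=12):
    """Warp each pixel horizontally by a depth-proportional disparity."""
    ys, xs = np.indices(depth.shape, dtype=np.float32)
    shift = (depth * max_disparity).astype(np.float32)
    # Opposite half-shifts for the two eyes; remap leaves black holes at
    # disoccluded edges, which a real converter would have to in-paint.
    left = cv2.remap(frame_bgr, xs + shift / 2, ys, cv2.INTER_LINEAR)
    right = cv2.remap(frame_bgr, xs - shift / 2, ys, cv2.INTER_LINEAR)
    return left, right

# Usage on a hypothetical clip, shown side by side:
cap = cv2.VideoCapture("demo.mp4")
ok, prev = cap.read()
while ok:
    ok, frame = cap.read()
    if not ok:
        break
    depth = depth_from_motion(prev, frame)
    left, right = synthesize_stereo(frame, depth)
    cv2.imshow("side-by-side stereo", np.hstack([left, right]))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    prev = frame
```

The static-image alternative would instead guess depth from single-frame cues (lower in the frame tends to be nearer, blur and perspective hint at distance), which keeps working on scene cuts and still shots, whereas motion parallax only gives geometric information while the camera actually moves. A real chip might well blend both.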