About
An experiment to render multiple Google Street View ® scenes as a 3D point cloud using the LiDAR data captured along with the regular panorama images. Eventually, I'd like to create a collision mesh and third-person controls, and hook everything up to a head-mounted display like the Oculus Rift.
Controls
● Mouse plus left button to rotate around a scene.
● Mouse plus right button to pan.
● Mouse wheel to zoom in and out.
● Most settings in the UI are self-explanatory - otherwise, experiment!
● Max distance is the number of meters from the start location to search from.
● Point step is the number of pixels to skip. Larger is faster but less detailed. Large values for distance and small values for point step can take a long time to render.
● You can choose from some Preset locations.
● Find location allows you to enter an address anywhere in the world. I don't know how extensive the LiDAR depth data is - some places may not support it.
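To illustrate why point step trades detail for speed: each panorama's depth map is sampled pixel by pixel, and skipping by `pointStep` cuts the point count quadratically. The sketch below is an assumption about how that sampling might look (the function name, equirectangular mapping, and argument names are hypothetical; the real app relies on GSVPanoDepth and three.js):

```javascript
// Hypothetical sketch: turn an equirectangular depth map into 3D points,
// skipping pixels by pointStep. depth is a flat array of meters,
// width x height in size.
function depthMapToPoints(depth, width, height, pointStep) {
  const points = [];
  for (let y = 0; y < height; y += pointStep) {
    for (let x = 0; x < width; x += pointStep) {
      const d = depth[y * width + x];
      if (!isFinite(d) || d <= 0) continue; // no depth (e.g. sky)
      // Map the pixel to a spherical direction: azimuth sweeps the
      // full circle, elevation spans -90°..+90°.
      const azimuth = (x / width) * 2 * Math.PI;
      const elevation = (0.5 - y / height) * Math.PI;
      points.push({
        x: d * Math.cos(elevation) * Math.sin(azimuth),
        y: d * Math.sin(elevation),
        z: d * Math.cos(elevation) * Math.cos(azimuth),
      });
    }
  }
  return points;
}
```

Doubling `pointStep` roughly quarters the number of points, which is why small step values combined with a large max distance (many panoramas) get slow.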
Issues
Many, but the major ones are:
● Unable to display progress while scenes are loaded. Large values of distance and small point step can result in long wait times.
● Very little in the way of error handling.
● Need to use the correct pitch/yaw/height for each view.
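On the pitch/yaw/height issue: merging scenes needs each panorama's points rotated into a shared orientation before they line up. A minimal sketch of the yaw part, assuming a per-scene heading in radians (the function and its arguments are illustrative, not the app's actual code):

```javascript
// Hypothetical sketch: rotate a point about the vertical (y) axis by a
// scene's yaw so multiple panoramas share one orientation. A full fix
// would also apply pitch and a camera-height offset.
function applyYaw(point, yaw) {
  const c = Math.cos(yaw), s = Math.sin(yaw);
  return {
    x: c * point.x + s * point.z,
    y: point.y, // yaw leaves height untouched
    z: -s * point.x + c * point.z,
  };
}
```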
Credits
More examples and contact information: here
Data:
Google Street View ® (this app not affiliated with or endorsed by Google)
Inspired by:
Point Cloud City
APIs:
three.js ● GSVPanoDepth ● GSVPano ● Einar's math code