Point clouds

During the last month, we have been processing 3D models on a daily basis; almost 100 3D models were built. The fieldwork in Iraqi Kurdistan and Belgium provided us with large and challenging datasets to explore and process. The first step in processing an image-based 3D model with a structure-from-motion approach is the alignment of the photographs and the generation of a sparse 3D point cloud. The point cloud represents the 3D geometry of the scene, and this step largely determines the final accuracy of the 3D model. Therefore, we invest time in checking the image alignment and the point cloud for projection errors before proceeding with the processing of the 3D model. Identifying possible errors at this stage saves us a lot of (otherwise wasted) processing time. An optimised point cloud is our main priority before computing the 3D meshes.
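As an illustration of the kind of cleaning step described above, here is a minimal sketch (not the actual PhotoScan workflow, which handles this internally) of a statistical outlier filter: points whose mean distance to their nearest neighbours is unusually large are often projection errors. The function name and thresholds are illustrative assumptions.

```python
import numpy as np

def remove_statistical_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbours
    is more than std_ratio standard deviations above the average,
    a common cue for projection errors in sparse clouds."""
    # Brute-force pairwise distances; fine for a small demo cloud,
    # a real-size cloud would use a k-d tree instead.
    diff = points[:, None, :] - points[None, :, :]
    dists = np.sqrt((diff ** 2).sum(axis=-1))
    # Mean distance to the k nearest neighbours (column 0 is the
    # point itself, so skip it).
    knn = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    thresh = knn.mean() + std_ratio * knn.std()
    return points[knn <= thresh]

# A tight cluster of 100 points plus one stray point far away.
rng = np.random.default_rng(0)
cloud = np.vstack([rng.random((100, 3)), [[50.0, 50.0, 50.0]]])
clean = remove_statistical_outliers(cloud)
print(len(cloud), len(clean))  # the stray point is dropped
```

Dedicated tools (CloudCompare, for instance) offer the same kind of filter with more control, but the principle is the one sketched here.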

Because the point cloud is important in the generation of the 3D model, and because it clearly shows the 3D geometry of the scene or object, we feel the need to exchange and present this data, both on- and offline, to the public and the community of scholars. At this point, we are especially struggling with sharing and presenting the point clouds online. It is easy to create videos of the point clouds and share them on the Web, but a video is obviously not the same as the point cloud itself. A video won't give viewers the same experience as being able to navigate through and explore the point cloud themselves. The interaction between models and viewers is one of the most important aspects of 3D archaeological data.

QUESTION: Because we are struggling with publishing our point clouds online, we could use all your help to achieve this. All tips, tricks and ideas will be highly appreciated!

The video in this post shows the point cloud of a sheep/goat skull, excavated earlier this year in Iraqi Kurdistan. The point cloud was built from 87 images and contains 581,176 points. The four images are snapshots of the final 3D model: the first two show the 3D polygonal mesh, while the other two show the textured 3D model.
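For exchanging a point cloud like this one, a plain ASCII PLY file is one widely supported option; most point-cloud viewers and processing tools can read it. Below is a minimal sketch of writing XYZ coordinates to PLY (the function name is illustrative; real exports from SfM software would also carry per-point colour).

```python
import numpy as np

def write_ascii_ply(path, points):
    """Write an Nx3 array of XYZ coordinates as an ASCII PLY file,
    a common exchange format for point clouds."""
    header = "\n".join([
        "ply",
        "format ascii 1.0",
        f"element vertex {len(points)}",
        "property float x",
        "property float y",
        "property float z",
        "end_header",
    ])
    with open(path, "w") as f:
        f.write(header + "\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

# Two sample points stand in for a full 581,176-point cloud.
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.2]])
write_ascii_ply("cloud.ply", pts)
```

Binary PLY is preferable at real cloud sizes (a half-million-point ASCII file gets large), but the ASCII form is easy to inspect and debug.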

8 thoughts on “Point clouds”

    • One of the aims of our project is to explore and evaluate different SfM software packages (open source, low-cost and web-based) to find a cost-effective, user-friendly solution that can be used by archaeologists.
      This example is made with PhotoScan. For the moment, it is the software I use most. It is user-friendly and the results are good, though not always.

    • I never use scale bars in my 3D recordings. I add ground control points to the scene before recording so I can georeference the 3D model in real-world coordinates (you can see one of my reference points in the images). When the 3D model is georeferenced/scaled, you can always add a scale bar afterwards, for example in a GIS environment. These images are just snapshots of the 3D model, hence without a scale indication, but I agree that scale is important!

  1. Looks awesome! There are a lot of startup academic data-sharing websites; I think even Google is in on the business.

  2. Pingback: Point clouds, some experiments | Archaeology 3D

  3. To post models online I would definitely recommend Sketchfab, although I'm not sure if it works with point clouds too. It's definitely worth a look though. Great article by the way, and the results look excellent! Keep up the good work.
