LiDAR scanning via iPhone



https://www.youtube.com/watch?v=XmQpu1QvK1Q

How to use iPhone LiDAR to Create 3DGS Models

This guide summarizes Olli Huttunen’s workflow for using the iPhone Pro’s LiDAR scanner to generate data for 3D Gaussian Splatting (3DGS). Although November in Finland offers poor lighting, it provides a testing ground for whether ARKit data can be converted into a format compatible with 3DGS training software.

The Technology: LiDAR & ARKit

  • LiDAR (Light Detection and Ranging): Uses laser beams to measure distance and create 3D point clouds.
  • Limitations: The iPhone/iPad Pro laser is not very strong; it has a maximum effective range of approximately 5 meters. It is not designed for large environments but works well for small rooms or objects.
  • ARKit: Apple’s augmented-reality framework. Since ARKit 4 (iOS 14), it lets developers access LiDAR depth data and camera positioning.
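To make the LiDAR data concrete: ARKit delivers a per-frame depth map plus camera intrinsics, and the 3D point cloud comes from back-projecting each depth pixel through the pinhole camera model. A minimal NumPy sketch (the 256×192 resolution matches ARKit’s LiDAR depth maps; the intrinsics values here are illustrative, not Apple’s):

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points
    using the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (h, w, 3) point cloud

# Toy example: a flat wall 2 m away seen by a 256x192 depth map.
depth = np.full((192, 256), 2.0)
pts = unproject_depth(depth, fx=210.0, fy=210.0, cx=128.0, cy=96.0)
```

Each scanned frame contributes one such point cloud, which the phone fuses into the live mesh you see while scanning.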

The Workflow

1. Scanning

App Used: 3D Scanner App (Free on App Store).

  • Modes: The app has a standard LiDAR mode and an “Advanced” mode.
  • Process: As you scan, the app creates a 3D mesh and simultaneously captures photos. Texture mapping happens after the scan is complete.
  • Export: This is the crucial step. You must select “All Data” when exporting. This saves the images, depth maps, confidence maps, and JSON files containing the camera’s spatial coordinates.
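To get a feel for what the “All Data” export contains, here is a minimal sketch of reading one frame’s camera pose and intrinsics. The key names (`cameraPoseARFrame`, `intrinsics`) follow common 3D Scanner App exports but are an assumption here; check the JSON files in your own export:

```python
import json
import numpy as np

# In practice you would load a real file, e.g.:
#   frame = json.load(open("frame_00000.json"))
# The dict below stands in for one frame; the key names are an
# assumption based on typical 3D Scanner App exports.
frame = {
    # 4x4 camera-to-world transform, flattened row-major
    # (identity used here as a placeholder)
    "cameraPoseARFrame": [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1],
    # 3x3 intrinsics: fx, 0, cx / 0, fy, cy / 0, 0, 1 (illustrative values)
    "intrinsics": [210.0, 0, 128.0,  0, 210.0, 96.0,  0, 0, 1],
}

pose = np.array(frame["cameraPoseARFrame"]).reshape(4, 4)
K = np.array(frame["intrinsics"]).reshape(3, 3)
position = pose[:3, 3]  # camera position in world space
```

One such JSON file per captured frame, paired with its image, is what the converter later turns into COLMAP data.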

2. The Conversion Problem

The raw data from the iPhone (images + JSON pose matrices) is not natively readable by most 3DGS training software (like Postshot), which usually expects COLMAP data.

  • The Data: The export contains frame images and corresponding JSON files with camera matrices.
  • The Solution: Olli developed a custom Python application called LiDAR2COLMAP Converter.

3. Converting Data

  1. Transfer the “All Data” folder from the iPhone to a computer.
  2. Open the LiDAR2COLMAP Converter.
  3. Drag and drop the scan folder into the application.
  4. Click “Generate COLMAP data”.
  5. The software converts the ARKit data into a standard COLMAP folder structure that includes images and a sparse point cloud.
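The core of such a conversion is a change of camera conventions: ARKit cameras look down −Z with +Y up, while COLMAP cameras look down +Z with +Y down, and COLMAP’s images.txt stores world-to-camera poses as a quaternion plus translation. A hedged sketch of that step (not Olli’s actual converter code):

```python
import numpy as np

def arkit_to_colmap(pose_c2w):
    """Convert an ARKit camera-to-world matrix (camera looks down -Z,
    +Y up) into COLMAP's world-to-camera rotation and translation
    (camera looks down +Z, +Y down)."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])  # flip the Y and Z camera axes
    w2c = np.linalg.inv(pose_c2w @ flip)    # COLMAP stores world-to-camera
    return w2c[:3, :3], w2c[:3, 3]

def rotmat_to_quat(R):
    """Rotation matrix -> quaternion (w, x, y, z), the order COLMAP's
    images.txt expects. Assumes w is not near zero; a robust converter
    would branch on the largest diagonal element."""
    w = np.sqrt(max(0.0, 1.0 + R[0, 0] + R[1, 1] + R[2, 2])) / 2.0
    x = (R[2, 1] - R[1, 2]) / (4.0 * w)
    y = (R[0, 2] - R[2, 0]) / (4.0 * w)
    z = (R[1, 0] - R[0, 1]) / (4.0 * w)
    return np.array([w, x, y, z])
```

Applying this per frame, plus writing out the intrinsics and the LiDAR points, yields the cameras.txt / images.txt / points3D.txt structure COLMAP-based trainers expect.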

4. Training (Using Postshot)

The converted data can be imported directly into Postshot (or other tools such as LichtFeld Studio or Brush). Observations during training:

  • Camera Tracking: The camera path usually looks correct.
  • Sparse Point Cloud: The point cloud generated by the LiDAR is very linear and “too perfect,” which can actually confuse the Gaussian training process.
  • Artifacts: The training often struggles with backgrounds, creating “floaters” and misaligned splats.

Tips for better results in Postshot:

  • Create Sky Model: Check this option in the training settings. It creates a spherical structure around the model, helping the software place background Gaussians more accurately.
  • Region of Interest (ROI): Use the bounding box tool to restrict training to the specific object (e.g., the violin case in the video) and ignore the messy surrounding room data.

Alternative Method: KIRI Engine

You can also use the KIRI Engine app, but the workflow differs slightly:

  1. Scan using KIRI Engine.
  2. Export using the “Export Raw Data” option (ARKit format).
  3. This creates two zip files (one for the 3D model, one for the dataset).
  4. Manual Step: You must unzip the 3D model folder and move the processed folder inside the KIRI Engine-nerf folder.
  5. Run this combined folder through the LiDAR2COLMAP Converter.
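The manual step (4) can be scripted. A small sketch, assuming the folder names “processed” and “KIRI Engine-nerf” exactly as mentioned in the video (they may vary between KIRI Engine versions):

```python
import shutil
from pathlib import Path

def merge_kiri_export(model_dir, dataset_dir):
    """Move the unzipped 3D-model export's 'processed' folder into the
    dataset export's 'KIRI Engine-nerf' folder, so the converter finds
    both in one place. Folder names follow the video and may differ
    between KIRI Engine versions."""
    src = Path(model_dir) / "processed"
    dst = Path(dataset_dir) / "KIRI Engine-nerf" / "processed"
    shutil.move(str(src), str(dst))
    return dst
```

After merging, the combined dataset folder is what you drag into the LiDAR2COLMAP Converter.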

Conclusion

  • Pros: The workflow is incredibly fast and easy compared to traditional photogrammetry. It allows for local training on Macs (via the Brush app) or PCs.
  • Cons: The quality is not as high as photogrammetry. The sparse point cloud from the LiDAR is not accurate enough to produce high-fidelity splats without significant artifacts, particularly in the background.

Resources

Download the Converter: You can download the LiDAR2COLMAP v1.4.2 Python application for free from Olli’s code site (link provided in the video description).