I’ve been a little busy lately, what with the new curriculum at Haaga-Helia being introduced and whatnot, but I’ve also managed to do something I have been planning for a while. I found a piece of software, Autodesk’s 123D Catch, and it seems to work really well for my pet project, the 3D printed 3D scanner system. Now that the printing department of the 3D Lab is running smoothly, with the three Minifactories and one CoLiDo Printrite, it’s time to take the next step.
New software from Autodesk
Autodesk is really generous in providing the 123D Catch software for free. It is an amazing piece of programming. From some 24 or more high-resolution photographs, it builds a full-blown 3D point cloud, which can then be exported in the .OBJ or .STL format. The .OBJ even carries material maps, so you can see a color model of your target. It is possible to edit the mesh in 123D Catch to a degree, but for my purposes all I need is the .STL format. It is easy to import that into Blender and then modify it to suit whatever needs you may have: trim it, merge it with another mesh, or sculpt it further and then print.
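If you want a quick sanity check of an exported file before pulling it into Blender, the .STL format itself already tells you how many triangles you got. Here is a minimal Python sketch; the function name is my own, and it assumes a well-formed file (note that a few binary STLs begin with the word “solid” too, which this simple check would misread):

```python
import struct

def stl_triangle_count(path):
    """Return the triangle count of an STL file.

    Binary STL layout: 80-byte header, a little-endian uint32
    triangle count, then 50 bytes per triangle. ASCII STLs start
    with 'solid' and list one 'facet normal' line per triangle.
    """
    with open(path, "rb") as f:
        header = f.read(80)
        if header.lstrip().startswith(b"solid"):
            # Probably ASCII: count the facet lines instead.
            f.seek(0)
            return sum(1 for line in f
                       if line.lstrip().startswith(b"facet normal"))
        (count,) = struct.unpack("<I", f.read(4))
        return count
```

A suspiciously low count (or zero) is a quick hint that the capture or export went wrong before you spend time cleaning the mesh up in Blender.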
You can download the software from Autodesk, and there is an app for Android, iOS and Windows Phone. At least on Windows Phone there is an issue with dual-core phones, such as my Lumia 920, so you may want to check phone compatibility. You also need an Autodesk account, which is free and can be created on the site while you download the PC app. If your phone works well, you don’t strictly need the PC app, but it’s a good idea to grab it anyway. Once you have installed the apps, you’re good to go.
It is also possible to shoot HD video of your target, but the resulting point cloud will not be as accurate as it is with high-resolution stills. A 5-megapixel camera is the minimum for good quality, and the app may fail outright if you feed it low-quality images. My first try was with a webcam, and the process failed with the message “general error”. While it’s always impressive to get one of those, it would be far more helpful to tell the user what actually failed; after discussing usability with Autodesk, we agreed the message could be worded better. Autodesk has been very helpful while I’ve been learning the process, by the way.
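If you want to pre-filter your shots before uploading, the 5-megapixel rule of thumb is easy to encode. A small Python helper (the function name is mine, and the threshold is just the figure quoted above, not an official Autodesk limit):

```python
def enough_megapixels(width, height, minimum_mp=5.0):
    """Check whether an image of width x height pixels meets the
    rough 5-megapixel floor for good 123D Catch results.
    The 5 MP threshold is a rule of thumb, not a hard API limit."""
    return (width * height) / 1_000_000 >= minimum_mp

# A typical 5 MP sensor passes; a VGA webcam frame clearly does not.
print(enough_megapixels(2592, 1936))  # True  (about 5.0 MP)
print(enough_megapixels(640, 480))    # False (about 0.3 MP)
```

This kind of check would have saved me the “general error” round-trip with the webcam images.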
How to use it?
Autodesk suggests that you take the pictures while walking around your target, keeping the camera at the same relative height. Of course, I thought it wouldn’t make any difference whether you walk around the target or rotate the target itself. How wrong can one be? For my first attempt at the 24 pictures, I chose my trusty Bolex camera. I set it up on a stool, rotated the stool while keeping my camera still, and then loaded the images into 123D Catch, expecting a full-blown mesh of my antique camera.
No such luck. What I got was a blurry mess of greyish and blackish fog on a stool. It didn’t take me too long to figure out what had happened: the software really does reconstruct the 3D space in the pictures into a mesh, but in my case the stool and the wall behind it were the recognizable, stationary features. The software therefore placed the camera in 24 positions very close to each other and rendered the rotating Bolex as 24 separate objects, resulting in the mess. I was pretty irate at that stage, but the mistake was mine, not the software’s.
Then I went to the school and put a 3D printed Suzanne on a rotating disk I had printed a little earlier. I had covered the disk in cardboard to make it stiffer and neutral in color, and placed four printed markers on it: three bars and one arrow. It is these markers that fool the software into thinking that I am walking around Suzanne, rather than rotating the disk it sits on. Here you have three of the 36 images.
After all 36 images had been shot, I started the Autodesk software. After Blender, it’s nice to see a consumer-oriented package for a while. Let’s create a new project:
With the images loaded, click on the Create Project button:
The images are then sent to the cloud service, which renders the point cloud from the photos. For this you need the Autodesk account you set up earlier.
The point cloud is somewhat confusingly called a “capture”, and it will be created after the images have all been sent.
And then, hey presto, you have your project on the PC. The target appears in the larger pane, while the right side is taken up by social-media-style sharing options; you can turn that panel off from the Marketplace menu. You can use the toolbar at the top to zoom, pan, rotate, tilt and generally manhandle the mesh, and you can do the same with control combinations such as Alt+drag. This is exactly what I have been looking for, and I am thrilled that a player of Autodesk’s stature has delivered such an amazing application for free.
Next time I will describe the scanner I am building: first to stabilize the camera, then to add two HD webcams to partly automate the image-taking process, and ultimately (I hope) to run the whole thing with an Arduino robot.
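For the Arduino stage, the shot schedule is simple arithmetic: 36 shots around a full circle means 10 degrees of turntable rotation per shot. Here is a Python sketch of that planning step, assuming a common 200-step stepper motor with 16x microstepping; those motor figures are placeholders for illustration, not measured from my build:

```python
def turntable_plan(shots=36, steps_per_rev=200, microstepping=16):
    """Plan an automated turntable pass: how far to turn between shots.

    steps_per_rev and microstepping are assumptions for a typical
    NEMA 17 + driver combination. Returns the angle between shots and
    the absolute microstep position for each shot; rounding absolute
    positions (rather than per-shot increments) keeps rounding error
    from accumulating over the full revolution.
    """
    total_steps = steps_per_rev * microstepping      # 3200 microsteps/rev
    degrees_per_shot = 360.0 / shots                 # 10.0 for 36 shots
    positions = [round(i * total_steps / shots) for i in range(shots)]
    return degrees_per_shot, positions

deg, pos = turntable_plan()
print(deg)       # 10.0
print(pos[:4])   # [0, 89, 178, 267]
```

On the Arduino side, the robot would simply step to each position in the list, pause for the camera, and move on.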
Oh, I almost forgot: what does it look like in Blender, then? Like this (16,884 vertices, 5,700 of them in the monkey head; nice resolution already):
And as rendered:
Stay tuned for the making of the scanner frame!