The iPhone X has a very interesting feature: a real-time 3D scanner.

A couple of years ago Apple bought a motion-capture company named ‘Faceshift’. Its final versions ran on Intel’s RealSense sensors. Basically it was uber-cheap facial mocap that really would have changed the scene for low-budget productions.
It’ll be both amusing and completely frustrating if Apple resurrects the beloved Faceshift in the form of a bloody iPhone app…

It’s impressive to get this in real time, since tracking lots of points and deforming the mesh (moving bones) is fairly compute-heavy. It’s also impressive that it seemed to keep tracking at weird angles.

So it can be useful for real-time work, if Apple lets you access it via an API.

If you don’t need real time and just want to motion-capture the face, any camera and Blender will be sufficient. I recalled this video by Blender user César.

As a sidenote, it’s actually not really that much more expensive in Europe; it’s just that in Europe we put the actual price on the product (i.e. including VAT ;)

The data exposed will likely be simple: “this is the user’s face”, “he is smiling”. Apple rarely makes iOS APIs that expose low-level data; normally they abstract API access a lot. For example, you can’t really do anything with Wi-Fi on iOS, while on Android you can build a basic Wi-Fi analyzer.

As for me, I have huge problems with a phone doing “facial recognition,” just as I do with the iPhone analyzing every picture I take for faces, which I cannot turn off. To me, this is far worse than George Orwell’s dystopia, and I consider that “George only missed the date by 20 years.” :mad:

I like my iPhone-5, which I bought for $200, never-used and fully-qualified by the AppleCare warranty, from Amazon. I bought it because it fits conveniently in my shirt-pocket, which I consider to be a basic requirement for any phone.

I would never spend $1,000 for anything that could drop out of my pocket and hit the pavement. I use Apple equipment professionally all day every day – always have – but I’m just not interested in this.

Also – I have actually never bought anything from Apple that was “the very latest thing.” I know that apple.com has a “refurbished equipment” section (where equipment that was used e.g. as in-store demonstrators is cleaned of all fingerprints and sold with the same warranty as brand-new stuff), and I always buy Macs and iPads from there. Same stuff, “last year’s model,” like new, full manufacturer’s warranty and extended-warranty, but costs a lot less. Let’s face it – Apple equipment is a mature market now: there are no dramatic changes anymore from one model-year to the next, except in this case as-noted.

Apple has made it so the data is encrypted inside the phone, like fingerprints. It’s not sent to any server.

About the price: yes, it’s crazy, but 3D face readers will become the de facto standard now that Apple has one. If Android phones get it too, it will be much cheaper, and API access won’t be a problem.

You can even do 3D scans from simple photographs nowadays. The technology has advanced that far.

The battle here is of course resolution. Specialised equipment can be very expensive but will give you far higher resolution and accuracy.

Of course technology should be viewed from the perspective of its usage. Do you really need all this accuracy? Is it a big deal for you if you have to edit your model to fill in the gaps and correct mistakes?

Using the iPhone as a 3D scanner is not only a matter of capabilities; it’s also a matter of software design. Does the phone allow you to use it as a 3D scanner? It may not, but I am pretty sure Apple will release an API for it, and then it will only be a matter of time until we see apps specialised in 3D scanning.

For now it’s too early to tell whether the iPhone’s technology can rival photograph-based 3D reconstruction. It may be great in real time, but unless you really care about real-time scanning, this is of little benefit to you, as accuracy will most likely be the higher concern.

George Orwell’s 1984: Totalitarian regimes control the lives of all people using surveillance technology from 1994

Actual 1984: Totalitarian regimes control the lives of some people using surveillance technology from 1984

Actual 2017: With very few exceptions, totalitarian regimes have ceased to exist. Photo Album software uses facial recognition to mark photos with people in them, for automatic categorization.

Why do so many people miss the totalitarian part in Orwell’s work? That’s the whole point, not the technology. You’re supposed to worry about what your governments are up to, not about the corporations trying to sell you gadgets. If a totalitarian regime wants facial recognition in every home, that’s what it’ll get. It’ll turn your friends and family into spies. It’ll make you wear your underwear on the outside!

Data held by a private company is only one subpoena away from being data held by the government. I do worry about privacy issues, but I also understand that the problem is so big that picking one phone over the other isn’t going to make much difference. Facial recognition software is already powerful enough with simple 2D photographs. Every time I drive through an intersection with traffic cameras, the government has enough data to identify me individually without any human intervention, in real time, if they so choose.

I also don’t kid myself into thinking the government is really all that interested in seeing me pick my nose at a red light. I fully understand that the most nefarious intent behind invading my privacy right now is to try to give me more relevant and effective advertising. But the potential for abuse is huge, and with the ever increasing capacity of digital storage, the potential for retroactive abuse is also quite large.

As a programmer in this area, I might clarify something about the Kinect, if you plan to buy a depth camera.

The old Kinect (for the Xbox 360), just like Apple, uses the dot-projector method, at 640 × 480 resolution.
A “dot” projector system actually means a lower effective 3D resolution (i.e. a dot needs black area around it to be a dot), so effectively only around 1/8th of the 640 × 480 pixels are true measurements; the rest of the pixels are calculated.

The Kinect 360 tech was developed by PrimeSense, but they refused to be taken over by Microsoft, and after some other conflict Microsoft abandoned the company, and with it that kind of 3D sensing. Don’t forget Microsoft has far more research money than PrimeSense ever had. And with the Xbox they had an important goal to solve (seeing in 3D) without becoming dependent on camera manufacturers (who could drive up the price of their set).
So Microsoft developed a new camera, the Kinect One (kind of a strange name, as ‘One’ is the latest version).
Never mind the naming: this camera doesn’t use dots but time of flight, and provides 512 × 424 depth pixels, in which each pixel is a true depth point (that’s 217,088 points, as compared to roughly 30,000).
Having 217,088 true depth pixels is a huge difference compared to dot-based systems…
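To put those numbers side by side, here is a quick back-of-the-envelope comparison; the 1/8 fraction for the dot-projector system is the rough estimate from above, not an official spec:

```python
# Rough comparison of effective depth samples per frame.

def effective_samples(width, height, fraction=1.0):
    """Pixels that carry a true depth measurement (not interpolated)."""
    return int(width * height * fraction)

# Kinect 360: structured light (dot projector), ~1/8 of pixels are real dots.
kinect_360 = effective_samples(640, 480, fraction=1 / 8)
# Kinect One: time of flight, every pixel is a real measurement.
kinect_one = effective_samples(512, 424)

print(kinect_360)  # 38400 -- same order of magnitude as the ~30,000 dots cited
print(kinect_one)  # 217088
```

The exact dot count depends on the projector pattern, which is presumably why the quoted figure of ~30,000 is lower than this naive 1/8 estimate.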

The exact inner workings of the Kinect One have never been officially published, but from what has leaked we know it’s a time-of-flight cam: it measures the time it takes light to travel between object and camera. How they manage that with such low-budget electronics is some kind of miracle. I’ve seen and tested even more expensive cameras “with industrial-strength depth bla bla”,
but most of them are not as good as the Kinect One. (PrimeSense / Apple don’t have anything that comes close to it.)
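The time-of-flight principle itself is just distance = speed_of_light × round_trip_time / 2; the miracle is the timing precision it demands. A small numeric sketch:

```python
# Time-of-flight depth: distance = speed_of_light * round_trip_time / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_distance(roundtrip_seconds):
    return C * roundtrip_seconds / 2.0

def roundtrip_time(distance_m):
    return 2.0 * distance_m / C

# Light bounced off an object 1.5 m away returns after ~10 nanoseconds...
print(roundtrip_time(1.5) * 1e9)    # ~10.0 ns
# ...and resolving 1 cm of depth means resolving ~67 picoseconds of time.
print(roundtrip_time(0.01) * 1e12)  # ~66.7 ps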

It’s therefore not strange that if you see some university 3D project involving a robot, they use the Kinect One.
I myself have coded various programs for it (and other 3D cameras) in robotics, but that Kinect One, it’s powerful, I tell ya.

I have been experimenting with the iPhone X since day one of its release. Basically, the TrueDepth 3D front camera works for Face ID and for Face AR, the way Apple currently allows it for applications like Animoji talking-head animation.

It can track and follow facial expressions 60 times per second. It is similar to the Kinect, and probably also the Structure Sensor, but what really works perfectly is facial animation: it provides easy face-blendshape application based on the template provided by Apple. So a Face AR app can easily be created with ARKit (iOS).
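Under the hood, blendshape animation is just a linear combination: each tracked coefficient in [0, 1] weights the delta between an expression target and the neutral face. A minimal sketch with toy data (the two-vertex mesh and target are made up for illustration; ARKit’s real coefficients use keys like jawOpen and mouthSmileLeft):

```python
import numpy as np

def apply_blendshapes(neutral, targets, weights):
    """neutral: (N, 3) vertices; targets: dict name -> (N, 3) vertices;
    weights: dict name -> coefficient in [0, 1], as the tracker reports."""
    result = neutral.copy()
    for name, w in weights.items():
        result += w * (targets[name] - neutral)
    return result

# Toy 2-vertex "face" with one open-jaw target shape.
neutral = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
targets = {"jawOpen": np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0]])}

# A coefficient of 0.5 moves vertex 0 halfway toward the target.
posed = apply_blendshapes(neutral, targets, {"jawOpen": 0.5})
print(posed)  # vertex 0 becomes [0, -0.5, 0]; vertex 1 is unchanged
```

With ~60 coefficients driving ~60 target shapes, this one loop is essentially all a playback rig needs.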

I have a WIP project using a simple app I made, with an OSC module, to create something like this:


Holy s**t. That is amazing. This is awesome. Don’t make me get an iPhone X just because of this :smiley:

Today I am trying the setup using Elisha Hung’s iPhone app, whose source code he released on GitHub. (I was kind of tinkering with the same idea, but Elisha’s coding experience and results seem to be what I was trying to achieve.)

This app already works great and makes recording and scanning faces and facial expressions with the iPhone X a streamlined process:

  1. Quick grab of the person’s face mesh -> you will get a coarse 1,220-point mesh of the person’s face. This is really good as a wrapper if you happen to have a higher-resolution head scan of the person. Remember, this only scans the face, with maybe eye tracking as well.
  2. Hitting record captures the facial expression as fast as it can, maxing out at 60 fps. 30 fps is usually good enough.
  3. You will get some easy-to-parse data consisting of: the head orientation matrix, camera (iPhone) rotation and position, raw data for the 1,220 points, plus some 60+ blendshape coefficients, as documented and explained in the Apple developer guide.
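The per-frame record described above maps naturally onto a small data structure. This layout is only my guess from the description; the field names and shapes are mine, not the app’s actual file format:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class FaceFrame:
    head_transform: np.ndarray   # 4x4 head orientation/position matrix
    camera_position: np.ndarray  # (3,) iPhone position
    camera_rotation: np.ndarray  # (3,) iPhone rotation (e.g. Euler angles)
    vertices: np.ndarray         # (1220, 3) raw face-mesh points
    blendshapes: dict = field(default_factory=dict)  # name -> coeff in [0, 1]

# A neutral dummy frame with one tracked coefficient.
frame = FaceFrame(
    head_transform=np.eye(4),
    camera_position=np.zeros(3),
    camera_rotation=np.zeros(3),
    vertices=np.zeros((1220, 3)),
    blendshapes={"jawOpen": 0.42},
)
print(frame.vertices.shape)  # (1220, 3)
```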

If I am not mistaken, the app also provides a real-time streaming mode, but I have not checked that one out.
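On the streaming side, OSC messages (as used by the OSC module mentioned above) are simple enough to build by hand. A minimal OSC 1.0 encoder for a single float value; the address /face/jawOpen is invented for illustration, and in practice the python-osc package is the easier route:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a multiple of 4 bytes (OSC 1.0 rule)."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_float_message(address: str, value: float) -> bytes:
    """Padded address + padded ',f' type tag + big-endian float32."""
    return osc_pad(address.encode()) + osc_pad(b",f") + struct.pack(">f", value)

msg = osc_float_message("/face/jawOpen", 0.42)
print(len(msg) % 4)  # 0 -- OSC messages are always 4-byte aligned
```

Sending one of these per blendshape per frame is then a single UDP sendto per message.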

Basically from this experiment, it seems like Apple made a very good tool and standard for Facial Augmentation and Animation.

Mmm, I’ve played around with a lot of different 3D tracking stuff, from the Leap Motion to the Intel RealSense and the Kinect, but the stuff in the iPhone X – no matter whether it’s the hardware, the software, or the mix – I’ve never seen anything come even close to the tracking speed and quality… This is friggin ridiculous, and if you’re into any type of character animation, there might be a very good case for buying one just to use for that…

Not sure if Face ID can be used as a “common 3D scanner” out of the box, but one of the 3D scanner manufacturers says it’s possible to turn it into a 3D scanner with third-party apps.

Apple said that you can’t get any real facial features from the scanning, just which points move to make those facial movements. Otherwise it would intrude on privacy and probably break some laws.