Apple has been working on an advanced depth mapping engine. In 2019 the company was granted a patent covering methods and devices for the projection and capture of optical radiation. The patented setup consists of a transmitter that emits a beam and a scanner designed to sweep that beam across a predefined range, and it could be used in applications like gaming.
Apple continued working on the technology, and today the USPTO published a new patent titled “Calibration of a depth-sensing array using color image data.” In 2013 Apple acquired PrimeSense, an Israeli 3D sensor company. Apple used PrimeSense's expertise to build Face ID, and that team of engineers is now working on an improved depth mapping system.
Consumer trends point towards an increasing need for real-time three-dimensional imagers. These devices, also known as depth sensors, measure distance by sensing the depth of a target scene. In other words, a transmitter illuminates the target scene with an optical beam, and the setup then analyzes the reflected beam and uses that data to determine distance.
Typically, real-time 3D imagers project an array of pulsed optical beams onto a target scene. Distance is then calculated by measuring the round-trip time of each pulse, a technique known as time-of-flight (ToF). Apple’s patent describes a first plurality of emitters, or radiation sources, capable of directing pulsed beams of optical radiation at a target scene, and a second plurality of sensing elements whose output signals indicate the respective times of incidence of photons on those elements.
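The time-of-flight principle described above reduces to a simple relationship: the pulse travels to the scene and back, so the one-way distance is half the speed of light multiplied by the elapsed time. A minimal sketch in Python (the function name and sample timing are illustrative, not taken from Apple's patent):

```python
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_distance(round_trip_time_s: float) -> float:
    """Distance to the target, given a pulse's round-trip time in seconds.

    The beam covers the emitter-to-target path twice (out and back),
    so the one-way distance is (speed of light x time) / 2.
    """
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A pulse returning after roughly 6.67 nanoseconds implies a target
# about one metre away.
print(round(tof_distance(6.67e-9), 3))  # → 1.0
```

The nanosecond-scale timings here illustrate why ToF sensors require very fast, precise photon-arrival measurements: at one metre, the entire round trip takes under seven nanoseconds.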
The sensing device receives the signals from the array, identifies which sensing elements the reflected pulses of optical radiation fall on and the corresponding regions of the target scene, and then computes depth coordinates based on those regions and the measured times of incidence.
Applications of Depth Mapping on iOS
Currently, iOS devices with a dual rear-facing camera or a TrueDepth front-facing camera can record depth information. The resulting depth map drives image-processing effects that treat the foreground and background of an image differently, such as the Portrait mode on iOS.
[via USPTO]