
How Oculus Link works: the labor behind VR technology

Attempts to immerse users in the visual fantasy of electronic devices have not ceased since the TV screen appeared. Not so many years ago, 3D television arrived as well. Today, with the Oculus Quest headset and the Oculus Link software, scenes have become completely enveloping.

VR technology was born out of this need: however realistic the images on a screen became, everything was still confined to that rectangular space. With these advances, we have opened the doors to step into fantasy worlds with our own two feet.

So, VR technology lets us live among and interact with fictional characters with surprising realism. But it has not been an easy road. Let's look at the technological challenges involved and what has been done to overcome them. First, let's see what the Oculus Quest headset is.

The Oculus Quest viewer and the Oculus Link software

Let's talk about the devices that make VR technology possible. The Oculus Quest is a headset, visor, or pair of virtual reality glasses developed for video games by Oculus VR. The device was released on May 21, 2019 and comes in two versions: one with 64 GB of storage and another with 128 GB. It weighs about 571 grams.

The Oculus Quest versus the Oculus Rift

Unlike its predecessor, the Oculus Rift, the Quest is a wireless device. This gives us a freedom of movement that is almost impossible to achieve while tethered to a PC by a cable. Its built-in battery lasts about two and a half hours.

Oculus Link, on the other hand, is the software that connects the headset to our PC. We need this connection because the computer is where the virtual reality games we want to enjoy actually run.

Oculus Link has new built-in systems that allow us to live an experience as immersive as the real-world scenes we experience every day. But how does it work?

The operation of Oculus Link

One of the biggest challenges in developing the Oculus Link software is achieving ideal latency times. Latency is the response time of a device: the interval from when it receives a command until it presents the result on the screen. Achieving ideal latency has been neither simple nor easy.
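As a rough sketch, latency can be modeled as the sum of every stage in the pipeline between the user's action and the photons reaching the eye. The stage names and timings below are invented for illustration only, not Oculus measurements:

```python
# A toy model of motion-to-photon latency: the delay from a user's
# movement to the moment the updated image reaches the eye.
# All stage timings here are illustrative assumptions.

PIPELINE_MS = {
    "sensor_sampling": 2.0,   # read head-tracking sensors
    "game_simulation": 4.0,   # update the game state
    "rendering": 6.0,         # draw the frame on the GPU
    "encode_transfer": 5.0,   # compress and send over the cable
    "decode_display": 4.0,    # decompress and scan out to the panel
}

def motion_to_photon_ms(stages):
    """Total latency is simply the sum of every pipeline stage."""
    return sum(stages.values())

print(motion_to_photon_ms(PIPELINE_MS))  # 21.0
```

Shaving total latency means shaving every one of these stages, which is why the problem touches the whole system rather than a single component.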

Latency, a big challenge for the Oculus Link

A device with high latency will struggle to produce the rapid succession of frames needed to form a fluid scene. Instead of a realistic-looking scene, we will see undesirable judder and tearing as a result of the slowed transition between frames. For that reason, latency is a critical concern in VR.

And when the number of pixels rises to several million, which is normal in modern video games, the latency problem becomes more acute. This is due to the large amount of information that must be encoded, sent, and decoded to represent each frame.
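Some back-of-the-envelope arithmetic makes the scale of the problem concrete. The resolution, refresh rate, and color depth below are assumptions chosen for illustration (1440×1600 per eye at 72 Hz with 24-bit color), not official transfer-format figures:

```python
# Raw, uncompressed bandwidth needed to ship frames over a cable.
# The numbers are illustrative assumptions, not Oculus specifications.

def raw_bandwidth_gbps(width, height, bits_per_pixel, fps):
    """Uncompressed video bandwidth in gigabits per second."""
    bits_per_frame = width * height * bits_per_pixel
    return bits_per_frame * fps / 1e9

# A hypothetical 1440x1600-per-eye display at 72 Hz, 24-bit color:
per_eye = raw_bandwidth_gbps(1440, 1600, 24, 72)
print(round(per_eye * 2, 2))  # 7.96 Gbit/s for both eyes
```

Numbers like these far exceed what a typical USB link can carry uncompressed, which is why encoding and decoding, and the latency they add, are unavoidable.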

Moreover, in the three-dimensional rendering of virtual reality, we cannot settle for low resolutions.

The problem of latency in virtual reality technology

Even for a 2D TV screen, designed to occupy only a tenth of the field of view, keeping latency low is a challenge. Now imagine bringing the screen to the distance of the Oculus Quest display, just a few centimeters from the user's eyes.

Logically, a problem arises. If we kept the same number of pixels at this distance, their relative size would increase: we would see each individual pixel, everything would blur, and the resolution would deteriorate greatly. But simply increasing the number of pixels to cover the entire visual field is not the solution either, because it would increase latency.

Participating physically in the scene

There is an additional problem that directly affects the perception of latency. In a conventional 2D video game, we control an avatar who strikes the enemy with a single sword swing. Although there is some latency, everything happens in perfect synchrony: the slash, the cut, the takedown, all of it, since everything is affected by the same latency.

But in virtual reality, we are an integral part of the game; our own hands are in the scene. The problem is that our body does not suffer from system latency. In other words, our brain needs no extra time to process and respond to the actions of the game.

For that reason, if there is even minimal latency, we will have finished the swing of our sword before we see the sword reach the creature's body. That latency disrupts the maneuver and makes the game unattractive and unplayable.

The result would be a bad experience that keeps us from testing our reflexes, since we would always be waiting for the image. So in virtual reality, the latency problem takes on a greater significance. How, then, was this challenge overcome in the Oculus Link software?

The possible solution: Anticipating latency

We can have more megabytes, more power, more efficiency, but we can never bring latency down to zero. In a sense, latency is the inertia of the virtual reality world.

However, with algorithms capable of anticipation, latency does not have to be noticed at all. With an understanding of the game's dynamics and a correctly anticipated runtime, the world never has to lag behind the player.
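The core idea of anticipation can be sketched very simply: instead of rendering the scene for where the head is now, render it for where the head will be when the frame actually reaches the display. The constant-velocity extrapolation below is the simplest possible predictor, chosen as an assumption for illustration; real tracking systems use much richer motion models:

```python
# A minimal sketch of latency anticipation via pose extrapolation.
# Constant angular velocity is an illustrative assumption.

def predict_angle(current_deg, angular_velocity_dps, latency_s):
    """Extrapolate head yaw forward by the expected pipeline latency."""
    return current_deg + angular_velocity_dps * latency_s

# Head at 10 degrees, turning at 120 deg/s, with 20 ms of latency:
print(predict_angle(10.0, 120.0, 0.020))  # roughly 12.4
```

If the prediction is accurate, the frame lands in front of the eye already matching the head's true orientation, and the latency becomes invisible.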

Thus, to make the sensation as real as possible, Oculus Link needs to understand the entire operating system, the audio, the dynamics of objects, the patterns of light and shadow, and even the workings of human vision. It is all part of the great amount of work the Oculus Link software must do.

The process of channeling information

Oculus Link transmits input and output information between the Oculus PC Runtime on the computer and the application in the headset. With conventional screens, a back buffer must be generated so that the eye does not see the transition between one frame and the next one that replaces it.

With Oculus Link it is not like that. Instead, each time a frame is to be generated, the program encodes it to place the new image directly in front of the eye. Amazing, isn't it? The more we learn about the labor behind virtual reality technology, the more astonishing it becomes.

Keep in mind that we are talking about very small distances between eye and screen, so any angular movement of the eye produces a significant displacement of the pupil. This offset is taken into account to correct the position of the new frame.

The headset also sends a quantity of data: the position of the headset itself, the input state, and the vertical synchronization times. These values are not anticipated but extrapolated to match the scene. As we can infer, a high data flow is required to carry out all of this transmission work.

Virtual reality composition process

The virtual reality compositor does its work in two parts: one on the headset and one on the PC. That way, it can manage the runtime on both devices and correct latency more effectively.

Challenges in prediction accuracy

The VR runtime must therefore accurately predict the latency produced, especially by the channeling of data. If it does not, the application will show frames at the wrong time and the virtual world will be out of phase with the player's real movements.

Because the transmission between the headset and the PC is bidirectional, and it too has latency, there is an additional problem: neither side of the system can tell which of the frames it is receiving should match the ones it is sending.

Solution in prediction accuracy

Consequently, the Oculus technical team created a system that splices the information from both sides together perfectly. From the PC's point of view, the delay in the headset's information is canceled out by using longer composition stages.

And from the headset's point of view, the unknown route is presented as a dynamic application composed of several physical locations at once.

Display time

An important way in which Oculus Link gets ahead of latency is by providing the required display time for each frame. This way, the application can prepare a particular frame for a variable display time; it does not have to prepare several alternative frames for a predetermined time.

If applications could not operate this way, they would have to prepare many frames and then pick the appropriate one depending on an undetermined action.
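The mechanism can be sketched as follows: the runtime hands the application one target timestamp per frame, and the application simulates its state exactly for that moment. The function names and the linear-motion model below are assumptions for illustration only:

```python
# Sketch of rendering against a predicted display time: one target
# timestamp per frame, instead of several pre-rendered candidates.
# All names and the motion model are illustrative assumptions.

def simulate(position, velocity, t):
    """State of a moving object at absolute time t (linear motion)."""
    return position + velocity * t

def render_frame(predicted_display_time):
    """Produce the one frame that matches the predicted timestamp."""
    # An object starting at x=0 moving at 5 units/s:
    x = simulate(0.0, 5.0, predicted_display_time)
    return f"frame showing object at x={x:.2f}"

print(render_frame(0.25))  # frame showing object at x=1.25
```

One frame per predicted timestamp is far cheaper than rendering several alternatives and discarding all but one.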

Erroneous predictions

Without a doubt, getting ahead of latency can have its disadvantages. For example, if a real frame does not align with the predicted frame, the prediction will be wrong. Of course, one or two consecutive wrong predictions will go unnoticed; in those cases there is no real problem.

The problem appears, however, when many wrong predictions occur in a row. This is quite likely during translational head movements while observing close targets. In that case, a double-image sensation, or image vibration, is created: a blurred perception that is uncomfortable for the eyes.

The solution for wrong predictions in Oculus Link

Oculus Link solves this image alteration with asynchronous timewarp (ATW). ATW is a technique that creates artificial display frames to serve as intermediates when the system does not receive the next frame in time.

The asynchronous timewarp takes the movement of the head into account to correct the position of the image. This prevents several frames from being shown at the same time.

In this way, visual quality is maintained even if the frame rate drops a little. In addition, this mechanism ensures that the next frame is ready for the next display period. Problem solved.
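A highly simplified, one-dimensional sketch can show the idea behind ATW: when a fresh frame misses its deadline, the compositor re-displays the last frame, shifted to compensate for the head rotation that has happened since it was rendered. The list representation of a frame and the pixel density below are assumptions made purely for illustration:

```python
# A toy 1-D model of asynchronous timewarp: shift a stale frame to
# match the head's new orientation. All values are illustrative.

PIXELS_PER_DEGREE = 10  # assumed display density

def timewarp(frame, yaw_at_render_deg, yaw_now_deg):
    """Reproject a stale frame for the current head yaw."""
    shift = int(round((yaw_now_deg - yaw_at_render_deg) * PIXELS_PER_DEGREE))
    if shift >= 0:
        # Head turned right: the world slides left; pad the revealed edge.
        return frame[shift:] + [0] * shift
    # Head turned left: the world slides right.
    return [0] * (-shift) + frame[:shift]

# A 10-pixel 'frame' rendered at yaw 0 degrees, shown at 0.2 degrees:
print(timewarp(list(range(10)), 0.0, 0.2))
```

A pure shift is only correct for rotation; as the article notes later, translation and animated objects need more information than a finished frame contains, which is where ATW's limits show.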

Adaptive Compositor Kickoff

Another mechanism, called Adaptive Compositor Kickoff (ACK), prevents the compositor from dropping frames and thus reduces latency. The ACK ensures that ATW starts preparing the next display frame on time.

The challenge of visual quality

Dealing with latency in image transmission is a challenge. The more bits in the images, the greater the work and the encoding and decoding time involved in transmission. Of course, the resolution of the image could be reduced, but the drop in the quality of the experience would be noticeable.

Solving the problem with distortion lenses

Hence, the solution to this problem lies in lenses with radial barrel distortion. Such a lens expands the image from the center, so that the pixels closest to the edge of the frame are compressed while the central ones expand.

In this way, the center is further enhanced while the perceived resolution at the periphery of the frame is blurred. This makes the perception even more similar to that of the human eye, which is why this solution is ideal.

The lenses apply a radial distortion that is proportional to the distance from the center of the frame but independent of the horizontal or vertical axis.
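The property just described, distortion depending only on distance from the center, can be sketched with a simple radial mapping. The coefficient and the formula below are illustrative assumptions, not the actual optics of the Quest lenses:

```python
# A sketch of purely radial distortion: each point moves along the
# line from the image center by an amount that depends only on its
# radius r, never on the x or y axis separately. K is assumed.
import math

K = 0.15  # assumed distortion coefficient

def distort(x, y):
    """Map an undistorted point to its radially distorted position."""
    r = math.hypot(x, y)        # distance from the image center
    scale = 1.0 + K * r * r     # grows with radius only
    return (x * scale, y * scale)

# Two points at the same radius but on different axes are displaced
# by the same factor, confirming the distortion is purely radial:
print(distort(1.0, 0.0))
print(distort(0.0, 1.0))
```

Because the displacement depends only on the radius, rotating a point around the center never changes how strongly it is distorted.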

Axis-aligned distortion transfer (AADT)

Alongside the lenses, an axis-aligned distortion transfer (AADT) is applied digitally. The AADT applies a compression similar to that of the fovea of the eye; the fovea is the part of the retina responsible for central vision, which is the most detailed and acute.

The AADT uses the fixed parts of the visual field to determine the compression axes of the image. In addition, it eliminates wasted pixels in the corners and decreases the peripheral resolution by 70%, making it more blurred.

With this procedure, the AADT helps reduce the bandwidth needed for transmission, so latency improves dramatically.
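Some rough arithmetic shows why this matters for bandwidth. The 40% "central" share of the frame below is an invented illustrative split; only the 70% peripheral reduction comes from the text:

```python
# How much of the original pixel data survives if the periphery keeps
# only 30% of its pixels (the 70% reduction mentioned above)?
# The 40/60 central/peripheral split is an illustrative assumption.

def compressed_fraction(central_share, peripheral_keep):
    """Fraction of the original pixels remaining after compression."""
    peripheral_share = 1.0 - central_share
    return central_share + peripheral_share * peripheral_keep

# 40% of the frame at full resolution, the rest keeping 30% of pixels:
print(round(compressed_fraction(0.40, 0.30), 2))  # 0.58
```

Under these assumed numbers, barely more than half the original data needs to cross the cable, and every saved bit is time saved encoding and decoding.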

The human eye is not very sharp at detecting color in the periphery, but it is good at detecting movement. If the images were not processed by the barrel lens and the AADT system, they would show a flickering contour. This is an additional advantage of attenuating detail at the periphery of the field of vision.

There is still a long way to go

Representing all the complexity of light and movement that we perceive in the real world is a long-term goal. Even so, the current experience with an Oculus Quest is fascinating and overwhelming. There are still details and flaws being worked on to bring this experience closer to perfection.


For example, animated objects cause greater difficulty during ATW corrections, because ATW warps objects without knowing the full complexity of their movement.

The fault in the image will not be noticed if the object is small. But if the animated image occupies a large part of the screen, the defect in its silhouette will be perceived.

The reflections

The calculations for the reflections the human eye should perceive are also extraordinarily complex. The computation of the light reflected by each object in the scene changes with eye movements and with the rotation and translation of the head, not to mention that the calculations are independent for each eye.

The challenge of interleaving frames

ATW must execute in very short times. Since the headset screens refresh at 90 Hz, frames follow one another every 11 milliseconds, which means ATW must generate its auxiliary frames in much less time.

The execution of ATW must be able to preempt queued rendering commands and run reliably before the video card generates a new frame. According to experts, ATW's intervention should be limited to 2 milliseconds.

In that period, it must make corrections for rotations and for head or eye translations. It must also adjust position, the animation of volumetric objects, reflections, parallax mapping, and dynamic reliefs.

All this work has to be done within the 11 milliseconds between successive frames, perhaps in just 2 milliseconds. Otherwise, the last frame will be displayed again, producing the undesirable double-image effect.
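The arithmetic behind those figures is straightforward, using only the 90 Hz refresh rate and the 2 ms ATW budget stated above:

```python
# Frame-budget arithmetic from the figures in the text: at 90 Hz a
# new frame is due every ~11.1 ms, of which ATW gets only ~2 ms.

refresh_hz = 90
frame_budget_ms = 1000 / refresh_hz
atw_budget_ms = 2.0

print(round(frame_budget_ms, 1))                  # 11.1
print(round(frame_budget_ms - atw_budget_ms, 1))  # 9.1 ms left for the app
```

Missing that window even once means re-showing the previous frame, which is exactly the double-image effect the text describes.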

The Oculus Link, an achievement in the field of virtual reality

Bringing virtual reality technology to where it is today has required hard work and deep knowledge of dynamics, optics, and much more. Ingenuity has been pushed harder than ever for the Oculus Link software to do its job well.

But when we take an Oculus Quest in our hands, adjust it, and set that world of virtual reality around us, we will know that the price paid was worth it.

