2019 outlook: how long until VRMMO multiplayer online games become reality?

VRMMO is short for Virtual Reality Massively Multiplayer Online. In essence, it refers to high-tech equipment that either interfaces with the human sensory nerves or acts directly on the sense organs to make the game fully immersive, so that the game produces a genuine sense of reality. Current VR technology can already influence vision and hearing through a headset; the more complicated senses of touch, smell and taste are still waiting for scientists to explore. For a VRMMO, however, the most important requirements are "massive" and "multiplayer online".

So how far is that world from us?

First of all, since a VRMMO is still an online game, it needs to communicate with a game server, so the first thing to consider is the game's content information. The amount of information a VRMMO actually exchanges is not that large, roughly the same as an ordinary online game, and modern large-scale games have already solved the problem of huge numbers of players being online simultaneously. Supporting 10,000 concurrent players in a VRMMO is therefore not the problem.

Secondly, the world of a VRMMO must be convincingly real, and not only in its visuals. It has to reproduce the five senses that humans rely on, vision, hearing, smell, taste and touch, and it also needs to simulate physical forces (force feedback), so that the game world feels like a real world. For the game server, the pressure is therefore enormous.

So what kind of server would a VRMMO need? As of 2015, Tianhe-2, the supercomputer system developed by China's National University of Defense Technology, topped the world rankings with a peak speed of 54.9 petaflops (5.49 × 10^16 operations per second) and a sustained speed of 33.9 petaflops of double-precision floating-point operations, making it the world's fastest supercomputer. Tianhe-2 far outperforms the "Earth Simulator", so with current technology it is not impossible to build a world for players to roam in.

Here is the hardest part: getting the terminal to send information to the brain. Two professors at the Weizmann Institute in Israel have long been studying a mathematical formula for transmitting taste. It is only a small step, but for the VRMMO effort as a whole it is a big step forward.

So, in theory, apart from transmitting information directly to the brain, most of the pieces can already be built. A VRMMO is actually not that far from us.

A new year has arrived. 2016 was generally a good year: the persistence and accumulation of the previous two years began to bear fruit, and in 2017 we will stay true to our original intention and keep moving forward. From making PC online games, to console games in 2014, to single-player VR games in 2015, to multiplayer VR games in 2016, we have kept taking a non-mainstream road. Looking back now, though, we took our own path and ended up out in front, and as a technician I still feel a sense of accomplishment. Look at most VR games today: in practice they are an art-built scene with one programmed interaction, and then they ship. It is no wonder the VR craze began to cool in the second half of 2016; there was too much hot air. We were the same: when we started doing VR we thought it was great fun, then began to feel that VR games had no technical depth, and only after falling into the pits of multiplayer VR did we realize that the technical depth of single-player and multiplayer VR games is not on the same order of magnitude at all. Only by patiently laying the foundation can you build a tall building. Below are some of the problems and challenges we ran into; we will not go into the solutions and implementations in detail.

As a multiplayer game, an Avatar is a must, yet many single-player VR games only show two hands, and some multiplayer VR games settle for one head plus two hands, arguing that "believability" matters more than "realism". The idea is correct, but a head and two hands alone are too limited in expressiveness. The root cause is that the VR hardware only provides the Transforms of the headset and the two controllers, so there is no way to perfectly simulate the movement of the rest of the body's bones. Some will say you can use artist-authored animations like other online games, but then you lose exactly that believability: face-to-face communication relies on body language, and the Transforms of those three tracked points already convey a lot of information. So to make the player's Avatar lifelike, their body movements must be synchronized onto the Avatar. Since there is no joint information for the rest of the body, this can only be simulated with full-body IK; the better middleware at the moment is FinalIK and IKinema. Head IK rotation is the easiest, similar to AimIK/LookAt in traditional games. Arm IK is a bit harder: you have to avoid intersecting the body while keeping the elbow and shoulder angles within their constraint ranges, otherwise the arm looks broken. Lower-body IK is the hardest, because the legs must follow the upper body through bending, squatting, turning and stepping; jumping and bending are not perfectly simulated at present.
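
To make the arm-IK problem concrete, here is a minimal sketch of the two-bone (shoulder-elbow) case in 2D, solved with the law of cosines. It is not how FinalIK or IKinema work internally; real full-body IK also handles 3D elbow swivel, joint limits and body avoidance. All names and numbers here are illustrative.

```python
import math

def two_bone_ik(shoulder, target, upper_len, lower_len):
    """Toy 2-D two-bone IK: given a shoulder position and a wrist target,
    return (shoulder_angle, elbow_bend) using the law of cosines."""
    dx, dy = target[0] - shoulder[0], target[1] - shoulder[1]
    dist = math.hypot(dx, dy)
    # Clamp the target to the reachable range so the arm never "breaks".
    dist = max(abs(upper_len - lower_len) + 1e-6,
               min(dist, upper_len + lower_len - 1e-6))
    # Elbow bend relative to a fully straight arm.
    cos_elbow = (upper_len**2 + lower_len**2 - dist**2) / (2 * upper_len * lower_len)
    elbow_bend = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder angle = direction to the target minus an offset toward the elbow.
    cos_offset = (upper_len**2 + dist**2 - lower_len**2) / (2 * upper_len * dist)
    shoulder_angle = math.atan2(dy, dx) - math.acos(max(-1.0, min(1.0, cos_offset)))
    return shoulder_angle, elbow_bend

# Example: shoulder at the origin, wrist target at (0.4, 0.3), 0.3 m per bone.
print(two_bone_ik((0.0, 0.0), (0.4, 0.3), 0.3, 0.3))
```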

When it comes to locomotion in VR games, there is already a consensus that it can cause motion sickness. So for small-scale movement the usual approach is physical movement within the RoomScale play space; for long distances the common method is teleportation. There are also vehicle-style approaches, but those are tied to specific gameplay and are not discussed here. RoomScale movement feels great for the player doing it, but it is a problem for how other players' Avatars appear. Logically their root Transform has not changed; only the head and hand Transforms have. If the Avatar has legs, the legs have to follow the head's movement and turning, and the effect is very strange, because a normal person's lower body drives the upper body when moving or turning, while in VR the head and hands drive the upper body and the upper body then drives the lower body, so the motion looks uncoordinated. In 2017 many "HomeScale" headsets using SLAM positioning similar to HoloLens should reach the market, and at that point doing the legs properly will be unavoidable.
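
As an illustration of the "head drives the lower body" problem, here is a hypothetical "lazy follow" heuristic for the hips: the lower body stays planted until the head has twisted past a deadzone, then turns to catch up at a fixed speed. The thresholds are made-up values, not anything a particular engine prescribes.

```python
import math

def follow_body_yaw(body_yaw, head_yaw, dt,
                    deadzone_deg=35.0, turn_speed_deg=120.0):
    """Lazy lower-body follow: the hips only start turning toward the head
    once it has twisted past a deadzone, then catch up at a fixed angular
    speed. Angles are in degrees."""
    # Shortest signed difference between head and body yaw.
    diff = (head_yaw - body_yaw + 180.0) % 360.0 - 180.0
    if abs(diff) <= deadzone_deg:
        return body_yaw                       # inside the deadzone: legs stay planted
    step = math.copysign(turn_speed_deg * dt, diff)
    # Don't overshoot: stop as soon as the body is back inside the deadzone.
    if abs(step) > abs(diff) - deadzone_deg:
        step = math.copysign(abs(diff) - deadzone_deg, diff)
    return body_yaw + step

# Example: head twisted 80 degrees to the right, simulate a few 90 FPS frames.
body, head = 0.0, 80.0
for _ in range(5):
    body = follow_body_yaw(body, head, dt=1 / 90)
    print(round(body, 2))
```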

Besides body language, gestures are a very expressive means of communication. Oculus Touch's capacitive touch sensing can be used to infer the movement of the thumb, index and middle fingers, and combinations of these can produce a wide variety of common gestures which, together with body language, greatly enhance the realism of face-to-face communication. That part has no real technical difficulty, an animation state machine can do it; what is hard is grabbing objects. If there are only a few fixed objects, you can author a separate grabbing animation per object, but if you want either hand to grab any part of any object, preset animations cannot meet the demand. VirtualGrasp has built this technology: it can make the fingers conform to the surface of different objects at different grab points and achieve a more natural gripping action.
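
A gesture layer of this kind can be as simple as a lookup from the controller's touch signals to a named hand pose that feeds the animation state machine. The sketch below assumes three boolean inputs (thumb resting on a surface, index on the trigger, grip held); the pose names and combinations are invented for illustration.

```python
def resolve_hand_pose(thumb_touching, index_touching, middle_grip):
    """Toy gesture table: map three boolean touch/grip signals from a
    Touch-style controller to a named hand pose for the Avatar."""
    table = {
        (True,  True,  True):  "fist",
        (False, True,  True):  "thumbs_up",
        (True,  False, True):  "point",
        (True,  True,  False): "pinch_ready",
        (False, False, True):  "gun_fingers",
        (False, False, False): "open_hand",
    }
    return table.get((thumb_touching, index_touching, middle_grip), "relaxed")

print(resolve_hand_pose(True, False, True))   # -> "point"
```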

With multiple people comes communication, and nobody in a VR game is going to type on a keyboard, so voice chat is a basic feature that any multiplayer VR game must have. Voice chat in VR, however, is not just putting people in a room the way YY does: the sound needs to be spatialized into 3D audio so that it seems to come from the other player's Avatar's mouth; a flat 2D voice feels badly out of place in VR. Going further, you need to consider sound reflection and occlusion; no shipping product has a perfect solution for this yet, and everyone just applies a reverb effect. The Oculus Audio SDK has sound spatialization technology, NVIDIA has VRWorks Audio, and AMD has TrueAudio, because everyone has realized that in VR sound is no less important than the picture.
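
For intuition, here is a very rough mono-to-stereo spatialization sketch: inverse-distance attenuation plus a constant-power pan from the source's bearing relative to the listener. Real middleware such as the Oculus Audio SDK adds HRTF filtering, reflections and occlusion on top of this; none of the constants below come from any of those SDKs.

```python
import math

def spatialize_gain(listener_pos, listener_yaw, source_pos,
                    ref_dist=1.0, max_dist=20.0):
    """Rough mono->stereo gains: inverse-distance attenuation plus a
    constant-power left/right pan from the source's relative bearing.
    Positions are 2-D (x, z); yaw is in radians."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = max(ref_dist, math.hypot(dx, dz))
    if dist > max_dist:
        return 0.0, 0.0
    attenuation = ref_dist / dist
    # Bearing of the source relative to where the listener is facing.
    bearing = math.atan2(dx, dz) - listener_yaw
    pan = math.sin(bearing)                  # -1 = full left, +1 = full right
    left = attenuation * math.cos((pan + 1) * math.pi / 4)
    right = attenuation * math.sin((pan + 1) * math.pi / 4)
    return left, right

# Speaker 2 m away, slightly to the listener's right.
print(spatialize_gain((0.0, 0.0), 0.0, (1.0, 2.0)))
```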

OK, so there is a voice when you speak; shouldn't the Avatar's mouth move too? This is where LipSync comes in. LipSync is not a new technology and is widely used in AAA games, but there it is generated offline, whereas VR voice chat needs mouth shapes generated in real time. Recognizing single syllables is relatively simple, and that is what Oculus's LipSync does; the result is passable. Going further, you need to consider how neighboring sounds relate and produce continuous transitions between mouth shapes rather than simple interpolation; the middleware from Speech Graphics currently does this best. The hardest part is emotion recognition: the mouth can move, but the face stays stiff and the expression still looks fake. Beyond that, speech recognition is also worth adding; after all, for interaction and text input in VR, nothing is as easy as simply speaking.
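
The crudest real-time approach, well short of what Oculus LipSync or Speech Graphics do, is to drive a single "jaw open" blend value from the loudness of the latest audio frame. The sketch below does exactly that, with a fast attack and a slow release so the mouth does not flicker; the scaling factors are arbitrary assumptions.

```python
import math

def jaw_open_from_audio(samples, prev_open, attack=0.6, release=0.15):
    """Drive a single 'jaw open' blend value from the RMS loudness of the
    latest audio frame; fast attack, slow release."""
    rms = math.sqrt(sum(s * s for s in samples) / max(1, len(samples)))
    target = min(1.0, rms * 8.0)             # scale loudness into [0, 1]
    blend = attack if target > prev_open else release
    return prev_open + (target - prev_open) * blend

# Fake 10 ms frames at 44.1 kHz: silence, a loud vowel, then silence again.
frames = [[0.0] * 441, [0.25] * 441, [0.0] * 441]
open_amt = 0.0
for frame in frames:
    open_amt = jaw_open_from_audio(frame, open_amt)
    print(round(open_amt, 3))
```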

Having mentioned facial performance: at the most basic level, do you need expressions? A simple cartoon style can get away with swapping a texture, but a realistic style needs a lot of facial bones or exported MorphTargets to express expression changes. Once you adopt a mesh-based expression pipeline the animation artists go crazy; rigging a single face is exhausting, and I don't know whether DCC tools are faster at binding facial bones or at generating MorphTargets, so production cost is a real problem. And once the artists have made a few expression animations, the next question is how to trigger them. The motion controller has only so many buttons, and they can hardly all be used for expressions. A UI? By the time you pick one, the moment to laugh has passed. Trigger them with body movements? They constantly misfire, and bursting into tears at random makes the Avatar look mentally ill.
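
However the triggering problem is solved, at runtime an expression usually boils down to easing a set of MorphTarget weights toward a preset. A toy version, with invented channel names and presets, might look like this:

```python
def blend_expression(current, target_expr, dt, speed=4.0):
    """Ease a few MorphTarget (blend-shape) weights toward a named preset.
    A real rig would expose many more channels than these three."""
    presets = {
        "neutral": {"smile": 0.0, "brow_up": 0.0, "eyes_closed": 0.0},
        "smile":   {"smile": 1.0, "brow_up": 0.3, "eyes_closed": 0.1},
        "cry":     {"smile": 0.0, "brow_up": 0.8, "eyes_closed": 0.6},
    }
    goal = presets[target_expr]
    t = min(1.0, speed * dt)
    return {k: current[k] + (goal[k] - current[k]) * t for k in current}

weights = {"smile": 0.0, "brow_up": 0.0, "eyes_closed": 0.0}
for _ in range(3):                                  # three 90 FPS frames
    weights = blend_expression(weights, "smile", dt=1 / 90)
    print({k: round(v, 3) for k, v in weights.items()})
```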

With mouth shapes changing and expressions changing, why are there still two dead-fish eyes? At this point the eyes need to be controlled too. Random blinking is relatively simple; controlling the gaze direction is hard. Because current mainstream headsets have no eye tracking (FOVE is the only one I have heard of that does), you cannot know where the other player's eyes are actually looking. You can only simulate with heuristics, for example based on sound, on the position of moving objects, or on head orientation, and making the result feel natural takes a great deal of energy and repeated tuning.
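
A gaze heuristic of the kind described, speaker first, then moving objects, then straight ahead, plus smoothing so the eyes do not snap, could be sketched like this (the priorities, positions and speeds are all assumptions):

```python
def pick_gaze_target(default_point, speakers, moving_objects):
    """Pick where the Avatar's eyes should look when there is no eye
    tracking: loudest speaker first, then fastest moving object, otherwise
    a point straight ahead of the head. Positions are 2-D (x, z)."""
    if speakers:
        return max(speakers, key=lambda s: s["loudness"])["pos"]
    if moving_objects:
        return max(moving_objects, key=lambda o: o["speed"])["pos"]
    return default_point

def smooth_gaze(current, target, dt, speed=10.0):
    """Move the gaze point toward the chosen target so the eyes don't snap."""
    t = min(1.0, speed * dt)
    return tuple(c + (g - c) * t for c, g in zip(current, target))

gaze = (0.0, 1.0)
target = pick_gaze_target((0.0, 1.0),
                          speakers=[{"pos": (0.5, 2.0), "loudness": 0.8}],
                          moving_objects=[])
for _ in range(3):
    gaze = smooth_gaze(gaze, target, dt=1 / 90)
    print(tuple(round(v, 3) for v in gaze))
```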

Now the face is expressive and the body moves; shouldn't everything else keep up? When the head turns, does the hair swing? When the body moves, do the clothes flow? That is the pit of physical simulation. Plenty of AAA games already have fairly mature solutions, so I will not say more.
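
For readers who have not fallen into that pit yet, the core of most hair and cloth solvers is position-based or Verlet integration with distance constraints. A minimal single-strand sketch, with no collisions and no damping tuning, looks like this:

```python
def step_hair_chain(points, prev_points, root, dt, gravity=-9.8, seg_len=0.05):
    """Minimal 2-D Verlet chain: integrate each point, pin the root to the
    head, then relax the segment-length constraints a few times."""
    new_pts = []
    for p, q in zip(points, prev_points):
        vx, vy = p[0] - q[0], p[1] - q[1]            # implicit velocity
        new_pts.append((p[0] + vx, p[1] + vy + gravity * dt * dt))
    new_pts[0] = root                                 # pin to the scalp
    for _ in range(4):                                # constraint relaxation
        for i in range(len(new_pts) - 1):
            ax, ay = new_pts[i]
            bx, by = new_pts[i + 1]
            dx, dy = bx - ax, by - ay
            d = (dx * dx + dy * dy) ** 0.5 or 1e-9
            corr = (d - seg_len) / d * 0.5
            new_pts[i] = (ax + dx * corr, ay + dy * corr)
            new_pts[i + 1] = (bx - dx * corr, by - dy * corr)
        new_pts[0] = root
    return new_pts, points

pts = [(0.0, -i * 0.05) for i in range(5)]            # strand hanging straight down
pts, prev = step_hair_chain(pts, pts, root=(0.01, 0.0), dt=1 / 90)
print([(round(x, 3), round(y, 3)) for (x, y) in pts])
```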

The most troublesome thing about physics is not the simulation but the network synchronization. Any VR game that uses two motion controllers has all kinds of physically simulated objects that can be picked up and thrown, and in a multiplayer VR game the movement of those objects has to be synchronized to the other clients. Maybe this second the object is in your hands and the next second someone else grabs it. Moreover, most VR games now run at 90 FPS, and if the simulation of the thing in your hand lags even slightly it is immediately noticeable and ruins the so-called "feel". When there are many physical objects in the scene, frame (lockstep) synchronization has too much latency; state synchronization means the server has to run a physics engine, which is harder, and the amount of data sent is also large, and I am not sure the players' bandwidth can take it. Besides, synchronizing heads and hands plus voice chat already eats a lot of bandwidth, so there is not much left over for physics objects...
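
To put rough numbers on the bandwidth worry, here is a back-of-the-envelope estimate for state synchronization of loose physics props. The 22 bytes per object (position, quantized rotation, id) and the 20 Hz send rate are assumptions for illustration, not measurements:

```python
def state_sync_bandwidth(num_objects, send_rate_hz, bytes_per_state=22):
    """Back-of-the-envelope bandwidth for state sync: every network tick,
    each physics object ships one small state record."""
    payload = num_objects * bytes_per_state * send_rate_hz      # bytes/sec
    return payload * 8 / 1000                                    # kbit/sec

# 50 loose props, sent 20 times a second, on top of head/hand/voice traffic.
print(f"{state_sync_bandwidth(50, 20):.0f} kbit/s")             # ~176 kbit/s
```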


IBM Japan has announced "Sword Art Online: The Beginning", a VRMMO project based on the popular ACG work "Sword Art Online". On February 22nd, IBM Japan posted "The VRMMO that IBM technology turns into reality" on its Twitter account, together with a "Sword Art Online" image and the hashtag #SAO (SAO being the abbreviation of "Sword Art Online"). The project's official website opened shortly afterward and made the name "The Beginning" public. It is a massively multiplayer online game played with VR equipment, with the original author, Reki Kawahara, responsible for the script. Built on the data centers of IBM's cloud service "SoftLayer", the virtual world is realized through high-speed processing of massive amounts of data, and IBM's world-class cognitive computing system "IBM Watson" will also be used in the game. After a 3D scan, players can appear in the game as themselves.

IBM Japan has opened the door for us. I believe that, without having to wait for the fictional year 2025 of the anime, we will see real and even better VRMMOs appear in the Chinese market, leading Chinese VR to further innovation.
