Team 3: developing deep learning models in virtual reality
Deep learning (DL) is a popular subfield of machine learning that has driven significant advances, particularly in supervised learning and especially in computer vision. The process of building and training a DL model, however, is not always intuitive. A popular approach is to inspect the intermediate representations learned by the individual layers, but this is limited by the visualization medium. Recent technical advances in virtual reality (VR) have enabled the development of highly immersive and interactive experiences. In particular, the intuitive control and visualization that VR offers could make for an unparalleled DL model development environment. While the potential of VR as a development environment has been recognized (link), no solutions currently exist for DL model development.
The specific idea is to build a “shelf” of components commonly used in DL, such as convolutional, pooling, and fully connected layers. A “data hose”, depicted as a literal hose in VR, will represent the incoming data stream. The user will grab a layer of interest from the shelf (it will be replenished immediately), which will have two dangling hoses, one at the front and one at the back. After positioning the layer, the user will take its incoming hose and attach it either to the “data hose” or to the output hose of an already placed layer. The user will also be able to summon a small interface to adjust the characteristics of the layer at hand (e.g., set the activation function to ReLU or tanh, or change the convolution kernel size). After designing a network in this way, the user will press a virtual button to run the data through it and start the backpropagation algorithm. For images, the user can watch the intermediate layers during learning with the simple gesture of grabbing and pulling them.
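The shelf-and-hose interaction above could be backed by a simple graph data model that the VR front end manipulates. Below is a minimal sketch in plain Python, assuming no particular DL framework; all class and method names (`Layer`, `DataHose`, `NetworkGraph`, `take_from_shelf`, etc.) are hypothetical illustrations, not part of any existing API.

```python
class DataHose:
    """The incoming data stream; the root every network must attach to."""
    kind = "data"


class Layer:
    """A layer grabbed from the shelf, with one input and one output hose."""
    def __init__(self, kind, **params):
        self.kind = kind        # e.g. "conv", "pool", "fc"
        self.params = params    # e.g. activation="relu", kernel_size=3
        self.upstream = None    # whatever the input hose is attached to


class NetworkGraph:
    def __init__(self):
        self.source = DataHose()
        self.layers = []

    def take_from_shelf(self, kind, **params):
        # The shelf "replenishes" because this is just a factory.
        layer = Layer(kind, **params)
        self.layers.append(layer)
        return layer

    def connect(self, layer, upstream):
        # Attach a layer's input hose to the data hose or another layer.
        layer.upstream = upstream

    def pipeline(self):
        """Walk back from the last layer to the data hose; the result is
        the ordered chain a 'run' button would execute."""
        chain, node = [], self.layers[-1]
        while isinstance(node, Layer):
            chain.append(node)
            node = node.upstream
        if not isinstance(node, DataHose):
            raise ValueError("network is not connected to the data hose")
        return list(reversed(chain))


# Example session: data hose -> conv -> pool -> fc
g = NetworkGraph()
conv = g.take_from_shelf("conv", kernel_size=3, activation="relu")
pool = g.take_from_shelf("pool", size=2)
fc = g.take_from_shelf("fc", units=10, activation="tanh")
g.connect(conv, g.source)
g.connect(pool, conv)
g.connect(fc, pool)
print([layer.kind for layer in g.pipeline()])  # -> ['conv', 'pool', 'fc']
```

Keeping the graph separate from the VR rendering would also let the same structure be exported to a real framework for training once the virtual button is pressed.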