
Projects

ARmobile is a project focusing on mobile augmented reality app development.

 

Right now, the focus is on creating interesting elements for real-world scenes. For instance, visitors can see a Duke University Chapel model with a brief description through "a node between the real world and the virtual world" (i.e., a tracking image).

 

At this first stage of the project, the preferred direction is to deliver customized services for specific organizations; in the short term, likely university tours and hospital systems.

1. Virtual Reality

Immersive Indoor Cycling Project, Summer 2017

The goal of this immersive 3D video project is to provide a game-like riding experience that riders can immerse themselves in during the journey. The writer was hired by an indoor cycling studio as the developer of this immersive video.

During the project, the writer created custom models and textures, designed user interfaces, built several virtual environments for different scenarios in the Unity engine, and recorded 360-degree panorama videos.

To see the full documentation of this project, please go to the "Virtual Reality" page or click the button below.

2. Augmented Reality

ARmobile, August 2017 - present

3. Robotics

Distributed Robot Systems, August 2017 - December 2017

This project explores the artificial intelligence of distributed systems of multiple mobile robots that interact collectively, cooperate on common tasks, and coordinate their motion through the world to reach individual goals.

4. Image and Video Processing

3D LED cube, February 2017 - April 2017

This three-dimensional LED cube consists of 8x8x8 (512) LEDs controlled by a single-chip microcontroller, and it allows you to see 3D images and videos without wearing any extra devices.
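As a rough illustration of how a single controller can drive 512 LEDs, the Python sketch below models the usual multiplexed refresh loop: only one 8x8 layer is lit at any instant, and cycling through the layers fast enough makes the whole frame appear solid thanks to persistence of vision. This is a conceptual model rather than the actual firmware; drive_layer is a hypothetical stand-in for the hardware output routine, and the timing values are assumptions.

```python
import time

def empty_frame():
    # 8x8x8 frame buffer: frame[z][y][x] == 1 means that LED should appear lit.
    return [[[0] * 8 for _ in range(8)] for _ in range(8)]

def drive_layer(z, layer):
    """Hypothetical stand-in for the hardware routine that latches one
    8x8 layer onto the column drivers and enables layer z."""
    pass  # on real hardware this would shift 64 bits out to the LED drivers

def refresh(frame, cycles=1000, layer_time=0.001):
    """Multiplexed refresh: only one of the 8 layers is on at any instant,
    but cycling through them quickly makes the whole image appear solid."""
    for _ in range(cycles):
        for z in range(8):
            drive_layer(z, frame[z])
            time.sleep(layer_time)  # ~1 ms per layer -> ~125 Hz full refresh

# Example: light a single diagonal line through the cube.
frame = empty_frame()
for i in range(8):
    frame[i][i][i] = 1
refresh(frame, cycles=1)
```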

(1) Cooperation

The video on the right shows a simulation of the Cooperative Multi-Robot Observation of Multiple Moving Targets (CMOMMT) task. Five yellow dots with green circles (their sensing range) are the robots, which try to cover as many targets (denoted by purple dots) as possible in an environment with a radius of 100 meters.
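A minimal sketch of the coverage computation behind CMOMMT is shown below; it is illustrative only, and the sensing radius, target count, and random placement are assumptions rather than the actual simulation parameters. Each robot covers any target within its sensing radius, and the objective is to maximize the fraction of targets covered.

```python
import math, random

SENSE_RADIUS = 10.0    # green circle around each robot (assumed value)
ARENA_RADIUS = 100.0   # environment radius from the description

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def covered_targets(robots, targets, r=SENSE_RADIUS):
    """Return the indices of targets observed by at least one robot."""
    return {i for i, t in enumerate(targets)
            if any(dist(t, robot) <= r for robot in robots)}

def coverage(robots, targets):
    """Fraction of targets under observation -- the quantity CMOMMT maximizes."""
    return len(covered_targets(robots, targets)) / len(targets)

# Toy setup: 5 robots and 12 targets scattered in the circular arena.
random.seed(0)
def random_point():
    while True:
        p = (random.uniform(-ARENA_RADIUS, ARENA_RADIUS),
             random.uniform(-ARENA_RADIUS, ARENA_RADIUS))
        if dist(p, (0, 0)) <= ARENA_RADIUS:
            return p

robots = [random_point() for _ in range(5)]
targets = [random_point() for _ in range(12)]
print(f"coverage = {coverage(robots, targets):.2f}")
```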

(2) Collectivity

This collective task lets all robots form a flock, move together along a course, and finally reach the goal. The collectivity is determined by four component behaviors: collision avoidance, aggregation, following, and homing. The video on the right shows a simulation of 25 robots, all with homing behavior. The randomly generated robots behave much like migrating birds due to these component behaviors.
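One common way to combine such component behaviors is a weighted vector sum per robot, in the spirit of Reynolds-style flocking. The sketch below follows that pattern; the weights, radii, and the approximation of "following" by the aggregation direction are assumptions for illustration, not the actual course implementation.

```python
import math

def sub(a, b): return (a[0] - b[0], a[1] - b[1])
def add(a, b): return (a[0] + b[0], a[1] + b[1])
def scale(v, s): return (v[0] * s, v[1] * s)
def norm(v):
    m = math.hypot(*v)
    return (v[0] / m, v[1] / m) if m > 1e-9 else (0.0, 0.0)

def flocking_step(pos, others, goal,
                  w_avoid=1.5, w_aggr=0.5, w_follow=0.8, w_home=1.0,
                  avoid_radius=2.0):
    """One robot's motion direction as a weighted sum of the four behaviors."""
    # (1) collision avoidance: push away from robots that are too close
    avoid = (0.0, 0.0)
    for o in others:
        if math.hypot(*sub(pos, o)) < avoid_radius:
            avoid = add(avoid, norm(sub(pos, o)))
    # (2) aggregation: pull toward the centroid of the flock
    cx = sum(o[0] for o in others) / len(others)
    cy = sum(o[1] for o in others) / len(others)
    aggregate = norm(sub((cx, cy), pos))
    # (3) following: approximated here by the aggregation direction for brevity
    follow = aggregate
    # (4) homing: head toward the shared goal
    home = norm(sub(goal, pos))
    total = (0.0, 0.0)
    for w, v in [(w_avoid, avoid), (w_aggr, aggregate),
                 (w_follow, follow), (w_home, home)]:
        total = add(total, scale(v, w))
    return norm(total)

# Example: direction for a robot at the origin with three flockmates.
print(flocking_step((0, 0), [(1, 1), (3, 0), (0, 3)], goal=(10, 10)))
```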

(3) Coordination

This coordination task implements a traffic-light approach that prevents robots from colliding with each other at the intersection of two roads. As shown in the video on the right, robots moving in one direction (e.g., the east-west direction) pass through the intersection if a "green signal" is given to them, or wait outside the intersection if a "red signal" is given. With this approach, robots moving in different directions will not collide with each other.
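A toy version of that traffic-light rule might look like the sketch below; the cycle length and the way robots are queued are assumptions for illustration, not the actual implementation.

```python
def signal_for(direction, t, cycle=20):
    """Alternate the green signal between the two road directions every
    `cycle` time steps; only one direction may enter the intersection."""
    phase = (t // cycle) % 2
    green = "east-west" if phase == 0 else "north-south"
    return "green" if direction == green else "red"

def step(waiting, t):
    """Let queued robots whose direction currently has green proceed."""
    passed, still_waiting = [], []
    for robot, direction in waiting:
        if signal_for(direction, t) == "green":
            passed.append(robot)                      # enters and clears the intersection
        else:
            still_waiting.append((robot, direction))  # waits outside
    return passed, still_waiting

# Example: three robots approach the intersection; the light flips at t = 20.
waiting = [("r1", "east-west"), ("r2", "north-south"), ("r3", "east-west")]
for t in (0, 20):
    passed, waiting = step(waiting, t)
    print(t, "passed:", passed, "waiting:", [r for r, _ in waiting])
```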

Digital Graffiti, September 2017 - December 2017

Digital Graffiti is an iOS app that allows users to upload their tags and associated customized 3D models to a website, and then see those models (e.g., a monarch model) in a real-world scene through the phone camera once a specific tag (e.g., a monarch picture) is detected.

 

This app was created by the writer and two other classmates as a course project in a mobile app development class. It is now available in the university app store.

 

We use VisionKit and ARKit (new features of iOS 11) to recognize images with a machine learning model and to display 3D models in real-world space, respectively.

EyeSQUAD - A Novel 3D Selection Technique, August 2017 - present

EyeSQUAD is a novel 3D selection technique with progressive refinement, controlled by eye tracking. It stands for eye-controlled sphere-casting refined by quad-menu selection. It allows users to select a specific target in the virtual environment with their eyes instead of holding a pointing device such as a gamepad, mouse, or controller.

 

EyeSQUAD inherits the idea of quad-menu progressive refinement from a previous selection technique, SQUAD. Users first select a group of objects with a selection bubble and then refine the set on a quad-menu until only the target remains.
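The refinement loop itself is simple: split the current candidate set across the four quadrants of the menu, keep only the quadrant the user looks at, and repeat until one object is left. The sketch below is an illustrative model of that loop; the round-robin distribution policy and the simulated "user choice" are assumptions, not the actual EyeSQUAD implementation.

```python
def refine(candidates, choose_quadrant):
    """SQUAD-style progressive refinement: distribute candidates over four
    quadrants and keep the quadrant the user selects, until one remains."""
    while len(candidates) > 1:
        quadrants = [[], [], [], []]
        for i, obj in enumerate(candidates):
            quadrants[i % 4].append(obj)        # simple round-robin split
        picked = choose_quadrant(quadrants)     # e.g. driven by gaze + tongue click
        candidates = quadrants[picked]
    return candidates[0]

# Simulated user who always picks the quadrant containing the true target.
def make_user(target):
    return lambda quads: next(i for i, q in enumerate(quads) if target in q)

objects = [f"obj{i}" for i in range(64)]
print(refine(objects, make_user("obj42")))     # -> obj42 after 3 refinements
```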

Selection with the eyes is quite intuitive and simple: whenever a user wants to interact with an object, the user first needs to look at that target. Unlike previous techniques, EyeSQUAD does not require the user to hold a controller, point at the target, and press a button after gazing at it; a "tongue click" sound is all that is needed to perform the selection. However, selection with the eyes can also be difficult and annoying. Eye movements, known as saccades, are sometimes jittery and involuntary, and with most current eye-tracking devices, users need to calibrate before each use.

 

With these tradeoffs of eye tracking in mind, we need to determine through a user study whether the technique outperforms previous selection techniques.
