I’ve extended the flocking code framework to include the concept of Enemy Squadrons. They have multiple states: they idle in a defense pattern, begin an approach, deliver an attack on a defense target, and even retreat and return to their idle defense origin. It’s really fun to see game elements start to come together. I also had to work out the friendly defense units, and did some rough animations of some cubes to get the game going.
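The actual implementation lives in Unity, but the squadron state flow can be sketched as a small state machine. This is a minimal, hypothetical Python sketch; the class and method names are my own, not from the project:

```python
from enum import Enum, auto

class SquadronState(Enum):
    IDLE = auto()      # holding a defense pattern at the origin
    APPROACH = auto()  # flying toward the current target
    ATTACK = auto()    # delivering the attack
    RETREAT = auto()   # breaking off after the attack
    RETURN = auto()    # heading back to the idle defense origin

class Squadron:
    def __init__(self, origin):
        self.origin = origin              # idle defense position
        self.state = SquadronState.IDLE
        self.target = None

    def engage(self, target):
        """Leave the idle defense pattern and begin an approach."""
        self.target = target
        self.state = SquadronState.APPROACH

    def update(self, in_attack_range, attack_done, reached_origin):
        # Advance the state machine one tick based on simple condition flags.
        if self.state == SquadronState.APPROACH and in_attack_range:
            self.state = SquadronState.ATTACK
        elif self.state == SquadronState.ATTACK and attack_done:
            self.state = SquadronState.RETREAT
        elif self.state == SquadronState.RETREAT:
            self.state = SquadronState.RETURN
        elif self.state == SquadronState.RETURN and reached_origin:
            self.target = None
            self.state = SquadronState.IDLE
```

In practice the flags would come from distance checks and timers inside the flocking update, but the transition order matches the behavior described above.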

I’ve been really focused on getting the art to progress, but now it’s time to define the game mechanics and decide how it will play. This phase is always interesting to me because it’s about finding the fun. As part of this, I’ve read a lot of other people’s postmortems and tried to develop more of a story element behind these characters. I think I know a little about the enemy and why they are attacking these units, but I know I need to continue to flesh this out.

I know a little about the game I want to make, but I need to take it out for a nice dinner, and get to know it a little better. I know I’m slow at these parts of the development process, but I’m not worried about the rest. Once the game is well defined, I’m confident I’ll be able to get the pieces to come together to tell the story. It’s getting the story together that I’m trying to work on now.

I’ve been playing around with a space environment, and I think this is pretty close to the look that I want. I ran some more flocking tests to check performance in the environment, but I thought I’d post a quick video showing off the environment pieces a little: a star map emitter, planets, and a galaxy skybox. Although I don’t look around the environment too much in these videos, this is in VR, and it is an accurate representation of space. I did distort some of the distances… (the Moon is technically too close) But hopefully that can be forgiven for the sake of art.

I’ve been playing around with flocking, removing GC allocations, and optimizing for large numbers of independent groupings. This is way more than I plan on controlling at any given time, but 3,000 boids run pretty smoothly even without the CPU batching planned for Unity 5.6.
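The core trick behind removing GC pressure is to allocate every buffer once and reuse it each frame, instead of creating new collections per boid per tick. The real project is C# in Unity, where the same idea means preallocated arrays and no per-frame `new`; here is a minimal, hypothetical Python sketch of the pattern, using only a naive separation force:

```python
# All state lives in flat buffers allocated once, up front.
N = 4  # boid count (kept tiny for the example)
pos_x = [0.0] * N; pos_y = [0.0] * N
vel_x = [0.0] * N; vel_y = [0.0] * N
force_x = [0.0] * N; force_y = [0.0] * N  # scratch buffer, reused every frame

def step(dt):
    # Zero the scratch buffer in place rather than reallocating it.
    for i in range(N):
        force_x[i] = 0.0
        force_y[i] = 0.0
    # Naive O(N^2) separation: push nearby boids apart.
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dx = pos_x[i] - pos_x[j]
            dy = pos_y[i] - pos_y[j]
            d2 = dx * dx + dy * dy
            if 0.0 < d2 < 1.0:        # within the separation radius
                force_x[i] += dx / d2
                force_y[i] += dy / d2
    # Integrate in place; no allocations anywhere in the frame.
    for i in range(N):
        vel_x[i] += force_x[i] * dt
        vel_y[i] += force_y[i] * dt
        pos_x[i] += vel_x[i] * dt
        pos_y[i] += vel_y[i] * dt
```

A production version would add cohesion/alignment and a spatial partition to cut the O(N²) neighbor search, but the allocation discipline is the part that keeps the garbage collector quiet.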

A more concrete example would be 47 squadrons of 7 ships, each with their own dynamic targeting / pathing. This video was captured while playing in Unity on the HTC Vive at 140 fps using an NVIDIA GTX 980. The scene also has an extra 200 draw calls, just to make sure I have plenty of headroom for performance once everything is added.

I’ve spent a lot of time getting a lot of trees into the latest Unity project, trying to portray the Lake Tahoe area and a proposed bike path along the northeast portion of the lake. For this project we worked out a number of different workflows. One of them was baking the physics of the biking and jogging pedestrians into animation paths. This allowed us to set up pedestrians, give them a destination, capture them running or biking along the path, and still achieve real-time performance by pre-calculating all of the physics involved in their movement. We also worked out how to make the trees and environment look more realistic than in previous projects. The Gaia and Tenkoku plugins for Unity were a huge help, as was the Aquas plugin for the water. We ended up going largely with SpeedTree assets, but had to work out a workflow for placing trees adjacent to the path very accurately, to reflect the actual trees that would remain within the bike path corridor. It was a tricky project with a lot of fun constraints, and I think the renders came out pretty nice.
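The baking idea, run the expensive agent simulation once offline, record keyframes, and play them back with cheap interpolation at runtime, can be sketched in a few lines. This is a hypothetical one-dimensional illustration (position along the path), not the project’s code:

```python
def bake_path(advance, duration, dt=1.0 / 30.0):
    """Run the expensive simulation step `advance(t)` offline and record
    (time, position) keyframes for cheap playback later."""
    keys = []
    t = 0.0
    while t <= duration:
        keys.append((t, advance(t)))
        t += dt
    return keys

def sample(keys, t):
    """Runtime playback: linearly interpolate position between keyframes."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, p0), (t1, p1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return p0 + (p1 - p0) * u
```

At runtime the per-frame cost is a lookup and a lerp, no matter how heavy the original physics was; that is what makes dozens of baked pedestrians affordable in a real-time scene.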

Here’s a video I made for a client showing a quick prototype I put together for a potential mobile game. Not a lot of polish, but just something quick to convey a few art options, and help sell the pitch to fund a project.

GDC this year was a great experience for me. The galvanized opinions around which VR platform will be the most successful, or the most widely adopted, were energetic, and food that will keep me motivated for a good while. My only regret is not having made more progress on my project, and not being more willing to share that progress. I think I’m not quite far enough along to share too much, and I’m optimistic that nobody sees my website.

My takeaway is that a large number of companies are focusing on developing content for these platforms, and they will all find success based on the visibility available to early movers. The only difference between any of them and any of the independent developers is location and recognition. Many of the startups are in SF or Seattle, while many more are located in garages or bedrooms across the world. It was very motivating to realize that many of the people working on the larger teams are successful because of the large pool of talent those teams have gathered. While that seems interesting, I’m currently not sure how relocation would work, and I’m really motivated to find people here in Vegas who have the talent and interest.

The value of networking is difficult to quantify, but the energy within the game industry is undeniable, and it was good to revisit such a condensed form of that energy at GDC.

Alright – I’m not an environment modeler, and I don’t claim to be an artist. But for programmer art, I think 4 hours of modeling and environment creation are starting to look pretty exciting.


And no – I don’t feel like VR doors need handles. They need an interface, but I’m planning on doing a screen you have to interact with before going through. Big plans for that door, and all the magic that will ensue behind it.

Oh – I also need to start working on the BitScience webpage: logo, media, web layout, etc.

Vision Summit 2016 was the first conference I’m aware of that focused on VR/AR. I thought the list of sponsors, board panelists, and keynote guests was amazing. I agree with a lot of the ideas suggested in the keynote that this industry will have an extremely vast reach, and that what people are doing now with virtual and augmented realities are just beginning to scratch the surface.

One of my primary realizations is that with VR and AR, the sense of presence and immersion come free; the hardware accomplishes that. The content creators and content providers only have one task: to ensure that they don’t go out of their way to remind the user that they aren’t in another realm. From there, I was inspired to work on providing experiences that go beyond just showcasing the hardware. Many of the experiences I’ve tried in VR and AR were basically “look at what this could be.” But I saw a few teams working on the future, and that is something I want to be a part of.

There are a lot of excited developers out there, and the opportunity to tell a story with this new hardware is going to be very fulfilling.

Another great Unity Asset Store package is VR Panorama 360 Pro. It’s really designed to work in the Editor: given an animation, it will export a video from the camera, then stop playing once it reaches the end of the animation. But I liked the tech, so I rewrote it to work at runtime, where it can be activated by a UI element, meaning a user can initiate a recording. The plugin itself simply records a bunch of frames, saves them into a folder, and then calls an FFMPEG library to composite them all.
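That hand-off to FFMPEG is essentially one command line run over a folder of numbered frames. Here is a hypothetical sketch of the compositing step; the function name and frame-naming scheme are my own, and the flags are standard FFmpeg options rather than anything specific to the plugin:

```python
import os
import subprocess

def composite_frames(frame_dir, out_path, fps=60, run=False):
    """Build (and optionally run) the FFmpeg command that stitches a folder
    of numbered PNG frames into an MP4 video."""
    cmd = [
        "ffmpeg", "-y",                 # overwrite the output if it exists
        "-framerate", str(fps),         # input frame rate
        "-i", os.path.join(frame_dir, "frame_%04d.png"),
        "-c:v", "libx264",              # H.264 encode
        "-pix_fmt", "yuv420p",          # widest player compatibility
        out_path,
    ]
    if run:
        subprocess.run(cmd, check=True)
    return cmd
```

Returning the argument list (instead of always executing) makes the command easy to inspect or log; in the runtime version, this call would fire after the recording UI element writes out its last frame.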

Here are two sample renderings I put together real quick to demonstrate the plugin and its capabilities.

Southbound I-15: https://www.youtube.com/watch?v=B9B7fnz7Bjg
Northbound I-15: https://www.youtube.com/watch?v=OkL9W1b70Cs

It is really fun to work on this civil infrastructure project in downtown Las Vegas as part of the CivilFX team. I spent a lot of time trying to go through the Autodesk workflow of taking aerials and topography, draping them together, and getting them into 3ds Max. It turns out most of their documentation pages talk about using their Infrastructure Suite software to link the data in 3ds Max, but doing that requires a monthly subscription. So I got a trial of the software, and got frustrated at how difficult it was to export the data to an FBX so I could take it anywhere I want.

So after messing around a bit with their software, I decided to just break the aerial images into squares, re-map the UV coordinates in 3ds Max, and drape the images myself. It took a bit of triangle manipulation to make sure all the seams lined up, but in the end I got a nice aerial surface in an FBX that I could import into anything. And by anything, of course, I mean Unity.
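The draping itself is just a planar projection: each terrain vertex’s world-space X/Z gets mapped into the 0–1 UV space of whichever square aerial tile covers it. A hypothetical sketch of that mapping (the names and grid layout are my own, not from the project):

```python
import math

def assign_tile(x, z, origin_x, origin_z, tile_size):
    """Which square tile (column, row) of the broken-up aerial covers
    the world-space point (x, z)?"""
    col = math.floor((x - origin_x) / tile_size)
    row = math.floor((z - origin_z) / tile_size)
    return col, row

def drape_uv(x, z, tile_min_x, tile_min_z, tile_size):
    """Map a terrain vertex's world-space (x, z) into the 0-1 UV space of
    its covering tile (planar, top-down projection)."""
    u = (x - tile_min_x) / tile_size
    v = (z - tile_min_z) / tile_size
    return u, v
```

The fiddly part in practice is the seams: triangles that straddle a tile boundary have to be split (or their vertices duplicated) so each triangle references exactly one tile’s texture, which is the triangle manipulation mentioned above.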