My real-time app architecture journey

What started as a simple app for documenting device configuration and tracking devices on a show site in a live production environment grew into a set of very complex requirements. Here are some of them:

  • Multi-project file system (like a Google Docs system)
  • Multi-user project collaboration
  • Multi-level permissions per app, per area of the app, and per project
  • Real-time updates for all users
  • Very performant data-grid with row and column drag-and-drop
  • Custom data-grid cells with typeahead, selectors, radio groups etc.
  • A chat system for user-to-user DMs and project collaborators (user groups)
  • Project export in many formats (CSV, Excel, PDF)
  • A PDF label printer per row
  • A status history tracker per row
  • All the niceties we have come to expect from Google Docs:
    • Project open and edit history
    • Per document, per field edit history
    • Undo/Redo
    • User presence per project with idle tracking

The goal of this product was to create an ecosystem that helps technicians work more efficiently from show to show, within one framework. The standard tool currently used is Google Docs/Sheets, so persuading techs to move away from such a monolith would require a really well-designed product.

Picking the frontend tech

Since my experience with the newer JS frameworks was limited, React was a good choice for this project. Considering the internet is full of resources teaching React, it was simple to get started.

We quickly found that the speed with which React has changed over the years left us with outdated tutorials and conflicting concepts (class-based components vs. functions and hooks). I wouldn’t change this tech selection, since there are so many resources available, but at times it has been difficult to be certain we made the correct choice.

We decided on a front-end UI library that was well established, and had a good set of features that we would require. We are on the third iteration of this project, with the first two using the BlueprintJS component library, and the third using the Chakra-UI library.

The Chakra-UI library has enabled us to move VERY quickly. We developed a simple style guide that allows us to iterate on UIs without getting stuck on CSS and layout.
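As a sketch of what that looks like in practice, a Chakra style guide can live in a single theme file passed to the provider. The tokens and component defaults below are purely illustrative, not our actual guide:

```
import { extendTheme } from '@chakra-ui/react';

// Hypothetical tokens and defaults, for illustration only.
const theme = extendTheme({
  colors: {
    brand: { 500: '#2b6cb0', 700: '#1a4e8a' },
  },
  components: {
    // Every Button in the app picks these up without extra props.
    Button: {
      defaultProps: { size: 'sm', colorScheme: 'blue' },
    },
  },
});

export default theme;
// Passed once to <ChakraProvider theme={theme}>, so pages compose
// pre-styled components instead of hand-writing CSS and layout.
```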

Picking the Backend (and Frontend) Tech

My original plan was to use Django as the backend single source of truth, with a Postgres database and Django REST Framework over it. I naively built a custom architecture for the real-time aspect of the system, with Google Firestore maintaining a diff per user per database table. This system worked well for the first few events we used it on, but the edge cases made it very difficult to maintain, and maintaining it at scale would have been tough.

After the second show using the app, we realized that the architecture needed to change. Around this time, I heard Filipe from Meteor on the Syntax podcast and gave Meteor a try. After the first two days using it, I realized that a rebuild of our app in Meteor would be more than worth the effort, in both the short and long term.

Meteor provides all the real-time features we required, plus a good amount of the “batteries included” functionality that Django provides (like a user system). Meteor is stable, having been around since 2012, and it works very well with React.

Using Meteor, we didn’t need separate systems for the frontend and backend, as Meteor serves both very nicely. Buying into the Meteor ecosystem has allowed us to build features extremely fast rather than getting stuck in the weeds with architecture. Meteor has already figured out the tough edge cases of real-time data, and its DDP/Method system lets us pick which sets of data will be real-time enabled and which will be fetched on demand.
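To make that split concrete, here is a minimal sketch of the two paths in Meteor server code. The collection, publication, and method names are hypothetical, not our actual schema:

```
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';

// Hypothetical collection for illustration.
export const Devices = new Mongo.Collection('devices');

if (Meteor.isServer) {
  // Real-time path: subscribed clients receive live updates over DDP
  // whenever matching documents change.
  Meteor.publish('devices.byProject', function (projectId) {
    return Devices.find({ projectId });
  });

  // On-demand path: a Method is a one-shot RPC with no live updates,
  // suited to exports and other data we don't need pushed continuously.
  Meteor.methods({
    'devices.exportRows'(projectId) {
      return Devices.find({ projectId }).fetch();
    },
  });
}

// On the client, the choice is made per call site:
// Meteor.subscribe('devices.byProject', projectId);        // reactive
// Meteor.call('devices.exportRows', projectId, callback);  // fetch once
```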

With this system in place, we will be able to take the product to the next level and provide technicians with a web based product that is robust, stable and feature rich.

sound system design


the concept

Due to the nature of Lustinger’s music, there are inherently a large number of electronic instruments, and the sync between them is very important. The entire system was designed for ease of setup and consistency. Even though the band is playing small venues, we carry our own Behringer X32 mixing console so that we can maintain consistency between venues and reduce our impact on each venue.

The system is contained in one 12-space rack, and all cabling is fed into this rack via three bundles. The cabling was designed to drop in key locations on the stage for easy patching, so that the area around the rack is not crowded.

The entire system can be set up in 10 minutes and cleared even faster.

the playback system

I decided on QLab for playback since it handles audio and MIDI tracks simultaneously. Each song is represented by a group cue, and each group cue contains multiple audio and MIDI tracks: a click track and a backing track audio file, as well as a lighting MIDI file and a Mainstage trigger MIDI file. Each group cue also has a global STOP cue to stop any other cues that were playing, for fast switching between songs. I added an auto-follow to the last cue in each group cue so that the set will run automatically without the drummer having to trigger each song. This playback system drives the entire show and all its effects.
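As a rough sketch, one song’s group cue has the shape below. QLab itself is configured in its GUI, so this data structure only illustrates the layout described above; the file and route names are made up:

```javascript
// Hypothetical representation of one song's group cue.
const songCue = {
  type: 'group',
  name: 'Song 1',
  cues: [
    // Global STOP first: switching songs halts anything still playing.
    { type: 'stop',  target: 'all-other-cues' },
    { type: 'audio', file: 'song1-click.wav',     route: 'drummer-click' },
    { type: 'audio', file: 'song1-backing.wav',   route: 'front-of-house' },
    { type: 'midi',  file: 'song1-lighting.mid',  destination: 'lighting-rig' },
    { type: 'midi',  file: 'song1-mainstage.mid', destination: 'mainstage' },
  ],
  // Auto-follow on the last cue chains straight into the next song's
  // group cue, so the set runs without the drummer triggering each song.
  autoFollowNext: true,
};

console.log(songCue.cues.map((c) => c.type).join(','));
// → stop,audio,audio,midi,midi
```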

mainstage

Since Joseph writes all his songs in Apple’s Logic, it made sense to use Mainstage for the synth, electronic drum and special vocal effects. It was very simple to copy the channels from the demo’s Logic session into the Mainstage concert. QLab’s MIDI output triggers the Mainstage patch changes automatically during each song, which allows the system to be autonomous.

Although Mainstage’s interface is not the most intuitive, it allows for reliable effects and synth patches. We are even able to control parameters during the performance to add a more live feel.

monitoring

In a typical small-venue show, the use of stage monitors makes both the performer’s job and the sound person’s job very difficult. I decided to use an Aviom system to alleviate this problem. Each band member has an Aviom A16-R that they use to build their own wired in-ear mix. This system works fairly well, although I have learned that the separate mixes isolate the band members from each other in such a way that they don’t always hear their performances the same way, making for a less cohesive overall sound. This is something we will work on in the future by setting stage volumes without the in-ears. The band members’ experience onstage contributes to the audience’s energy; when the band is not having a good time due to tech issues, it translates to the audience.

vocal effects

Most of the special vocal effects used in the songs originate from Mainstage. These were copied from the original Logic sessions, then modified for live use. They include distortions, delays, chorus and reverbs. Since they are triggered by the QLab MIDI file, the patch changes happen automatically as the song plays. The output is patched to its own channel on the console, which allows for easy blending with the dry vocal channel. Additional vocal effects come from the Kaoss Pad that Niko plays. This signal chain is explained further in the signal flow section. The effects provide a distinct sound that is unique in the local music scene.

signal flow

To accommodate this complex signal flow, I provided an 8-channel mic splitter. One side of the splitter goes into the Behringer S16, and the other goes into a Behringer ADA8000 8-channel preamp. The preamp’s output feeds the Aviom A-16i input unit for the two guitars and the bass.

The vocal mics have a different flow. The ADA8000 output for Joseph’s vocal channel is “Y”-split into the Mainstage input and the Kaoss Pad input. The vocal mics feed into the S16, and each has its own mix output feeding the Aviom system, which allows us to EQ the vocal mics in their in-ears. The drum mics feed directly into the S16, and a mix output feeds the “drums” channel of the Aviom system. A final output mix from the S16 provides a vocal reverb return to the Aviom system.

The S16’s MIDI input allows scene changes to happen automatically, which keeps the performances consistent. This MIDI feed comes from the QLab file.

lighting system design

The lighting system was designed to be driven by MIDI so that the band didn’t need an extra person to operate a lighting console. It also made the lights easy to program, so that each song is repeatable and editable. To follow the band’s aesthetic, all the lighting is tightly rhythmic with the music; we used it more as an effect than to light the stage.

lustinger visual experience

concept

The concept of the visual system grew out of my experience building a custom LED lighting rig for a now-defunct band called The Minimalist. As LED technology got better and cheaper, I was able to build a system that was cheap, easy to set up and capable of a large visual impact.

My way to make a large visual impact was to make the system as bright as possible. I also wanted the system to scale up for use as a supplemental rig in the largest venues. With fabrication help from Felix Kutlik of Masque Sound, I built four vertical red, green, blue and white LED sticks, 66″ tall. These mount to mic stand bases with Atlas QR-2 quick releases for fast and efficient setup and strike. Two of the sticks have custom-fabricated Par Box mounts on top; each box holds two 150 Watt halogen Par bulbs to contrast with the LED light.

The sticks are arranged around the rear of the band to create a “set,” and all the light is focused directly at the audience. To light the band members from the front, Felix built taller, wider vertical sticks that act as side washes. In addition, each band member has an “uplight”: a 90 Watt halogen placed at the base of their mic stand to cast shadows on their body and face and to contrast with the stark LED light.

producing the Lustinger tech experience

Lustinger is a Brooklyn-based industrial-rock band that I’ve been working with since 2012. Due to the heavy synthesizers, backing tracks and vocal effects in this music, I wanted to develop a system that would translate the depths of the music to an audience in any size venue with the utmost clarity and efficiency.

The band isn’t signed to a label and doesn’t have financial backing, which made the technical production very difficult. The caliber of production I wanted to provide is something most bands do not have (especially at this level of touring), and it makes the band’s live performances unique and of the utmost quality.

The technical system is always being improved, and it was designed from the beginning for ease of transport and load-in, and to reduce the burden on any given venue’s technical staff.