My real-time app architecture journey

What started out as a simple app for documenting device configuration and tracking devices on a show site in a live production environment turned into a set of very complex requirements. Here are some of them:

  • Multi-project file system (like a Google Docs system)
  • Multi-user project collaboration
  • Multi-level permissions per app, per area of the app, and per project
  • Real-time updates for all users
  • Very performant data-grid with row and column drag-and-drop
  • Custom data-grid cells with typeahead, selectors, radio groups etc.
  • A chat system for user-to-user DMs and project collaborators (user groups)
  • Project export in many formats (CSV, Excel, PDF)
  • A PDF label printer per row
  • A status history tracker per row
  • All the niceties we have come to expect from Google Docs:
    • Project open and edit history
    • Per document, per field edit history
    • Undo/Redo
    • User presence per project with idle tracking

The goal of this product was to create an ecosystem that helps technicians work more efficiently from show to show, within one framework. The standard tool today is Google Docs/Sheets, so persuading techs to move away from such a monolith would require a really well-designed product.

Picking the frontend tech

Since my experience with the newer JS frameworks was limited, and the internet is full of resources teaching React, it was a good choice for this project and simple to get started with.

We quickly found that the speed with which React has changed over the years meant many tutorials were outdated or taught conflicting concepts (class-based components vs. functions and hooks). I wouldn’t change this tech selection, since there are so many resources available, but it has been a little difficult at times to be certain that I made the correct choice.

We decided on a front-end UI library that was well established, and had a good set of features that we would require. We are on the third iteration of this project, with the first two using the BlueprintJS component library, and the third using the Chakra-UI library.

The Chakra-UI library has enabled us to move VERY quickly. We developed a simple style guide that allows us to iterate on UIs without getting stuck on CSS and layout.

Picking the Backend (and Frontend) Tech

My original plan was to use Django as the backend single source of truth, with a Postgres database and Django Rest Framework over it. I naively built a custom architecture for the real-time aspect of the system, with Google Firestore maintaining a diff per user per database table. This system worked well for the first few events we used it on, but the edge cases made it very difficult to maintain, and maintaining it at scale would have been tough.
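
For context, here is roughly what that hand-rolled diff layer looked like conceptually. This is illustrative only: the model name, collection layout and helper are hypothetical, and it assumes Django signals plus the google-cloud-firestore client.

    # Illustrative sketch only -- model, collection and helper names are hypothetical.
    from django.db.models.signals import post_save
    from django.dispatch import receiver
    from google.cloud import firestore

    from devices.models import Device               # hypothetical Django model
    from projects.utils import collaborators_for    # hypothetical helper

    fs = firestore.Client()

    @receiver(post_save, sender=Device)
    def push_diff(sender, instance, **kwargs):
        """After a row is saved in Postgres, write a per-user diff to Firestore."""
        diff = {
            "table": "device",
            "pk": instance.pk,
            "fields": {"name": instance.name, "status": instance.status},
            "ts": firestore.SERVER_TIMESTAMP,
        }
        # One diff document per collaborator, per table -- each client listens
        # to its own subcollection and applies changes as they arrive.
        for user_id in collaborators_for(instance.project_id):
            fs.collection("diffs").document(str(user_id)) \
              .collection("device").add(diff)

Every edge case (deletes, conflicting edits, clients reconnecting after being offline) needed its own handling, which is exactly the maintenance burden described above.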

After the second show using the app, we realized that the architecture needed to change. Around this time, I heard Filipe from Meteor on the Syntax podcast and gave Meteor a try. After the first two days using it, I realized that a rebuild of our app in Meteor would be more than worth the effort in both the short and long term.

Meteor provides all the real-time aspects that we required, plus a good amount of the “batteries included” features that Django provides (like a user system). Meteor is stable, as it has been around since 2016 and it works very well with React.

Using Meteor, we didn’t need separate systems for the front end and back end, as Meteor serves both very nicely. Buying into the Meteor ecosystem has allowed us to build features extremely fast rather than getting stuck in the weeds with architecture. Meteor has already figured out the tough edge cases of real-time data, and its DDP/Method system lets us pick which sets of data are real-time enabled and which are fetched on demand.

With this system in place, we will be able to take the product to the next level and provide technicians with a web based product that is robust, stable and feature rich.

Learning React made me a better developer

After the entertainment industry shutdown in 2020, I had some time to learn more about current web development technologies. Previously, I would reach for the same set of tools to accomplish a web dev job, no matter what the requirements were. I used Django and jQuery for everything, and could crank out apps for Masque in a day or two. I found comfort in these technologies and the results were pretty good. I used VueJS in a freelance project and was amazed by it, but never spent the time to refactor my existing apps, or implement new apps using it at Masque.

During quarantine, I built three apps in multiple iterations with advanced features (real-time collaboration, hierarchical permission system, file sharing, chat, etc.) using React.

Since I was coming from jQuery, I had a little JavaScript knowledge, so I was able to spin up a project fairly quickly using the React guides. Building the front end of these apps entirely in JavaScript taught me so many more useful techniques that I now take back into my Python/Django work.

The first takeaway was the developer experience. Other than a few extensions in VSCode, there isn’t much of a Django-specific development experience. The Create React App node script creates an environment with excellent linting, debugging and hot-reloading. I’ve now installed more Python tools in VSCode to help find unused imports and syntax errors before reloading the page.

The second takeaway was immutability. Python is incredibly dynamic by design, and since I learned to code with Python, I may have developed some bad habits and anti-patterns. For example, I would consistently assign different types of objects to the same variable throughout a function or method and check its type at the end to determine the outcome (e.g., is it False or is it an object?). This makes debugging difficult and the code hard to read. In JavaScript, when a “const” is declared, I know that binding isn’t going to be reassigned throughout the function. The heavy use of the spread operator in JS to copy objects when making state changes made its way into my everyday Python coding. There are certainly still reasons to use dict.copy() and copy.deepcopy(), but I find myself using the **dict and *list syntax for simple copying, where previously I would use the dict methods or, worse, loop through dict.items()!
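
As a small illustration of the habit, this is the kind of copying I mean (plain Python, nothing project-specific):

    import copy

    settings = {"gain": 0, "mute": False}
    channels = ["voc 1", "voc 2"]

    # Shallow copy-and-override in one expression, the JS-spread-style habit:
    updated = {**settings, "mute": True}
    expanded = [*channels, "drums"]

    # copy.deepcopy() is still the right tool when nested objects must not be shared:
    nested = {"eq": {"hpf": 80}}
    safe = copy.deepcopy(nested)
    safe["eq"]["hpf"] = 120   # the original dict is untouched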

The third takeaway was array methods with inline anonymous functions. These have always been available in Python, but there was no incentive for me to use them in Django, since a standard for loop (or several nested for loops) usually sufficed. In React I use Array.map constantly, and in doing so I’ve started reaching for map() with a lambda in Python. There isn’t a significant performance gain, but the succinct syntax makes my code easier to read and debug.
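
A trivial example of the shift, not taken from the app itself:

    rows = [{"ch": 1, "label": "Voc 1"}, {"ch": 2, "label": "Voc 2"}]

    # What I used to write: an explicit loop building a new list.
    labels = []
    for row in rows:
        labels.append(row["label"].upper())

    # What I reach for now, mirroring Array.map in JavaScript:
    labels = list(map(lambda row: row["label"].upper(), rows))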

I’m now a React disciple. I find apps built with React to be much easier to troubleshoot, architect and maintain than Django templates with jQuery or Vue. I still use Django or Flask as a Python API back end when it’s needed, but instead of reaching for the same set of tools every time, I now pick the best tool for the job.

MIDI to OSC app

At the start of the entertainment industry shutdown in March 2020, I was contacted by a lighting designer who needed an application to convert their MIDI controller’s input to OSC commands on a Windows PC to control LightForm and LightJams.

Due to my experience building the OSC Controller hardware device, the designer thought I would be able to create an app for him. Using my existing toolkit, I found a stack I was comfortable with: MIDI in Python and OSC in Python, combined with web technologies (HTML/CSS/JavaScript) for the interface.

Using the Python Eel package, I created a standalone application that launches a dedicated instance of Google Chrome and connects to the host computer’s I/O through Python. This selection of tools allowed me to create a portable executable that runs on most PCs.
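
The shape of the program is roughly the following. This is a simplified sketch, not the shipped code; it assumes the mido and python-osc packages for the MIDI and OSC sides (the post doesn’t name the libraries), and the MIDI-to-OSC mapping is hypothetical.

    import threading
    import eel                                # serves the HTML/CSS/JS front end
    import mido                               # assumed MIDI library
    from pythonosc.udp_client import SimpleUDPClient  # assumed OSC library

    eel.init("web")                           # folder with index.html etc.
    osc = SimpleUDPClient("127.0.0.1", 8000)  # hypothetical OSC target

    def midi_loop():
        """Forward incoming MIDI control changes as OSC messages."""
        with mido.open_input() as port:       # first available MIDI input
            for msg in port:
                if msg.type == "control_change":
                    # Hypothetical mapping: one OSC address per controller number.
                    osc.send_message(f"/midi/cc/{msg.control}", msg.value)

    threading.Thread(target=midi_loop, daemon=True).start()
    eel.start("index.html", mode="chrome")    # launches a dedicated Chrome window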

The program functions as designed, and solves the niche problem of not being able to run OSCulator on Windows!

See it on GitHub at https://github.com/matt-dale/MIDI_TO_OSC

Masque Sound RFID System

The Masque RFID concept started with George Hahn (the Masque Sound Director of Production) asking me to research various handheld long-range RFID readers to use throughout the Masque Sound warehouse. That research introduced me to the world of UHF RFID inventory systems. I designed a system that uses fixed readers rather than mobile ones; the readers interrogate tags, record their locations to a PostgreSQL database, and provide real-time alerts to workers when the system finds issues.

That system description sounds simple enough, but it took a lot of asking “why” to get the results that George was looking for. He wanted a way to catch items that had not been correctly scanned to a given show as that show was being loaded onto a truck. His original plan involved a handheld “wand” that would interrogate tags and report their inventory status. By moving the idea to a system of multiple fixed readers, I was able to add another feature that proved even more valuable in hindsight: asset location logging.

The search for both readers and tags was a long one. Tags needed to be durable, small, versatile and cheap, and in the end no single type of tag could satisfy all of those requirements. Since Masque Sound owns a lot of metal rack-mount assets, which are generally at least 1 3/4″ tall, we found a “universal” tag made by Metalcraft that reads from over 15 feet away and is small enough to fit on the side of a 1-rack-unit asset. It also works on other small metal assets, like direct boxes, LCD monitors and mixing consoles.

For our large road cases, we found a very cheap paper tag in the “wet inlay” format, which also reads from over 15 feet away and has a very large antenna surface area. These tags, from Avery Dennison RFID, are installed underneath the carpet lining of each road case that Masque owns. They are very durable thanks to their hidden location and large antenna area. They will not read when installed on a metal asset, but they are much cheaper than the “universal” metal-mount tags.

For the smaller niche assets where the larger tags don’t fit, we use a smaller “universal” tag from Metalcraft and a smaller tag from Avery Dennison RFID. These read from a shorter distance, just over 10 feet, but they work well in our environment.

See further posts on how this system integrates with all others in the Masque Sound Warehouse.

Masque Sound Operational Projects

Each of these projects assists in day-to-day operations to improve the efficiency of the sound shop with clean user interfaces and efficient design.

Masque Sound uses the Unibiz R2 rental software, so much of my programming is based around extending this software.

Third Stream Web App

This application provides a more intuitive and more advanced interface for interacting with R2, the Oracle-backed Unibiz inventory software. Using web services, stored procedures and direct database queries across multiple databases, it provides asset tracking, order filling and inventory control, all with a cohesive database that integrates with all my other applications.

Each tile in the screenshot is a link to a full-fledged application, some of which are detailed below.

The Windows 8 design style was adopted to get users accustomed to the “Windows Store” style of application before the company-wide upgrade to Windows 8. This style guide was built with the PureCSS framework.

While this first iteration of the application is still in use and remains the flagship of my development, the newer, more streamlined ThirdstreamV2 has a much more modern look and uses Bootstrap to handle its styling.

Mobile Asset Management Apps

This application was designed to give users a mobile way to check asset status throughout the warehouse. We use a Bluetooth barcode scanner that mounts to the back of an iPod Touch and is incorporated into the mobile site. Because the app is web based, we can use it on many devices, including Android devices. It uses AudioJS to play back sounds corresponding to the statuses of the scanned assets. Additionally, “bad scans” are logged to a database table and displayed on a dynamic sign in the manager’s office.

Another application was developed to record asset location information in the warehouse. Using the pre-existing shelf location barcodes, I created a database that records each asset’s current shelf. The app simultaneously calls a custom Unibiz stored procedure that writes the shelf location to a field in R2. The R2 field always holds the most current location, while the full location history is saved in the Thirdstream database.
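
In spirit, the write path looks something like this. It is only a sketch with hypothetical names: the stored procedure, its arguments and the connection details aren’t published, and it assumes the cx_Oracle driver for the Oracle-backed R2 database.

    import cx_Oracle  # assumed driver for the Oracle-backed R2 database

    def record_shelf_scan(asset_barcode: str, shelf_barcode: str) -> None:
        """Write the scanned shelf to R2 via a custom stored procedure."""
        with cx_Oracle.connect("thirdstream", "********", "r2-db/R2") as conn:
            cur = conn.cursor()
            # Hypothetical procedure name/signature -- the real Unibiz procedure
            # updates the asset's current-location field in R2.
            cur.callproc("THIRDSTREAM.SET_ASSET_SHELF", [asset_barcode, shelf_barcode])
            conn.commit()
        # The Thirdstream database keeps the full location history alongside
        # R2's "current shelf" field (that insert is omitted here).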

Dynamic Signs

The “R2Today Screens” are a set of 5 different signs displayed throughout the warehouse on large LCD monitors. They provide a consolidated list of outgoing orders for the various departments and are updated in real time with direct queries to the Oracle inventory database.

The design inspiration came from airport/train station arrival/departure screens. It uses color and scrolling statuses to keep warehouse workers up-to-date on what work needs to be done.

Speaker Testing System

The purpose of this system is to automate as much of the speaker testing process as possible to increase warehouse efficiency.

The user interface is programmed using IronPython and connects to the Audio Precision APx API.
The app accepts a barcode scan of the speaker, then:

– opens the test procedure,
– checks the status of the asset,
– provides setup prompts for the procedure,
– runs the procedure,
– saves test result data,
– saves asset location data,
– saves test history records,
– and updates the asset’s information in the R2 inventory system,

all with one barcode scan.
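
Conceptually, the scan handler is just an orchestration of those steps. The sketch below uses hypothetical helper functions in place of the real Audio Precision APx API and R2 calls, which aren’t shown in this post:

    def handle_scan(barcode: str) -> None:
        """Run the whole speaker test from a single barcode scan (sketch)."""
        asset = lookup_asset(barcode)                # hypothetical R2 lookup
        check_asset_status(asset)                    # confirm it's due for testing
        procedure = open_apx_procedure(asset.model)  # hypothetical APx API wrapper
        prompt_operator(procedure.setup_steps)       # setup prompts for the tech
        results = procedure.run()                    # run the APx sequence
        save_test_results(asset, results)            # test result data
        save_asset_location(asset, "SPEAKER_BENCH")  # location record
        save_test_history(asset, results)            # history record
        update_r2_status(asset, results.passed)      # update R2 inventory status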

RMA System

This system allows clients to enter information about items they need to send back to the warehouse. It incorporates an email system to alert the repairs manager and the necessary project managers, and there is an admin area where the repairs manager can enter details about a particular RMA. This gives clients a better idea of the status of their returns and provides shipping information. The system is built on Bootstrap v2 and Django 1.10.
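
The alerting side is standard Django. A minimal sketch, assuming an RMA model and recipient lookup along these lines (the names and addresses are hypothetical):

    from django.core.mail import send_mail

    def notify_new_rma(rma):
        """Email the repairs manager and project managers when an RMA is created."""
        recipients = [rma.repairs_manager_email] + list(rma.project_manager_emails)
        send_mail(
            subject=f"New RMA #{rma.pk} from {rma.client_name}",
            message=f"{rma.client_name} is returning: {rma.item_summary}",
            from_email="rma@example.com",        # hypothetical sender address
            recipient_list=recipients,
        )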

Scheduling System

This system is a permissions-based calendar with resource and task assignment. It houses the company’s vacation calendar and projects.

The custom interface allows management to see long term workload and day-to-day planning. The front end was designed by Sean Colandrea.

The daily view shows the work to be completed for each day and provides a quick report generator for the manager to run meetings.

“SMURF” scanner interface

Andy Leviss (Duck’s Echo Sound) created a hardware device for scanning the RF spectrum into a CSV file. I created this user interface, which takes file and venue information and automatically emails the results to Masque Sound for analysis and frequency coordination.

This uses Tkinter for the UI and pySerial for connection to the hardware device over USB.
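
The core of it is small: a Tkinter window collecting the venue details and a pySerial read of the scan data. A trimmed-down sketch (the serial port, baud rate and end-of-scan behavior here are assumptions, not the device’s real protocol):

    import tkinter as tk
    import serial  # pySerial

    def capture_scan():
        """Read scan data from the device over USB serial and save it as a CSV."""
        venue = venue_entry.get()
        # Port name and baud rate are placeholders for the real device settings.
        with serial.Serial("COM3", 115200, timeout=2) as port:
            lines = []
            while True:
                line = port.readline().decode("ascii", errors="ignore").strip()
                if not line:          # stop when the device goes quiet
                    break
                lines.append(line)
        with open(f"{venue}_scan.csv", "w") as f:
            f.write("\n".join(lines))
        # ...then the file and venue info are emailed to Masque for coordination.

    root = tk.Tk()
    root.title("SMURF Scanner")
    tk.Label(root, text="Venue").pack()
    venue_entry = tk.Entry(root)
    venue_entry.pack()
    tk.Button(root, text="Capture Scan", command=capture_scan).pack()
    root.mainloop()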

Trucking System

Masque was using a paper-and-bulletin-board method for trucking dispatch requests. There were too many human errors with this system, so I built a simple dispatching system that requires users to enter the correct information. The system still prints dispatches on paper for the drivers to carry, but it also creates a traceable, searchable history for auditing.

Reporting System

Tracking the shows on Broadway that carry Masque equipment became a fun way of gauging how healthy the business is. I developed a simple database-backed reporting app that presents these metrics.

Purchasing System

Masque previously used paper, pencil and spreadsheets to keep track of purchase requests and statuses. This was handled by one overworked employee who needed help. I built this application directly with him to make his job simpler and to make it easier to track the millions of dollars of equipment that he purchases for the company. The system sends notification emails and texts when requested items are ordered, delivered or received, and it connects to the FedEx, UPS and DHL APIs for delivery status updates.

Physical Inventory System

This system was built to accommodate more frequent inventory counts.  The previous method was to use an outside vendor to capture the data, then spend days importing the data back into R2.  By creating a custom solution that works on smart phones, we are able to conduct cycle counts, and format the data correctly prior to importing into R2.  The flexibility that this system offers gives more validity to the availability feature in R2.

RFID Inventory System

I designed and deployed an RFID inventory system at Masque Sound that uses Alien ALR-9680 readers and an assortment of RFID tags to track high value equipment through the warehouse.

The system was created to aid in asset tracking and inventory control and supplements the barcode system currently in use. The readers around the warehouse write to a PostgreSQL database and, through the Twilio API, send text message alerts when the system finds asset issues.
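
The alerting path is straightforward once a read lands in PostgreSQL. A sketch, assuming psycopg2 and the official twilio client; the table, DSN and phone numbers are placeholders:

    import psycopg2
    from twilio.rest import Client

    pg = psycopg2.connect("dbname=rfid user=rfid")      # placeholder DSN
    twilio = Client("ACCOUNT_SID", "AUTH_TOKEN")        # placeholder credentials

    def log_read_and_alert(tag_id: str, reader: str, is_problem: bool, detail: str):
        """Record a tag read; text the shop if the read indicates an issue."""
        with pg, pg.cursor() as cur:
            cur.execute(
                "INSERT INTO tag_reads (tag_id, reader, seen_at) VALUES (%s, %s, now())",
                (tag_id, reader),
            )
        if is_problem:
            twilio.messages.create(
                body=f"RFID alert at {reader}: {detail}",
                from_="+15550001111",     # placeholder Twilio number
                to="+15550002222",        # placeholder on-call number
            )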

An alarm system is set up in two locations to provide immediate feedback to shop employees in the area. Using the readers’ GPO outputs, a strobe light is turned on and a sound is played until the offending condition is addressed.

Loading Dock RFID System

This system is integrated with the Third Stream web application and the R2 inventory system. The user interface is a touch screen mounted on the wall near the loading docks. When a truck driver is loading a truck, they press the “Use Dock” button and the system goes into inventory control mode. As items pass through the loading dock, the CAEN Ion RFID reader verifies that each item has been properly documented in the R2 inventory system. It also updates a location history record in Third Stream. If the system finds a “bad” asset, a flashing light is set off, the item is displayed on the user interface and a warning message is texted to the shop foreman, the inventory control manager and me. This allows one of us to fix the bad item without stopping the truck loading.

When the loading docks are not in use, the system goes into “watch” mode: every item seen on any of its 16 antennas gets an asset location history record in Third Stream for asset tracking.

The user interface is programmed in IronPython as a .NET WinForms application and connects to the CAEN RFID reader over TCP.

OSC Controller Button Box

OSC Controller Panel

By combining my knowledge and experience of Python programming, website construction and basic electronics, I was able to build a hardware device that sends OSC commands. It uses a Raspberry Pi that runs a simple web server and listens for button presses; when a button is pressed, it sends the corresponding OSC command.

The concept is simple: a 4-button box that sends OSC commands when a button is pressed, with the commands configurable from a web page that the device hosts. The first iteration of this box was used on a Broadway show to trigger a global system mute. When the button was pressed, it sent an OSC message to mute the eight Meyer Sound Galileos and change the Meyer Sound D’Mitri scene to the MUTED scene. After completing this device, I realized that it could be developed into something much more versatile.
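
The first iteration boils down to a GPIO callback per button. A minimal sketch, assuming RPi.GPIO and python-osc (the post doesn’t name the libraries), with placeholder pins, target address and OSC messages; on the real box these come from the web configuration page:

    import RPi.GPIO as GPIO
    from pythonosc.udp_client import SimpleUDPClient

    # Placeholder target and messages -- configurable on the real device.
    osc = SimpleUDPClient("192.168.1.50", 9000)
    BUTTONS = {
        17: ("/mute/all", 1),      # e.g. the global system mute
        27: ("/mute/all", 0),
        22: ("/scene/recall", 1),
        23: ("/scene/recall", 2),
    }

    GPIO.setmode(GPIO.BCM)
    for pin in BUTTONS:
        GPIO.setup(pin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
        GPIO.add_event_detect(
            pin, GPIO.FALLING,
            callback=lambda ch: osc.send_message(*BUTTONS[ch]),
            bouncetime=200,
        )

    input("Listening for button presses, press Enter to quit\n")
    GPIO.cleanup()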

The second revision includes programmable LCD buttons, multiple scenes, OSC subscriptions and an opto-isolated general purpose input. Using the Q5 switch from http://www.ledswitches.co.uk/lcd-oled-products/lcd-switches/q5-lcd-switch.html was the first challenge. These amazing switches are similar to those used on DiGiCo sound consoles. Their specs are quite good, and they are fairly easy to use. Although Python is probably not the best choice due to speed, I chose it to program the switches and, through much trial and error, came up with this library. The experimental library functions well enough, but it has been significantly improved in the production code.

The Q5 does not have a chip select pin, so in order to address each switch individually I multiplexed the clock signal from the Raspberry Pi.  The following picture shows this in action.

Q5 switches on a breadboard
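
The workaround, in very rough form: each switch gets its own clock line from the Pi, and only the line for the switch being addressed is toggled while the shared data line is bit-banged. This is only a sketch of the idea, assuming RPi.GPIO and placeholder pin numbers; the real Q5 protocol lives in the library linked above.

    import RPi.GPIO as GPIO

    DATA_PIN = 10                  # shared data line to every Q5
    CLOCK_PINS = [11, 12, 13, 14]  # one clock line per switch (the "multiplexing")

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(DATA_PIN, GPIO.OUT)
    for pin in CLOCK_PINS:
        GPIO.setup(pin, GPIO.OUT, initial=GPIO.LOW)

    def send_byte(switch: int, byte: int) -> None:
        """Bit-bang one byte to a single switch by toggling only its clock line."""
        clock = CLOCK_PINS[switch]
        for bit in range(7, -1, -1):
            GPIO.output(DATA_PIN, (byte >> bit) & 1)
            GPIO.output(clock, GPIO.HIGH)   # only the addressed switch sees a clock edge
            GPIO.output(clock, GPIO.LOW)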

Once the switches were working on a breadboard, I attempted to design a PCB for the box.  I learned just enough KiCAD to design a board, and used OshPark to produce it.

My first PCB design, produced by OshPark

This PCB worked reasonably well, but like all first attempts, it could use some improvement. After populating the board, I designed a front panel using Front Panel Express, which allowed me to make a very professional-looking panel for the device.

Finally, Felix Kutlik helped make an enclosure to hold the PCB and the Raspberry Pi. In its current state the OSC controller is fully functional, but I am still adding features. The web configuration console allows changing button colors, text, OSC messages, confirmation messages, etc.

The Bodyguard US National Tour Production Engineer

Working as the Production Sound Engineer for The Bodyguard US National Tour was a terrific experience. My responsibilities went a bit beyond a typical PSE role, since the design team had already built this system many times before and none of them were part of the shop prep. It was up to me to determine how to package and cable the system by interpreting the designer’s spec and translating it to Masque Sound’s inventory. The main PA is 12x Meyer Sound Leopards installed in 2x stackable 8′ towers, each with a 5′ lower section and a Meyer Sound UPJ-1P for infill.

Meyer Sound Leopard Towers

Gary Stocker designed these towers with an internal gimbal that allows panning and tilting of the pinned array. Array angles can be changed easily, since the new Leopard bottom-up captured rigging system is very easy to use.

The UPJ-1P infill has limited tilt but a wide range of panning, and provides additional front fill coverage for venues with wide proscenium openings. The additional room at the bottom of the tower provided a nice space for a patch panel and power panel, making connecting the towers simple and efficient.

The design team wanted to use the Meyer Sound 900LFC subwoofers in their cardioid configuration, and since this did not fit inside the tower footprint, they are stacked three high next to the tower.

The Center Cluster consists of 10x Meyer Sound Mina speakers, hung on a truss with 2x Meyer Sound 900LFC subwoofers on either side.  The FOH Electrics truss is hung underneath.

10x Meyer Sound Mina, 2x 900LFC

The rest of the system is a straightforward touring rig. There are around 40 channels of wireless mics with Sennheiser 1046 receivers and Sennheiser 5212 transmitters, plus an 8-channel Sennheiser 9000 series digital system used for handhelds on all the show’s pop songs, which sound particularly good.

The console is a DiGiCo SD7T with a redundant Waves SoundGrid Server system. Band monitoring is handled by the Roland M48 mixer, which interfaces with the DiGiCo SDRack via a MADI output. Interfacing the system this way required the Associate Designer, Tom Marshall, to be very creative with the layout of the MADI stream, since the M48 only picks off the first 40 input channels of the stream. To provide a drum submix to the M48 system, he placed an AES I/O card in slot 4 of the rack and physically looped its outputs back to its inputs, which allowed him to send groups to the M48 efficiently.

Overall, the tech process went very smoothly, since the show has been produced in many other locations. We used a DiGiCo EX-007 for programming help, with Allison Ebling at the SD7 and Tom Marshall at the 007. Both Tom Marshall and Richard Brooker were a pleasure to work with, and I couldn’t have pulled it off without the support of Masque Sound, including Gary Stocker, George Hahn and Scott Kalata.

DiGiCo EX-007 in a hazy tech

designdb.online

designDb screenshot

Masque Sound occasionally provides equipment for a show or event that requires us to build the sound design and act as the “production engineer,” making cable decisions and the like. We also occasionally design, build, prep and install systems for “one-off” events or shows. In these situations I have used FileMaker Pro database solutions, Microsoft Excel, or the gaff-tape-and-Sharpie method of labeling. These methods were never quite good enough for my liking, so I set out to build a free, simple solution for this task.

Using the Django web framework and current HTML5 technologies, I built a system that works as a fast and elegant solution for sound system documentation and labeling. While it doesn’t yet include equipment management or other fancy features, it prints directly to Avery 5167 and 5160 address labels right from the web browser (Google Chrome), without having to save a PDF and print it from another program or an in-browser reader.

Next time you need to build a show, don’t use gaff tape and Sharpie, use designdb.online!

The site is currently in beta with a limited number of user accounts available, and I am working on issues daily. Sign up at designdb.online.

sound system design


the concept

Due to the nature of Lustinger’s music, there is inherently a large number of electronic instruments, and the sync between them is very important. The entire system was designed for ease of setup and consistency. Even though the band plays small venues, we carry our own Behringer X32 mixing console so that we can maintain consistency between venues and reduce our impact on the venue.

The system is contained in one 12-space rack, and all cabling feeds into this rack via three cable bundles. The cabling was designed to drop at key locations on the stage for easy patching, so the area around the rack does not get crowded.

The entire system can be setup in 10 minutes and cleared even faster. 

the playback system

I decided on Qlab for playback since it handles audio tracks and MIDI tracks simultaneously.  Each song is represented by a group cue and each group cue has multiple audio and MIDI tracks in it.  There is a click track and a backing track audio file as well as a lighting MIDI file and a Mainstage trigger MIDI file in each group cue.  Each group cue also has a global STOP cue to stop any other cues that were playing for fast switching between songs.  I added an auto-follow to the  last cue in each group cue so that the set will run automatically without the drummer having to trigger each song.  This playback system drives the entire show and all its effects.

mainstage

Since Joseph writes all his songs in Apple’s Logic, it made sense to use Mainstage for the synths, electronic drums and special vocal effects. It was very simple to copy the channels from the demo’s Logic session into the Mainstage concert. Qlab’s MIDI output triggers the Mainstage patch changes automatically during each song, which keeps the system autonomous.

Although Mainstage’s interface is not the most intuitive, it allows for reliable effects and synth patches. We are even able to control parameters during the performance to add a more live feel to the performances.

monitoring

In a typical small venue show, the use of stage monitors makes the performer’s job and the sound person’s job very difficult.  I decided to use an Aviom system to alleviate this problem.  Each band member has an Aviom A16-R that they use to mix their own wired in-ear mix.  This system works fairly well, although I have learned that the separate mixes have isolated the band members from each other in such a way that they don’t always hear their performances in the same way thus making for a less cohesive overall sound.  This is something we will work on in the future by setting stage volumes without the in-ears.  The band members’ experience onstage contributes to the audience’s energy. When the band is not having a good time due to tech issues, it translates to the audience.

vocal effects

Most of the special vocal effects used in the songs originate from Mainstage.  These were copied from the original Logic sessions, then modified for live use.  They include distortions, delays, chorus and reverbs. Since they are triggered with the Qlab MIDI file, the patch changes happen automatically as the song is played.  The output is patched  to the console on its own channel and allows for easy blending with the dry vocal channel.  Additional vocal effects come from the Kaoss Pad that Niko plays.  This signal chain is explained further in the signal flow section.  The effects provide a distinct sound that is unique in the local music scene.

signal flow

To accommodate this complex signal flow, I provided an 8-channel mic splitter. One side of the splitter goes into the Behringer S16 and the other goes into a Behringer ADA8000 8-channel preamp. The preamp’s outputs feed the Aviom A-16i input unit for the two guitars and the bass.

The vocal mics have a different flow. The ADA8000 output for Joseph’s vocal channel is “y’d” into both the Mainstage input and the Kaoss Pad input. The vocal mics feed into the S16, and each has its own mix output feeding the Aviom system, which allows us to EQ the vocal mics into their in-ears. The drum mics feed directly into the S16, and a mix output feeds the “drums” channel of the Aviom system. A final mix output from the S16 provides a vocal reverb return to the Aviom system.

The S16 MIDI input allows scene changes to happen automatically, which keeps the performances consistent. This MIDI feed comes from the Qlab file.

lighting system design

The lighting system was designed to be driven with MIDI so that the band wouldn’t need an extra person to operate a lighting console. It also made it easy to program the lights so that each song is repeatable and editable. To follow the aesthetic of the band, all the lighting is very rhythmic with the music, and we use it more as an effect than to light the stage.