Phil Seaton

I’m an MIT-trained engineering leader in the SF Bay Area. By day, I build, manage, and scale high-velocity software development teams. By night, I keep my technical skills honed by programming a range of side projects. Some of my projects and experiments follow.

You’ll see some themes: Visual Flow Programming, 3D Geometry / CAD, Web Browsers doing things servers normally do, and connections to my computational design master’s work. Most are demo-able in some way. If something catches your eye, please reach out.

A current CV can be found here.

Orchestra Data Science

Orchestra Data Science is a visual flow programming interface in pure JavaScript that runs in Python Jupyter Notebooks. Orchestra enables a non-technical (or at least less technical) audience to work with big data, and eventually machine learning algorithms, without knowing Python syntax or paradigms. Complex programming and data-ingestion tasks such as deploying an API micro-service to munge live streaming data become possible without writing a single line of code. For expert users, it’s easy to “eject” and employ custom Python simultaneously.

Two aspects of Orchestra are especially interesting technically.

First, the workspace view. Proofs of concept using simple SVG or canvas rendering showed that the curved wires connecting components became untenably slow even for very small Orchestra projects, perhaps with only a few dozen components in the workspace. So I switched to WebGL rendering, which enables me to zoom, animate, and selectively redraw wires performantly. However, WebGL makes other display requirements harder. Simple CSS styling is helpful, but for a project where users need to read and type input text, as well as trigger numerous browser-like interactions such as hover, click, and drag, writing everything in WebGL implied a step function in complexity, given the functionality that comes for free with normal DOM elements and CSS.

Orchestra uses both simultaneously. A WebGL layer and a CSS layer with normal DOM elements are superimposed and controlled together by ThreeJS. CSS transforms allow the HTML to zoom smoothly, while WebGL allows very large workspaces to render without performance penalties.
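The trick in a dual-layer workspace like this is keeping the two views in lockstep. Here is a minimal sketch of that synchronization, assuming a given zoom factor and pan offset; the function names and conventions (`cssTransformFor`, `cameraFrustumFor`, pan in screen pixels) are illustrative, not Orchestra's actual API:

```javascript
// For a shared zoom/pan state, produce (1) the CSS transform applied to
// the superimposed DOM layer and (2) the frustum for an orthographic
// WebGL camera (e.g. THREE.OrthographicCamera) covering the same view.
function cssTransformFor(zoom, panX, panY) {
  // translate before scale, so the pan is expressed in screen pixels
  return `translate(${panX}px, ${panY}px) scale(${zoom})`;
}

function cameraFrustumFor(zoom, panX, panY, viewportW, viewportH) {
  const halfW = viewportW / (2 * zoom); // visible workspace width / 2
  const halfH = viewportH / (2 * zoom);
  const cx = -panX / zoom; // camera center matching the CSS pan
  const cy = -panY / zoom;
  return { left: cx - halfW, right: cx + halfW, top: cy + halfH, bottom: cy - halfH };
}
```

Because both outputs derive from the same zoom/pan numbers, the DOM text fields and the WebGL wires stay visually glued together as the user zooms.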

Second, Orchestra tackles the challenging problem of tracking changes that occur anywhere inside the user-generated “graph,” so it can understand (even without understanding Python) when components will require recalculation and updates. It does this using the idea of a “pulse” that propagates through the graph from any input that changes. As the pulse propagates, it counts the number of times it reaches each component so that later, during the recalculation phase, components can ignore change events that would trigger duplicate calculations.
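The pulse idea can be sketched in a few lines, assuming the graph is stored as an acyclic adjacency map from component id to downstream component ids (the data structure and function names here are illustrative, not Orchestra's internals):

```javascript
// Phase 1: a pulse from a changed input counts how many times it reaches
// each downstream component (one count per path, so a "diamond" yields 2).
function propagatePulse(graph, changedId) {
  const counts = {};
  const stack = [changedId];
  while (stack.length) {
    const id = stack.pop();
    for (const downstream of graph[id] || []) {
      counts[downstream] = (counts[downstream] || 0) + 1;
      stack.push(downstream); // each distinct path contributes one count
    }
  }
  return counts;
}

// Phase 2: each change event decrements the count; a component
// recalculates only when the LAST expected pulse arrives, so duplicate
// upstream changes trigger exactly one recalculation.
function shouldRecalculate(counts, id) {
  counts[id] -= 1;
  return counts[id] === 0;
}
```

For a diamond-shaped graph (`a` feeds both `b` and `c`, which both feed `d`), `d` receives a count of 2, so its first change event is ignored and only the second triggers a single recalculation.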

Documentation for the installation and use of Orchestra exists at the Github links below.

Dates: 2017 – Ongoing

Explorations: Superimposed DOM and WebGL zooming user interfaces; Graph mapping by “pulses” to prevent costly extra calculations

Technologies: JavaScript (Backbone, ThreeJS, RequireJS, Grunt), IPython, Jupyter Notebooks

Github: Repo and Tutorials

Live Demo: Coming Soon (please enquire). Feel free to install it and follow the tutorials meanwhile!

STL Thumbnailer

The rare open-sourced work project! I was its sole contributor, and it was one of my last projects before leaving Instructables. The STL Thumbnailer is designed to create fast previews of user-supplied STL files. The previews are simple wireframes, without lighting or textures (limitations imposed by the Node-Canvas dependency), but they come back fast. Very fast. Autodesk’s Forge API has a service that provides thumbnails; those do have lighting and textures, but they look terrible and take a very long time (usually 5–6 minutes). By contrast, this open-source NPM package returns nice-looking, high-resolution wireframes in fractions of a second.

It works by creating a ThreeJS scene, importing the STL, then rendering it using Node-Canvas, and returning the image to the user (in this case, a node developer). It’s possible, using this package, to create a simple “STL Thumbnailer” service in a matter of minutes. Instructables uses this for all user-uploaded STLs to create friendly previews before loading the full 3D view of the model.
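With ThreeJS and Node-Canvas stripped away, the core of a wireframe preview is just projecting triangle vertices onto a plane and fitting them to the output image. A minimal sketch of that fit-to-canvas step (the function name and margin convention are illustrative, not the package's actual API; in the real package ThreeJS parses the STL and Node-Canvas draws the lines):

```javascript
// Orthographically project model points (already reduced to x/y) and
// scale + center them to fit a width x height image with a margin.
function fitToCanvas(points, width, height, margin = 10) {
  const xs = points.map(p => p.x);
  const ys = points.map(p => p.y);
  const minX = Math.min(...xs), maxX = Math.max(...xs);
  const minY = Math.min(...ys), maxY = Math.max(...ys);
  // uniform scale so the model fits both axes (|| 1 guards flat models)
  const scale = Math.min(
    (width - 2 * margin) / (maxX - minX || 1),
    (height - 2 * margin) / (maxY - minY || 1)
  );
  // center in the image; flip Y because canvas Y grows downward
  return points.map(p => ({
    x: margin + (p.x - minX) * scale + (width - 2 * margin - (maxX - minX) * scale) / 2,
    y: height - (margin + (p.y - minY) * scale + (height - 2 * margin - (maxY - minY) * scale) / 2)
  }));
}
```

Drawing the model's triangle edges through this mapping with plain canvas line strokes is what makes the previews so fast: no lighting, no rasterized textures, just 2D lines.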

Dates: 2016

Explorations: Server-side creation of WebGL scenes, sans GPU

Technologies: JavaScript (NodeJS, ThreeJS)

NPM / Github: Package on NPM and Source on Github

Live Demo: Coming Soon (please enquire).

Quarto

Quarto as seen on the iPad. Share menu showing.

Quarto was born of a surprising problem: my wife and I, both architects by training, were unable to find a simple iPad app that served our needs for an architecture portfolio. We wanted an app with absolutely no cruft that could safely be handed to an interviewer without worrying that they would push the wrong button (i.e., zero admin UI). Similarly, all of the interactions needed to be completely predictable. It should work like the Photos app, but give better controls for organizing which photos appear and, ideally, be built on a laptop and synced to the iPad (rather than trying to build on the iPad itself).

The main screen of the web app is very similar to the main screen of the iPad app, but editable.

It didn’t exist, so I built it. Quarto is live and available for download in the App Store.

The app is straightforward. The web application is programmed in JavaScript (Parse-Server backend with MongoDB; Mailgun, Stripe, and S3 integrations; Jimp built with Browserify), and the iPad app is programmed in Objective-C (I had learned Objective-C for Instructables, so I went ahead and programmed natively).

Perhaps the most interesting technical detail is that the numerous thumbnail and alternate image sizes are generated client-side in pure JavaScript (not using the canvas!) in a web worker.

Similarly to my work compiling a CAD kernel to JavaScript, here I started with an open-source library called Jimp (to which I was briefly a main contributor), which provides a collection of pure-JavaScript image manipulation functions meant to run in NodeJS: resize, crop, blit, blur, brightness/contrast, rotate, posterize, etc.

My work involved creating a build process for Jimp which would allow it to run in the browser, in a web worker, so that I could use it in Quarto (and save having to pay for a real server!). This actually works surprisingly well. Though it pushes the bounds of what browsers can do these days (and indeed, I see occasional browser crashes while uploading large numbers of large image files), client-side image transforms turn out to be a practical reality in cases where upload speed is critical (time is saved because there’s no “processing” step) or server-side resources are limited. Performance is considerably better than canvas, and not as crash-prone.
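A sketch of the arrangement described above, assuming a Browserify-built Jimp bundle loaded inside the worker; the variant widths, bundle filename, and message protocol are hypothetical, not Quarto's actual configuration:

```javascript
// Which output variants to generate for a source image: never upscale,
// and preserve aspect ratio for each target width.
const VARIANT_WIDTHS = [200, 600, 1200]; // thumbnail, preview, full (illustrative)

function targetSizes(srcW, srcH) {
  return VARIANT_WIDTHS
    .filter(w => w < srcW)
    .map(w => ({ width: w, height: Math.round(srcH * (w / srcW)) }));
}

// Inside the web worker (browser only), each variant is produced with
// Jimp and posted back to the main thread as an encoded buffer:
if (typeof self !== "undefined" && typeof importScripts === "function") {
  importScripts("jimp.browserified.js"); // hypothetical bundle name
  self.onmessage = async (e) => {
    const image = await Jimp.read(e.data.buffer);
    for (const { width, height } of targetSizes(image.bitmap.width, image.bitmap.height)) {
      const out = await image.clone().resize(width, height).getBufferAsync(Jimp.MIME_JPEG);
      self.postMessage({ width, buffer: out });
    }
  };
}
```

Keeping the resize loop in a worker is what makes the UI stay responsive while a batch of large portfolio images is processed; the main thread only ever sees small, already-encoded buffers ready for upload.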

Dates: 2015

Explorations: Build libraries intended for NodeJS to run in web workers. Self-guided product management. Direct-to-S3 uploads using signed PUT URLs to S3, so that large data never touches the small server.

Technologies: JavaScript (Parse-Server backend with MongoDB), API Integrations with Mailgun, Stripe, and S3. Jimp built with Browserify. The iPad app is programmed in Objective-C.

Github: Closed source, but if you’re really curious let me know.

Live: Sign up for free at https://www.quarto.io/

Orchestra3D

In addition to providing the base code for Orchestra Data Science above, Orchestra3D is a solid project in its own right, bringing not only visual flow programming to the web browser, but a custom-compiled native JavaScript CAD kernel as well. Orchestra3D integrates these parts into a fully functional, stand-alone web application, allowing users to experiment with parametric design directly in the browser.

SISL is a Fortran/C/C++ CAD kernel developed by the Department of Applied Mathematics at SINTEF in Norway. I selected it (versus others such as OpenSCAD, Salome, Open CASCADE, etc.) because of its specific strengths for NURBS geometry and its absence of cruft. Once cross-compiled to JavaScript via LLVM bytecode in Emscripten, I get fast, robust geometry functions in just a couple of MBs.
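Calling a cross-compiled kernel from JavaScript involves one recurring chore: JS numbers must be copied into the Emscripten heap before the C code can see them. A sketch of that marshalling pattern, assuming an Emscripten `Module` object; the symbol name `evaluate_curve` and its signature are illustrative placeholders, not SISL's actual API:

```javascript
// Marshal a flat [x, y, z, ...] control-point array into the Emscripten
// heap, call a compiled C function on it via cwrap, and free the memory.
function callCurveEvaluator(Module, controlPoints, t) {
  const bytes = controlPoints.length * Float64Array.BYTES_PER_ELEMENT;
  const ptr = Module._malloc(bytes);          // allocate in the C heap
  Module.HEAPF64.set(controlPoints, ptr / 8); // copy JS -> heap (doubles)
  // cwrap wraps the compiled C symbol in a plain JS function
  const evaluate = Module.cwrap("evaluate_curve", "number",
    ["number", "number", "number"]);
  const result = evaluate(ptr, controlPoints.length / 3, t);
  Module._free(ptr);                          // avoid leaking heap memory
  return result;
}
```

`_malloc`, `_free`, `HEAPF64`, and `cwrap` are standard Emscripten runtime facilities; wrapping each SISL entry point this way is what lets the rest of the app treat the kernel like ordinary JavaScript.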

Finally, Orchestra3D employs an especially rigorous approach to the hardest problem in flow-based programming: management of lists of data. Orchestra takes its cues from Grasshopper, a tool for flow-based programming in CAD (Rhinoceros 3D, specifically), where list management is fully and thoughtfully resolved. Orchestra3D mimics Grasshopper's behavior carefully: I built close to 100 unit tests based on expectations manually lifted from Grasshopper, then worked to ensure they all passed in Orchestra.
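One concrete example of the behavior those tests pin down is Grasshopper's "longest list" matching: when two inputs have different lengths, the shorter list's last item repeats until the longer list is exhausted. A minimal sketch (the function name is illustrative, not Orchestra3D's internals):

```javascript
// Pair items from two lists, Grasshopper "longest list" style:
// the shorter list's final item is reused for the remaining pairs.
function matchLongest(a, b) {
  const n = Math.max(a.length, b.length);
  const pairs = [];
  for (let i = 0; i < n; i++) {
    pairs.push([a[Math.min(i, a.length - 1)], b[Math.min(i, b.length - 1)]]);
  }
  return pairs;
}
```

A Jasmine spec in the suite would then assert, e.g., that `matchLongest([1, 2, 3], [10])` yields `[[1, 10], [2, 10], [3, 10]]`, matching the pairing Grasshopper produces for the same inputs.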

Orchestra3D is my “forever project,” one which started years ago with my work on the design and fabrication of the “Soft Rocker” exhibit for MIT’s 150th birthday. From there was born the desire to take an extremely expensive and complicated 5-axis milling process based on professional CAD, and make it possible to execute on an inexpensive 3-axis machine, with no CAD expertise. I’m surprisingly close. Ask me about it!

The server side of this application does comparatively little, serving mostly as a CRUD API for project data and providing basic authentication and notification services required of most web apps.

Dates: 2012 – 2016. Forked and continued as Orchestra Data Science.

Explorations: Compile C to JavaScript using Emscripten and run it in the browser; Reverse engineering using methodical unit tests.

Technologies: JavaScript (Backbone, ThreeJS, RequireJS, Jasmine), Emscripten (involves some C programming), Parse Server Backend, MongoDB, SendGrid/Mailgun Integration

Github: Repo includes the entire application, including cross-compiled SISL, Orchestra itself, and the Web App in which it lives. (TODO: Separate the parts)

Live Demo: Some examples are available for playing around without logging in. A video of Orchestra3D in use is on the homepage. Because this project remains unfinished, I’m not currently allowing new account creation. Create new components by typing in the “new component” field and seeing what comes up. Absent tutorials, Orchestra3D is hard to use; try Orchestra Data Science if you really want to learn it!

e-nable

Through associations while working at Pier 9 / Instructables, I became involved with a non-profit called the E-nable Community Foundation, which provides low-cost 3D-printed prosthetics to people with limb differences. I was involved with the non-profit for only about a year, until my son was born. During that time I provided the base proof of concept for what is now their product, Limbforge, which enables browser-based configuration of the printable prosthetics.

Though I really only did a few nights’ work for Limbforge, the technical architecture is what they still use today. Prior work had attempted to parameterize every model in code using OpenSCAD on the backend, but this created serious problems for the project:

  1. Prosthetics CAD designers were unfamiliar with programming, so they could not express their designs in code. Programmers were then required for even the smallest change to the models, which were complex.
  2. Generating each model on the fly was slow.
  3. Visual feedback needed to be rendered before results could even be evaluated by the user.

The new approach generated hundreds of “pre-baked” sets of parameters by scripting Fusion 360, the CAD program in which the models were originally parameterized. My work then essentially became a 3D model viewer: selecting parameters “chooses” a file that already exists, and shows it immediately.
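The lookup itself is almost trivially simple, which is the point. A sketch, assuming filenames encode their parameters; the naming scheme and parameter names below are hypothetical, not Limbforge's actual convention:

```javascript
// Map a parameter selection to the pre-baked STL that encodes it.
// Hypothetical scheme: "<model>_<hand>_<scale>.stl"
function prebakedFilename(params) {
  return `${params.model}_${params.hand}_${params.scale}.stl`;
}

// Instead of generating geometry on the fly, the viewer picks the file
// that already exists and displays it immediately (null if not baked).
function selectModel(availableFiles, params) {
  const name = prebakedFilename(params);
  return availableFiles.includes(name) ? name : null;
}
```

Because every selectable combination already exists as a file, "generation" time drops to a file fetch, and CAD designers keep working in Fusion 360 rather than in code.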

Dates: 2015

Explorations: Changing technical approach from a prior iteration of the project. Zipping combinations of files and triggering the download entirely on the frontend.

Technologies: JavaScript (ThreeJS), Fusion 360 Scripting

Github: https://github.com/strandedcity/e-nableRnD/tree/master/Limbforge

Demo: Proof of Concept (this is buggy and incomplete, but working if you click-drag to rotate the model after selection)

3D Printed Block Print Generator

This “ditty” is a fun one: a client-side web app that takes a photo you give it and does the following:

  1. Downsample it
  2. Greyscale it
  3. Determine how much black (as a percentage) is in a given downsampled pixel
  4. Use that to generate 3D truncated cones where the top surface of each fills that percent of its allocated square
  5. Export as an STL for 3D printing

I.e., you give it a photo; it gives you an STL that you can 3D print, then use the print as a block print (like, with ink).
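The geometric heart of steps 3 and 4 is converting a pixel's darkness into a cone's top radius. A sketch of that conversion, assuming greyscale values of 0–255 per downsampled pixel and a square cell of side `cell` per pixel (names and conventions illustrative):

```javascript
// The darker the pixel, the larger the truncated cone's top circle, so
// the inked area of the cell matches the pixel's darkness.
function topRadius(grey, cell) {
  const darkness = 1 - grey / 255;     // 0 = white, 1 = black
  const area = darkness * cell * cell; // target inked area for this cell
  // radius of a circle with that area, clamped so it stays inside the cell
  return Math.min(Math.sqrt(area / Math.PI), cell / 2);
}
```

Sweeping this over the downsampled grid and extruding a truncated cone per cell yields a printable relief whose inked surface reproduces the photo's tones.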

Entirely frontend code using my usual tools. Writeup of the whole process on Instructables.

Dates: 2014

Explorations: Create large file blobs in-browser. Generate geometry programmatically with ThreeJS.

Technologies: JavaScript (ThreeJS), File / Blob APIs

Github: Closed source, but if you’re really curious let me know.

Live Demo: http://strandedcity.github.io/BlockPrintGenerator/

Instructables Galaxy

After its acquisition, Instructables moved to Autodesk’s Pier 9, where thousands of monthly visitors need an introduction to what Instructables is. I was commissioned (outside of work) to build a large (52″) touch-screen display. I used the web browser as the display platform both for its familiarity and its portability; a web-based version was easy to make public.

An extensive writeup of how the Galaxy was developed is available in the Instructable. This was an early journey into WebGL for me (my first, I believe), so it included a lot of learning along the way.

The big challenges, once I chose WebGL as the path forward, related to figuring out how to make the Galaxy look natural and move fluidly without animating each star separately. I ended up creating several “clusters” of stars that move together, but along slightly different axes and in different directions.
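The cluster idea can be sketched as follows: every star stores a base position and a cluster id, and each frame only one drift offset per cluster is computed and added to every member. The axis directions, speeds, and oscillating drift below are illustrative, not the Galaxy's actual values:

```javascript
// Per-cluster motion parameters: a drift axis and a speed.
const CLUSTERS = [
  { axis: { x: 1.0, y: 0.2 }, speed: 0.5 },
  { axis: { x: -0.3, y: 1.0 }, speed: 0.8 },
];

// A star's position at time t: base position plus its cluster's drift.
// With thousands of stars but only a handful of clusters, the per-frame
// work is a few sin() calls instead of one animation per star.
function starPosition(star, t) {
  const c = CLUSTERS[star.cluster];
  const drift = Math.sin(t * c.speed); // slow back-and-forth drift
  return { x: star.x + c.axis.x * drift, y: star.y + c.axis.y * drift };
}
```

In a WebGL particle system the same trick moves into the vertex shader: the cluster offset becomes a uniform, so the entire cluster shifts with a single updated value per frame.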

Other interesting explorations: procedural code that draws something that feels like a constellation connecting an individual author’s stars (hints: no overlapping lines, mostly closed shapes with occasional open edges, no more than five connections to any one star); other procedural code (written in Processing) to make “nebulous”-looking background images; and collecting data from Instructables APIs to represent approximately 20,000 of the most popular projects (written in Python, but not part of the repo).

Dates: 2012 – 2013

Explorations: Particle Systems in WebGL, Vertex and Fragment Shaders, Suggestive Constellations and Clouds

Technologies: JavaScript (ThreeJS / WebGL, KineticJS), Processing, a tiny bit of Python

Github: Repo includes the entire application, excluding bits that touch private Instructables APIs

Live Demo: https://galaxy.phil-seaton.com/ Search for an author to see their projects (try “randofo” or “pseaton”) in a constellation. From a project click its category to see the cluster of related projects light up bright!