The eli4d Gazette – Issue 002

Hello Friend,

Welcome to Issue 002 of the eli4d Gazette. You can find the newsletter archive at and the blog at

Have a great day!

Eli (@eli4d)

The eli4d Gazette

Issue 002: March 30, 2016

Tech Pick

I’ve been following the Laravel framework for quite a while, and a recent episode of The Laravel Podcast features a great discussion about open source software and whether those who use it should pay for it.

There is an inherent (bad) assumption that “open source” means “free”. Nadia Eghbal has been exploring this very issue. I came across Nadia’s work through the (excellent) Changelog podcast where Adam and Jerod interviewed her about her investigation into open source funding.

Edu Pick

I’m a big fan of Alton Brown’s cooking shows because of the brilliant way he explains things. In the same vein, comedian John Oliver did a brilliant job of explaining the issues around the Apple/FBI encryption dispute in this 18-minute video: Oliver’s comedic explanation covers the nuances of this case and the broader issues of encryption and privacy. The Apple/FBI battle may be over, but unfortunately the war is just beginning.

Podcast Episode Pick

My pick is related to the Apple/FBI encryption issue, and it comes from the brilliant Note to Self podcast. The episode (13 minutes) covers the issue from an author’s point of view. It’s a different perspective that is eye-opening and refreshing (as are most of Note to Self’s episodes).


How to Reset a Mac OS X Application (ScreenFlow in this case)


This article covers how to do an application reset of ScreenFlow 5 on Mac OS X Yosemite. It’s more of a reminder to myself, but I’m documenting it in case it helps someone else.

The usual disclaimer applies here – I’m not responsible for any potential destruction that may occur on your machine if you follow any of this information.

It started with constant crashes of ScreenFlow 5.0.6

I’ve been working on creating videos for the online version of my Stanford Continuing Studies JavaScript class. I’ve been using ScreenFlow for quite a while because it’s awesome (i.e. intuitive and easy to use), or better said – it was awesome up to now 😦 .

So what happened? The long and short of it was that whenever I tried to smooth volume levels by checking the “Smooth Volume Levels” checkbox, the application would crash. Every stinking time – ScreenFlow 5.0.6 crashed.


ScreenFlow’s fantastic crashing sequence

First I would get the problem report screen, and I would click “Reopen”.


Then when ScreenFlow started up again I would get a Crash Reporter screen

I’ve seen this crash reporter screen over and over and over again. I’ve included my email with the report but I’ve heard nothing from Telestream. At this point, I’ve reached the conclusion that it’s an automated report that might go to Telestream but then again it might not (as in /dev/null on Telestream’s side).


How The Omni Group deals with crashes

As a comparison, The Omni Group approaches this correctly: when OmniFocus crashes (whether on Mac OS X or iOS), it generates a crash report that it sends via email. The Omni Group’s ticketing system responds with a ticket number and an explanation that the crash has been recorded in their system. With such an acknowledgment, I feel as a user that someone (perhaps Ken Case in cat form) will actually see the ticket.



I pointlessly attempt to submit a ticket to Telestream asking for crash resolution and a download of an earlier version of ScreenFlow

I attempted to submit a ticket to Telestream through my registered user account, but this didn’t work. Then I vented my frustration on Twitter (yes – I know – not constructive…though the crash logs are constructive – aren’t they…come on, Telestream?).

I also ran ScreenFlow 5.0.2, and the same crash occurred over and over again. So that’s a useful data point – it’s not just the latest version that is problematic.


It’s time to work the problem

Maybe it’s my environment. Maybe it’s a recent Yosemite security update. Maybe it’s a solar flare. There are too many things that might have changed since the time when ScreenFlow was stable. So while I can’t track all the environmental/system changes from that point, I can at least clean up any plists, cache, and crash files related to ScreenFlow (this is the duct tape approach).



How do I find all the settings/cache files related to ScreenFlow?

I have a copy of CleanMyMac 2, and I run it to see what I get under the “uninstaller” option. When I click on the “Application Reset” button, CleanMyMac helpfully checks the boxes next to all the settings/cache/crash files that are related to ScreenFlow but are not part of the ScreenFlow program itself. There’s a big “Reset” button at the bottom of CleanMyMac, and I use it to delete all of these files.


I re-run ScreenFlow after the above “application reset”

OMG – smoothing volume levels works without a ScreenFlow crash…for a couple of videos.


After editing a few videos – the crashes recur

So this is an electronic duct-tape solution, but it works for now.



A teeny tiny problem with CleanMyMac 2

One problem with CleanMyMac is that after deleting these files, it doesn’t refresh its list of ScreenFlow-associated files. To see the list again (so I can re-delete the files), I need to quit CleanMyMac and relaunch it whenever ScreenFlow begins to crash.

It would be great to script this up so I can run it as a bash alias. Luckily, CleanMyMac provides a very helpful way to find out the location of the specific folders/files.


The best-est bash alias ever

Ok – so it’s not the best, because the ScreenFlow paths are hardcoded and bash is the shell equivalent of the Punisher (at times). But it’s good enough for now.
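Here’s a sketch of what such an alias might look like. The paths below are typical locations for an app’s settings/cache files, but the exact list (and the net.telestream.screenflow5 bundle identifier) are assumptions – verify against what CleanMyMac actually shows on your machine before running anything like this:

```shell
# Sketch of a ScreenFlow "application reset": delete the plist, cache,
# and saved-state files that CleanMyMac's Application Reset would remove.
# The paths and bundle identifier below are assumptions, not a verified
# list -- adjust them to match what CleanMyMac reports for your install.
screenflow_reset() {
  for f in \
    "$HOME/Library/Preferences/net.telestream.screenflow5.plist" \
    "$HOME/Library/Caches/net.telestream.screenflow5" \
    "$HOME/Library/Saved Application State/net.telestream.screenflow5.savedState"
  do
    if [ -e "$f" ]; then
      echo "removing: $f"
      rm -rf "$f"
    fi
  done
}

# Drop the function into ~/.bash_profile; if you really want an alias:
# alias sfreset='screenflow_reset'
```

Because the function only touches paths that actually exist, re-running it after a clean launch is harmless.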



Looking for instructions on Mac OS X app resets on DuckDuckGo and Google doesn’t yield many useful results. CleanMyMac 2 is pretty good about showing application files related to caches, crashes, and plists. Using these as a guideline, it is fairly easy to create a bash alias that brings out a somewhat big duct-taped club for ScreenFlow’s settings and deals with a recurring crash.

Introducing The eli4d Gazette

Hello Friend,

Welcome to Issue 1 of the eli4d Gazette. This will be my way to keep in touch with former students and new friends that I have made through My intent is to keep this short and sweet and pick some interesting things related to tech and non-tech. This will be delivered every two weeks, and it may change to a weekly delivery depending on how it goes.

I value your time and attention, and I hope you find this worth reading. If you are interested, you can subscribe through the following url:

The eli4d Gazette

Issue 001: March 16, 2016

Tech Pick (JavaScript related)

Brendan Eich, creator of JavaScript, discusses his view of JavaScript’s direction at the Fluent Conference (click the ‘x’ to get past the “you need to login” screen). His message is the same as in other years – “don’t bet against JavaScript” – but this year he added WebAssembly to the betting phrase. Apparently Ash from Evil Dead is his spirit animal (so the programming approach of jumping into JS and ‘hacking’ is built into the language’s DNA :-O ). Is Brendan right about JavaScript’s future? Who knows? He’s a smart guy, but JavaScript is out of his and anyone else’s direct control. The battle lines are certainly being drawn in the mobile space between web apps and native apps (so far native has trounced web in terms of performance).

Edu Pick

I tried to provide a comprehensive approach to picking server-side software through my “Using the Boring / Old / Popular (BOP) criteria for server side software evaluation” article. It was geared towards beginners (developers and those who need to pick server-side technologies), since experienced devs will have a “gut feel” and won’t need such a numerical approach.

Podcast Episode Pick

Eric Molinsky created an amazing episode called “Why They Fight”, where he connects superhero battles to D&D character alignments. I know it sounds ridiculously geeky, but once you listen to this episode, you will never look at TV/movie/story heroes and villains the same way again. If you’re a writer, the character alignment table may give you a new twist/angle on how you view/build characters in your writing.

Using the Boring / Old / Popular (BOP) criteria for server side software evaluation


Episode 14 of the “Under the Radar” podcast covered the specifics of how to best architect a back-end service for your mobile app, web service, web application, and so on. It’s a follow-up to a previous episode about the Parse shutdown and the potentially high cost of external dependencies. The one part of this conversation that really caught my ear was around 09:15, and it contained the following interesting approach:

“What you want most of all when choosing server software – if you don’t want to be administering and tweaking your server constantly – what you want is old, boring, and popular. Those 3 things – old, boring, and popular. New and trendy does not always mean better.”

Marco and David emphasize that you should reserve the exciting technology for the customer-facing side, whether it’s your mobile app or a browser-side JavaScript framework that will amaze your customers. The back-end of your application – the “infrastructure” – should be technology that is boring, old, and popular (let’s call it BOP, since you can never have enough acronyms), because you want solid reliability, in the same way that at home you want a solid source of water and electricity. After all, usually the frontier of front end development is…the front 🙂 (of course this is a generalization for business-to-consumer applications).

A word of thanks

I’ve approached this by looking for numbers and meaning at and Obviously projects (like the Apache web server) cannot be looked at in this way because the direct stats aren’t there.

Special thanks goes out to:

  • Marco and David for the content of their podcast and the BOP idea/approach
  • Rachel Berry from GitHub for answering my questions about the best way to interpret GitHub statistics
  • Andrew Nesbitt from for answering my incessant questions about’s statistics

Note that I discovered through the amazing Changelog podcast (episode 188). If you’re looking for a tool that will help you figure out your open source compliance (as well as many other things) – check out’s services (I would suggest that you listen to the Changelog podcast to get a clear understanding of’s value).

Let’s break this down

If you’re new to this, the first question is: where to begin?

I think the place to start is to find some categories that are related to back-end technologies. After all, there’s no point in comparing Linux (an operating system) to Ruby on Rails (a web framework).

Two sources that seem interesting in terms of such categories are:

GitHub’s showcases page

In terms of back-end technologies (i.e. server side software) that are shown on the showcases pages the following areas seem more relevant:

  • Web application frameworks
  • Programming languages
  • Open Source Operating Systems
  • Projects that power GitHub (i.e. seeing the components that run a huge enterprise like GitHub – some of these components will likely fit the BOP model; some of course will not, since GitHub can afford to hire devs for very niche and young projects)

Note: The image below is an aggregation of the 3 pages of this showcase, and the “Search showcases” field is great for finding the category of a specific project.

The second source’s main page has lots of different ways to look for projects. The keyword section at the bottom seems quite interesting.

Boring, Old, Popular: What does ‘Old’ mean?

While I initially wanted to start with ‘Boring’ because BOP starts with it (and BOP is memorable), I realized that the better way was to start with the property that is easiest to figure out, or at least something that seemed easier.

What does ‘old’ mean in terms of software? Is 2 year old software ‘old’, or does 10 year old software count as ‘old’? (in the case of this post ‘software’ means ‘open source project’)

The definitive answer is “it depends”, but that doesn’t help much. I think the better question is “is this piece of software ‘old’ within its category?” In the following examples, we’ll look at the web application frameworks showcase on GitHub.


Rails is 12 years old…that’s definitely old – isn’t it?


Express is 6 years old


Laravel is 5 years old…so what gives?


Meteor is 5 years old….but is that old?


What about the age of the Internet?

Good lord – that depends on your definition. Does it start from the 1950s, when computers became more widely used by governments and universities?

If I’m going to pick a number, I’m going to use HTTP as my criterion, so: 2016 – 1989 = 27 years.


Damn it – what is ‘old’?

I was tempted to use log2 to help figure out the numbers (because logarithms are COOL), but then I thought about what it means to be ‘old’ as an adult, and used that to figure out the ages of adolescence, young adulthood, middle age, and old age. Here’s an imperfect attempt at figuring this out (I use percentage of LEB, life expectancy at birth, to help indicate the ranges for the age stages).

Note that I’m using Soulver for these calculations (the best-est ‘human’ usable spreadsheet program out there).


So if I use the age of the Internet as 27

Umm…this is a bit of a chicken and egg thing in terms of current technology and the origin of technology.


Let’s make InternetLEB 16

I definitely feel that Rails is ‘old’. What if I take 16 as the InternetLEB? The year 2000 seems like the ‘right’ year for Web 1.5/2.0 – doesn’t it?

This makes more sense to me, but you can pick whatever InternetLEB works for you. So here’s a criterion for judging the age of a project. Based on the Marco/David criteria, you would want a project that is in the middle-age to old-age range. That is the definition I’m picking for the ‘Old’ part of the BOP criteria.
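For what it’s worth, the age-factor arithmetic can be sketched as a tiny shell function. The function name and the truncating integer output are my own choices; InternetLEB is hardcoded to 16 per the discussion above:

```shell
# Age factor = project age as a percentage of InternetLEB (16 years).
# awk does the floating-point division; %d truncates to a whole number.
age_factor() {
  awk -v age="$1" 'BEGIN { printf "%d\n", (age / 16) * 100 }'
}

age_factor 12   # Rails: prints 75, i.e. middle-age heading toward old-age
```

A project whose age factor lands in the middle-age to old-age range passes the ‘Old’ part of the criteria.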


Boring, Old, Popular: What does ‘Boring’ mean?

Stepping back for a second to the Under the Radar episode about this whole BOP criteria: the discussion centers on backend software – software that resides on the server, software that is supposed to be rock steady so you don’t have to worry about your web site or web service falling on its face on a frequent basis. So we’re talking ‘boring’ in this context, not ‘boring’ as in “uninteresting and tiresome; dull.”

Still, what’s a better definition in this context?

My definition for this is “software that has clarity in terms of usage and is used in many projects because of this clarity”. To me ‘clarity’ refers to a couple of things:

  • how it is used in the context of application/service (i.e. well defined use)
  • used by many others, which in turn leads to clarity in terms of direct documentation or indirect documentation (i.e. stack overflow answers that add up to common and clear usage practices)

Now in terms of hard numbers – I’m not sure how to define and discover ‘boring’ in terms of GitHub or The closest thing that I can think of is the “Dependent Repositories” number from’s SourceRank number (example shown for Rails). I was unclear about the difference between “Dependent Projects” and “Dependent Repositories” and I got the following clarification from Andrew Nesbitt:

“Dependent repos and dependent projects are two separate things. For dependent projects of a rubygem, it’s the number of other projects that list it as a dependency; for rails there are ~7940 other rubygems that depend on it.

For dependent repos, it’s every GitHub repository that has rails listed as a dependency in its Gemfile or Gemfile.lock, of which there are around 60,000.”

I asked Rachel Berry if there was anything equivalent on GitHub and there didn’t seem to be anything that was directly equivalent. She suggested the use of code search to provide a rough statistic. So something like or could provide a possible alternative. The problem with this approach is that you need to know how a dependency is included and then deal with the various variations in inclusion strings (besides other issues like different package managers for different software).

Overall, I don’t think there is any “hard” number that can easily capture the ‘boring’ criteria. I think that in this case ‘boring’ is really the result of looking at ‘old’ and ‘popular’. So instead of the BOP criteria it should perhaps be (B)OP or B/OP. Moving forward from this point – I’m going to go with (B)OP.


Boring, Old, Popular: What does ‘Popular’ mean?

I left the “best” for last – POPULARITY. What the heck is ‘popular’ when it comes to the BOP criteria?

Is popularity based on GitHub stars?

How useful are GitHub stars in evaluating popularity? They seem somewhat transient and unreliable for this criterion.


What about popularity based on GitHub forks?

Forks, by their very nature, are other people’s experimentation with a project. Of course there can be upstream contribution, but how many forks actually result in contributions back to the project?

Forks seem like a way of learning and modifying a project’s code but I don’t think that they have anything to do with popularity.


What about project members?

So the “Members” graph is a visual representation of the Forks number (i.e. “members” of the fork network). It’s another view of forks, and therefore its usefulness for measuring ‘popularity’ is questionable.


What about a project’s contributors as a reflection of popularity?

I think that this is similar to forks – specific people being interested in a project for their own reasons.


Something that ‘trends’ is popular – isn’t it?

Something that is trending may reflect momentary popularity. But it is certainly in conflict with the ‘old’ and ‘boring’ criteria, so this is definitely not a good measure.



Actually, I don’t, but I’ll take a run at it anyway.

I don’t know what’s popular or how to best evaluate ‘popular’ in terms of the BOP criteria. Maybe it’s one of those “I’ll know it when I see it” things. Still, that doesn’t help anyone who is new to backend software infrastructure. The best thing that I can come up with at this point is’s SourceRank number as a decent data point for popularity. Is it the best? Probably not. But I don’t see anything better at this point.

Note: We need to keep in mind that log values are used in the creation of SourceRank, so a difference of 2 between the SourceRank numbers of two projects could be quite significant.


(B)OP Comparison Example

So essentially – the (B)OP criteria boils down more to the O and P, since B falls under O or P – your choice.

  • Old = age based on the previously mentioned age/stage criteria using the year 2000 as a baseline
  • Popular = SourceRank at this point or using a GitHub source search if the project is unavailable on

With the above in mind, let’s compare Rails and Express.

The (B)OP criteria for Rails

So for Rails we’re looking at:

  • Old = 12 years with an age factor of 75%, so it’s at middle age, about to hit old age
  • Popular = SourceRank of 28


The (B)OP criteria for Express

So for Express we’re looking at:

  • Old = 6 years with an age factor of 44%, so it’s at middle age
  • Popular = SourceRank of 26


Which to choose?

So all things being equal (discounting things like experience in Ruby/JavaScript, which could easily change the decision), the choice in this case would be Rails, due to both the O and P factors. Granted, other comparisons might be much closer, and then it comes down to preferences of programming language, educational interest in a particular project or technology, and time for experimentation and implementation.
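As a sketch, the decision rule above could look something like this in shell. The numbers are the ones quoted for Rails and Express; the tie-handling (falling back to preference when neither project wins on both axes) is my own assumption:

```shell
# Pick the project that wins (or ties) on both the Old and Popular axes.
# Arguments: nameA ageA sourcerankA nameB ageB sourcerankB
bop_compare() {
  if [ "$2" -ge "$5" ] && [ "$3" -ge "$6" ]; then
    echo "$1"
  elif [ "$5" -ge "$2" ] && [ "$6" -ge "$3" ]; then
    echo "$4"
  else
    # One project is older, the other more popular: no clear (B)OP winner.
    echo "no clear winner"
  fi
}

bop_compare Rails 12 28 Express 6 26   # prints: Rails
```

Since Rails wins on both age (12 vs 6) and SourceRank (28 vs 26), the function agrees with the conclusion above.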


So in summary – make your back-end server and services the best they can be by choosing the most (B)OPish (boring, old, and popular) technology at the server-side level of your technology stack. This advice would seem to contradict the “I want to develop on the latest and greatest technology” impulse, but it is the best path to system administration sanity, and it takes away nothing from the fun part of your product, where you can still use the latest and greatest.

Some other resources that I came across

While researching and reflecting on this post, I came across some resources that might be useful for those looking for ways to distinguish between different projects (this is not limited to server-side projects):


About this post

This post was written by @eli4d and it originally appeared on on March 10, 2016.

The Laravel Podcast Episode 42 and the Meaning of …

I really enjoyed last week’s Laravel Podcast episode 42. Since it is episode number 42, I expected it to contain the answer to the ultimate question of development.

Now when you listen to the episode, you might think that the ultimate question that’s being answered is “which is the best object relational mapping approach/pattern – ActiveRecord pattern or the Data Mapper pattern?”

Or perhaps the ultimate question that’s being answered is “Should the ‘Single Responsibility Principle’ be violated when it comes to ORMs?”

Of course you need to listen to episode 42 to make your own decision. Perhaps it’s all ORM drama and dogma that is just a mystery wrapped in a Twinkie.

Personally, I think that the ultimate question is “how should you approach feature creation when it comes to software development?” And the answer is stated at the 46th minute of episode 42 (if only it were the 42nd minute…it would have been perfect…it’s time to repeat ‘serenity now’^100 and come to terms with this lack of symmetry). So here is the answer:

“Don’t do it until you need it.”

Sounds simple – doesn’t it?