In this article I go over three things that, in my mind, would make JavaScript better (though they are probably impossible for various reasons). None are new ideas; this post is an expansion of a tweet I wrote when I saw someone asking about improvements for JS. I’m going to be speaking primarily about browsers and the web, though much of this might apply to Node.js (an area I’m not as familiar with, so I can’t speak on it confidently).

Versions for JS

Just as a heads up: I’m not really talking about different ECMAScript versions (e.g. ES6, ES2019, etc.) here–I’m talking about versions in the sense most programming languages use the term.

Right now, there are effectively two versions of JS: strict mode and “sloppy mode”. Beyond that, feature detection happens dynamically: if a script makes use of any feature or language change that isn’t supported by the environment running it, it will error, either silently or loudly depending on what that feature is.
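To make that concrete (my example, not anything from a spec): new syntax fails loudly, since an older engine throws a SyntaxError while parsing it even if the line never runs, while new APIs fail only when reached, which is why detection has to happen at run time:

// An old engine errors while *parsing* new syntax such as
// `const x = a ?? b;`, even if that line is never executed.
// New APIs, by contrast, fail only when called, so code that
// wants to be safe detects them dynamically:
if (typeof ''.replaceAll === 'function') {
  console.log('a-b-c'.replaceAll('-', ' '));
} else {
  // Fallback for engines without String.prototype.replaceAll
  console.log('a-b-c'.replace(/-/g, ' '));
}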

To get around this, developers do one or both of the following:

Strategy 1: Transpile/polyfill to the lowest common denominator

To get around browsers not supporting features, we use tools like Babel to convert JS that uses newer features into JS supported everywhere, or to polyfill the missing APIs. However, this has a few problems:

  1. Most of the time, transpilation increases file size and parsing time. Here is a dramatic instance of that (not that this is Babel’s fault…); see the sketch after this list for a small-scale illustration. This is despite the fact that most browsers don’t need the extra code.
  2. Transpilation requires a target platform. For now, the de facto standard seems to be ES5, but this is already changing: websites that can afford to drop older browsers like IE11 are doing so (e.g. GitHub). What is the right target? This might get harder to answer once IE11 disappears and we truly only need to support evergreen (i.e. silently auto-updating) browsers, since different browsers might implement features in different orders. Automated tools like browserslist reduce the impact of this point, but they require upkeep: if a website stops development now but stays online, its JS bundle won’t get any faster, despite the herd of browsers moving to support the newer features in the source JS.
  3. If one takes shortcuts (via options like “loose mode” for various Babel transforms), one can actually introduce bugs by fragmenting the underlying semantics of a particular feature (though I admit this problem is not super likely).
  4. Transpilation cannot get around the limits on efforts to dramatically evolve the JS language (especially those which remove old baggage). Syntax that is fundamentally incompatible with old and seldom-used features simply can’t be introduced, because we can’t break the web.
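To make the first point concrete, here’s roughly what targeting ES5 does to a single line of ES2020 (illustrative output only; real Babel output varies by version and configuration):

// Modern source: one short line.
const street = user?.address?.street;

// Approximately what an ES5-targeting transpiler emits for it:
var _user$address;
var street =
  user === null || user === void 0
    ? void 0
    : (_user$address = user.address) === null || _user$address === void 0
    ? void 0
    : _user$address.street;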

Strategy 2: Offer different bundles to different platforms based on proxies

The idea is that you can identify what a browser might need based on the version presented in its “user agent” (UA) string. There’s a whole article on MDN on why this is a bad idea in general. However, that hasn’t stopped influential companies like Twitter from doing it.

Google Developers instead encourages using support for <script type="module"> as a discriminating factor. This seems a bit better, but of course it’s just one test–Safari is not an evergreen browser, so despite it supporting modules, we can’t rely on this check as a proxy for “generally new” feature availability in the medium or long term.
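For reference, the pattern from that guidance looks roughly like this (bundle names are placeholders): browsers that understand ES modules load the modern bundle, while older browsers ignore type="module" and run the nomodule fallback instead.

<script type="module" src="/js/app.modern.js"></script>
<script nomodule src="/js/app.legacy.js"></script>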

How versioning fits in

As I said at the beginning, there already is a versioning scheme for JS. Strict mode changes the behavior of JS scripts in a backwards incompatible way: if you had a script that worked in “sloppy mode”, it might break in strict mode.
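A minimal illustration of that kind of break:

'use strict';

// In sloppy mode this silently creates a global variable;
// with 'use strict' in effect it throws a ReferenceError.
mistypedVariable = 42;

// Sloppy mode silently ignores this failed delete (it just
// returns false); strict mode makes it a TypeError.
delete Object.prototype;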

However, it doesn’t look like there are any plans to extend this approach further. When “#SmooshGate” happened (an incident in which browsers accidentally broke sites relying on old JS extensions by adding incompatible features), versioning was suggested by more than one person. After all, with versioned JS, the issue evaporates. Commenters on Hacker News pushed back, pointing out that supporting multiple distinct versions introduces significant complexity for the developers of JS engines. One person even noted:

This has been discussed at length and they have decided not to do it. It’s not a missing feature, it’s by design.

There are other negatives to versioning expounded on in this wonderful article, such as the following, quoted here:

  • Engines become bloated, because they need to implement the semantics of all versions. The same applies to tools analyzing the language (e.g. style checkers such as JSLint).
  • Programmers need to remember how the versions differ.
  • Code becomes harder to refactor, because you need to take versions into consideration when you move pieces of code.

I can’t speak much to the work of maintaining engines–that’s done by engineers far more skilled than I am. My immediate reaction is that managing different versions might enable stricter handling of code within each version, leading to simplification, though that’s probably a naive perspective.

On the topic of remembering how versions differ, I would say this is simple in comparison to the inconsistent mess of browser compatibility, JS transpilation configuration, and generally frequent change within the ecosystem (though I will be the first to say that the last point has been fairly exaggerated). In other languages, versions change, and this is considered business as usual.

With regards to the added difficulty in refactoring, I would say that this, again, is probably simpler than things we already do semi-regularly, such as upgrading major versions of important libraries (e.g. jQuery, webpack), and could likely be automated. Additionally, the difficulty depends heavily on the audacity of those at the reins of JavaScript, who, based on the current environment, seem unlikely to cause unnecessary upset.

Everything is an expression

The main area where I wish this were the standard within JS is with if/else, try/catch, and switch/case statements. This is something I use very frequently within Ruby, where nearly everything is an expression.

Example: if/else

const a =
  if (cond) {
    b
  } else {
    c
  };
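(JS does already have an expression form for this simplest case, the conditional operator, but it doesn’t scale past a single expression per branch and has no try/catch or switch counterpart:)

// Today's nearest equivalent, limited to one expression per branch:
const a = cond ? b : c;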

Example: try/catch

const a =
  try {
    somethingThatMightFail()
  } catch {
    fallbackValue
  };

Example: switch/case

const a =
  switch (b) {
    case 'apple' {
      'fruit'
    }
    case 'broccoli' {
      'vegetable'
    }
    default {
      'unknown'
    }
  };

It’s possible this would need different keywords to replace case and default, both for the sake of JS interpreters and to maximize backwards compatibility, because cases currently function as labels.
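For contrast, here is the current statement form, where the colon-terminated case clauses behave like fall-through labels:

// Current switch: `case`/`default` clauses are colon-style labels
// with fall-through, which is why a braced, colon-less `case`
// might need new keywords to stay unambiguous to the parser.
let kind;
switch (b) {
  case 'apple':
    kind = 'fruit';
    break;
  case 'broccoli':
    kind = 'vegetable';
    break;
  default:
    kind = 'unknown';
}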

Current proposals

To achieve this, do expressions were proposed 2 years ago, which satisfy these requirements with slightly more verbose syntax. E.g. for the if/else above, you’d write:

const a = do {
  if (cond) {
    b
  } else {
    c
  }
};

However, the proposal is still at stage 1 of the 4-stage TC39 (the effective JS steering committee) process, though it’s still being discussed. Some have asked “why do we need the do?” and proposed making the first syntax (without the explicit do) part of the language, but this can’t be done without interfering with existing, uncommon language features (another example of not being able to change syntax due to the “version constraint”).
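In the meantime, the usual workaround is an immediately invoked function expression (IIFE), which buys expression position at the cost of some noise:

// Emulating a do-expression today with an immediately invoked
// arrow function: each branch returns the value assigned to `a`.
const a = (() => {
  try {
    return somethingThatMightFail();
  } catch (e) {
    return fallbackValue;
  }
})();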

Improved caching

I would explain this, but there are actually people already solving this problem, and they’ve put together this article explaining how caching can be improved around the web.

However, there are still unresolved issues here: how does this work for different browser targets? It’s fine if all libraries use only browser-supported features, but we all know that won’t be consistent for all features across all browsers into the future. A lot of this builds on the issues presented in the section on versioning above: if there’s no way to talk consistently about versioning, it’s much harder to solve these problems in an automated way.