A Short History of User Interfaces 1/2

From punch cards to Extended Reality – and beyond

Extended Reality (Augmented Reality and Virtual Reality) is a fascinating technology. But if we try to reduce the concept to its core, it is essentially a new kind of user interface. A user interface is “the space where interactions between humans and machines occur”. With AR and VR, we’ll be able to see and hear output from our computers all around us, rather than on a mere flat screen. Instead of using a keyboard and a mouse to give instructions to the computer, we will use a hand controller or, more simply, hand gestures and voice control.

So I thought a little bit of history would help us to put things in perspective. And while writing the first draft of this post about the history of user interfaces, I realised this history was more personal than I thought.

In the beginning, there was the punched card

My parents met while working in the computing department of a French bank, and I remember them telling me with bemused nostalgia about the many inconveniences of punch cards. They were heavy, cumbersome, and whenever a stack fell it was a mess to sort the cards back into order.

Punched card

Engineers and programmers in the early days of computing couldn’t afford the luxury of even a basic user interface. The first computers used punched cards like the one in the picture here; each hole (or its absence) was the equivalent of a bit. Punched cards were invented in the 18th century, long before computers, by Jacques de Vaucanson to control looms, and were later used in automatic looms such as Jacquard’s machine. In 1890, Herman Hollerith, whose company would later become part of IBM, adapted them for his tabulating machines. Both programs and data were punched onto cards on dedicated external devices called “keypunches” before being fed into the computer. At some point, printers were added to computers to output data in readable form, but this didn’t fundamentally change the nature (and the drawbacks) of this interface.

The batch interface, as it was also called, became a bottleneck for the use of computers. At best, a standard punched card with 80 columns and 12 rows could hold 960 bits, or 120 bytes, of data.

To give an idea of the limitations of punched cards, storing the contents of the memory of the latest iPhone X would require about 572 million punch cards. One would need a fairly large pocket to carry those around. In practice, handling punch cards was a full-time job for technicians, and programmers had to wait for those technicians to feed their cards into the system. The human process began to take more time than the computation itself.
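For readers who like to check the numbers, here is a minimal back-of-the-envelope sketch of that figure. The 64 GB iPhone X model and the use of binary gigabytes are assumptions of mine, not something stated above:

```python
# Rough check of the punch-card comparison above.
# Assumptions: 64 GB iPhone X, standard 80-column x 12-row card.
CARD_BYTES = (80 * 12) // 8          # 960 holes per card -> 120 bytes
IPHONE_BYTES = 64 * 1024**3          # 64 GB of storage, in binary gigabytes

cards = IPHONE_BYTES // CARD_BYTES   # cards needed to hold the phone's memory
print(f"about {cards // 10**6} million cards")   # -> about 572 million cards
```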

Punched cards (and their close cousin, punched paper tape) were an acceptable interface for the earliest electronic computers such as the Colossus or the ENIAC during the war, when the Allies could assign dozens of people to producing and processing them. But with the development of smaller, cheaper and faster computers, computing reached a point where the main limitation was no longer necessarily the CPU (Central Processing Unit) but, increasingly, the input/output capacity. Something better was needed.

Keyboards and screens

In the 1960s, Command-Line Interfaces were developed and would become the new standard for user interfaces until the 1980s, including on personal computers. Readers of my age, or specialists in some areas, probably already know what these look like, but for others a quick reminder is necessary. The command-line interface consisted of a keyboard to input data (no mouse) and a screen displaying text-only information. On Windows, the Command Prompt is a remnant of this era. While most users will never get their hands dirty with command-line interfaces, their use is still relatively common.

DEC VT100 terminal (Jason Scott – Wikimedia)

As a child, I was lucky enough to have some of the first personal computers at home. Their level of performance was abysmal compared to what we have today. The Sinclair ZX81, for instance, had 1 KB of RAM (about 1,024 characters), though you could upgrade it to 16 KB. You could barely store the equivalent of this post’s text in memory. Thus I came to know the command-line interface without knowing its name; at that time it seemed the natural way to use a computer. For many years, Microsoft’s MS-DOS ruled over PCs. And yet, a new revolution was already on the march.

Graphical User Interfaces

Just a few years before the appearance of the first personal computers, the basic concepts of the Graphical User Interface (GUI) had been developed at the end of the 1960s. Douglas Engelbart demonstrated the first windows-based interface and the first mouse in 1968, and Xerox PARC (Palo Alto Research Center) added icons and menus during the 1970s. The first commercial attempts were a failure and Xerox gave up, but the idea was not lost on a young Silicon Valley startup: Apple. In 1984, Apple released the Macintosh, putting on the market the first affordable personal computer with a Graphical User Interface.

Xerox’s Alto GUI. Source: Wikimedia Commons

You are most certainly using a graphical user interface to read this blog, whether on a PC, a Mac or a smartphone, so there’s no need to present it in detail. Apart from its ability to present graphics and sound, the GUI rests on the WIMP concept: Windows, Icons, Menus, Pointers. That concept has remained essentially unchanged since its invention.

But we shouldn’t forget that at first there was a lot of skepticism; the new Graphical User Interface consumed more processing power than command-line interfaces, and to many people the benefits seemed limited. Xerox’s attempts were a failure, and even the Macintosh failed to sell as well as the IBM PC, remaining for a long time a marginal player in the market. And yet, the Graphical User Interface was about to supplant the command line as the interface of choice for most users. It is more intuitive, more visual, and requires less initial training. You don’t need to memorize hundreds of obscure command names with even more obscure options (what does prompt $p$g do in MS-DOS? Or dir file.ext /s?). It’s hard to imagine the explosion of personal computing happening as it did without Graphical User Interfaces.

My first incursions into GUIs were a bit off the beaten track; my father had installed a version of GEM, a product of Digital Research, Microsoft’s long-forgotten rival from the early 80s. I must have wasted hundreds of hours playing mindlessly with GEM Paint, in black and white, at a pitiful resolution of 640×400. It felt like a game compared to the command line – even though the command line can be fun to play with as well. Still, it was just a curiosity at the time; it took several more years before MS-DOS was definitively replaced by Windows 95.

Windows 10

Touch screens

Once again, Apple came to the forefront of this story. Touch screens already had a long history; the original concept was developed, once again, at the end of the 1960s. Apple launched several products based on touch screens, most notably the iPhone series, creating almost single-handedly, in just a few years, the new emblematic consumer product. The smartphone is today’s symbol of consumerism as much as the car used to be in the past. Touch screens make data entry even more intuitive: touching, resizing and rotating can all be done with a finger.

Photo by Matam Jaswanth on Unsplash

This time, though, touch screens haven’t replaced the previous generation. Touch screens rule on smartphones and tablets; keyboards and mice rule on computers. And the output is relatively similar, with graphics, icons and menus in both cases. The balance between mouse/keyboard and touch screen seems relatively stable for the time being; each interface has its own mix of advantages and drawbacks that suits different devices (personal computers on one side, smartphones and tablets on the other).

There would be more to say about the history of user interfaces up to the present day. We have barely mentioned sound, either as an input or an output. Nor have we mentioned the myriad of devices that can be used to interact with computers: cameras, microphones, graphics tablets, joysticks, trackballs… These devices are significant but limited additions to the standard GUI, with a smaller scope than mice/keyboards or touch screens. None of them has delivered a major change to user interfaces.

In the next post, we will see how Extended Reality fits into this history, and what might come next.

Virtual Reality, Serious Reality?

The first time I heard about virtual and augmented reality at work, I had to try hard not to laugh. But here I am, dreaming of it, working on it and writing about it. I now believe this is the future. Sure, there’s a lot of hype about AR and VR today, but that was also the case with the Internet in 2000, right before the dotcom bubble burst. And today Google and Amazon are among the world’s biggest corporations. We shouldn’t let the hype blind us to the potential of the technology. But how can we tell the real potential of AR/VR from the hype?

A good approach to understanding what really lies beneath the hype is to compare the current state of the technology, which is admittedly still clunky but already spectacular, with the vision that the main players are pursuing. Among those players are most of the so-called GAFAM. Mark Zuckerberg, Tim Cook and Satya Nadella, among others, are burning billions of dollars, and they will burn even more before they get the technology right. But given where the technology already is, there’s a reasonable probability that they can make it. We shouldn’t bet against them too quickly.

The State of the Art

Augmented and virtual reality are usually described as different and even antagonistic technologies, but it is more relevant to consider them together. I thought I was very smart when I started thinking about a concept I would call the virtuality continuum until I discovered it had already been invented in 1994. Well, at least I’m not crazy.

Virtuality Continuum

In practice, the continuum encompasses several categories of devices.

The first is simple augmented reality as seen, for instance, on an iPhone or Android phone, where images are superimposed on your smartphone’s camera view. The advantage is that this technology is already available to millions of users on their smartphones, with no additional equipment required. You can move around the displayed object in 3D, but it does not offer a true stereoscopic 3D view, since everything is shown on a single flat screen.


Microsoft HoloLens

The second category is AR on augmented reality headsets. One of the most advanced devices available here is the Microsoft HoloLens, but many other companies, such as Finland’s Varjo, are working on this technology. It offers full stereoscopic 3D, presenting a slightly different image to each eye, superimposed on reality. These headsets are also usually autonomous, requiring no connection to a PC. They are among the most expensive devices (the development edition of the HoloLens sells for $3,000) and can still be a bit heavy to wear.
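To make the “slightly different image to each eye” idea more concrete, here is a minimal, purely illustrative sketch; the 65 mm interpupillary distance and the simple head model are assumptions for the example, not a description of how the HoloLens actually computes its views:

```python
import numpy as np

# Illustrative only: stereoscopic 3D means rendering the same scene from
# two viewpoints separated by the interpupillary distance (IPD), one per eye.
IPD = 0.065  # metres; an assumed average value, not a device specification

def eye_positions(head_position, right_vector):
    """Return the left and right eye positions for a given head pose."""
    head = np.asarray(head_position, dtype=float)
    right = np.asarray(right_vector, dtype=float)
    right = right / np.linalg.norm(right)  # normalise the "to the right" direction
    return head - right * (IPD / 2), head + right * (IPD / 2)

# Head 1.7 m above the ground, with +X pointing to the wearer's right.
left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
print(left_eye, right_eye)  # two slightly different viewpoints -> two slightly different images
```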

HTC Vive

The third and fourth categories are virtual reality devices, either fully autonomous (Google Cardboard, Google Daydream, Samsung Gear VR) or connected to a PC (Oculus and Vive). The autonomous ones are cheaper and usually based on smartphones, but offer relatively low resolution and limited controls. The PC-connected ones are more powerful, with higher resolution and speed, but the cable makes these devices a bit more cumbersome to use, and they require an expensive PC.

This is just a quick, indicative list. We can expect more devices to appear in the future, and the borders will keep moving. Some devices might project images directly onto the eyes, and at some point we might even have to include holograms. Given all this uncertainty, the big question now is where it could all lead.

The Vision

We could quote Mark Zuckerberg or Tim Cook, but instead let’s listen to the CEO of a company with one of the most ambitious visions, the Tesla of mixed reality, backed by Google and Alibaba: Magic Leap’s Rony Abovitz.

The Untold Story of Magic Leap

At 3:24: “But Rony is already thinking about how it might replace all of our other screens too.”

“I can conjure a tableau. I can conjure many televisions. I can conjure home theater…”

If they can realise this vision, if they can go that far, it will reshape everything. Televisions, desktops and laptops, screens, keyboards and mice, joysticks and consoles: anything that is currently used as an interface between us and the digital world will be replaced by mixed reality devices. We will need only one single interface, for both input and output. It will probably consist of a pair of augmented reality glasses, with our hands and fingers as controllers. An embedded camera will capture and transmit our facial expressions for improved communication.

Magic Leap - the whale
Fantastic… but can they do it?


But we’re far from there yet. Magic Leap’s technology is not ready, and the company is so secretive that we must remain cautious about its promises. Augmented and virtual reality are divided between themselves, and split across several incompatible devices and platforms: Oculus, HTC Vive, Microsoft HoloLens, Google Daydream…

Right now, a standard setup for a satisfying VR experience with either the Vive or the Oculus comes to around 2,000 USD or EUR, not counting the cost of software. The user must also have some space available at home or at work to use it. In spite of great design, these headsets are a bit heavy and stifling to wear, so after an hour of use even a compulsive gamer needs a break. The HoloLens is even more expensive and offers a narrow field of view. At best, this is going to be a long road.

A Long Road

“Once you create and dominate a small market, then you should gradually expand into related and slightly broader markets.” Peter Thiel — Zero To One

The most likely scenario, in my opinion, is one where the world of Mixed Reality is built progressively, step by step. The initial applications of virtual and augmented reality have existed for a while and consist of niches: simulation and training, gaming, architecture, complex data visualisation… We will look at these applications shortly.

Each of these applications must make a case compelling enough that users are willing to spend the money, time and effort required to acquire the technology and learn how to use it.

Each new niche, each new application, will expand the market and stimulate the emergence of new ideas, which will in turn feed the development of more applications. “Killer apps” may well emerge, but we shouldn’t hold our breath waiting for one to appear suddenly and set off an explosion of AR/VR. It probably won’t happen like that. Computing didn’t emerge suddenly out of one killer app; it evolved over decades through numerous “killer apps”, from Enigma code-breaking to Big Data and AI.

Follow the White Rabbit

I’m not really a white rabbit but you get the idea. A whole new world of ‘Mixed Reality’ is awaiting us and we’re going to follow the trail and see where it goes.

Beyond video gaming, a topic I certainly enjoy but one that is already well covered, we’ll focus more specifically on the serious aspects of AR/VR: business applications and non-gaming uses for individuals. While ideas are exploding all around us, I felt the need for a global vision of AR and VR. I hope to start building that vision here and to share it.

We will follow what happens in the vast, dynamic, fast-moving fields of augmented and virtual reality. And we’ll look at this field from different perspectives. Here are some of the questions we are going to address over the coming weeks:

What are, what will be, and what could be the areas of application for serious AR/VR?

Who are the main players in AR/VR?

How should we choose between all the different platforms?

What kind of interfaces will we see?

How can we build prototypes?

We’ll face more questions than answers. The future is yet to be written — and it might be with a virtual pen.