A Short History of User Interfaces 1/2

From punch cards to Extended Reality – and beyond

Extended Reality (Augmented Reality and Virtual Reality) is a fascinating technology. But if we try to reduce the concept to its core, it is essentially a new kind of user interface. A user interface is “the space where interactions between humans and machines occur”. With AR and VR, we’ll be able to see and hear the output of our computers all around us, rather than on a mere flat screen. Instead of using a keyboard and a mouse to give instructions to the computer, we will use hand controllers or, more simply, hand gestures and voice commands.

So I thought a little bit of history would help us to put things in perspective. And while writing the first draft of this post about the history of user interfaces, I realised this history was more personal than I thought.

In the beginning, there was the punched card

My parents met while working in the computing department of a French bank, and I remember them telling me, with bemused nostalgia, about the many inconveniences of punch cards. They were heavy and cumbersome, and whenever a stack fell it was a mess to sort the cards back into order.

Punched card

Engineers and programmers in the early days of computing couldn’t afford the luxury of even a basic user interface. The first computers used punched cards like the one in the picture here; each hole position represented a bit. Punched cards were invented in the 18th century by Jacques de Vaucanson, long before computers, and used to drive automatic looms such as Jacquard’s machine. They were adapted in 1890 by Herman Hollerith, one of the founders of IBM, for his electromechanical tabulating machines. Both programs and data were punched onto cards on dedicated external devices called “keypunches” before being fed into the computer. At some point, printers were added to computers to output data in a readable form, but that didn’t fundamentally change the nature (or the drawbacks) of this interface.

The batch interface, as it was also called, became a bottleneck in the use of computers. At best, a standard punched card with 80 columns and 12 rows had a storage capacity of 960 bits, or 120 bytes.

To give an idea of the limitations of punched cards, storing the content of the memory of the latest iPhone X would require some 572 million punch cards. One would need a fairly large pocket to transport that. In practice, handling punch cards was a full-time job for technicians, and programmers had to wait for those technicians to enter their cards into the system. The human processes began to take more time than the computation itself.
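If you want to check the arithmetic, here is a minimal sketch. It assumes a 64 GB iPhone X, reads “64 GB” as 64 × 2^30 bytes, and takes every hole position on the 80×12 card as one usable bit; change those assumptions and the count shifts accordingly.

# Back-of-the-envelope: how many 80x12 punch cards to hold an iPhone X's storage?
# Assumptions beyond the card layout: 64 GB of storage, read as 64 * 2**30 bytes,
# and every hole position usable as one bit.

CARD_COLUMNS = 80
CARD_ROWS = 12

card_bits = CARD_COLUMNS * CARD_ROWS    # 960 bits per card
card_bytes = card_bits // 8             # 120 bytes per card

iphone_bytes = 64 * 2**30               # assumed 64 GB capacity

cards_needed = iphone_bytes / card_bytes
print(f"One card holds {card_bytes} bytes")
print(f"Cards needed: {cards_needed:,.0f}")   # about 572-573 million cards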

Punched cards were an acceptable interface for the earliest electronic computers, such as the Colossus or the ENIAC, during the war, when the Allies could assign dozens of people to producing and processing them. But with the development of smaller, cheaper and faster computers, computing reached a point where the main limitation was no longer necessarily the CPU (Central Processing Unit) but, increasingly, the input/output capacity. Something better was needed.

Keyboards and screens

In the 1960s, Command-Line Interfaces were developed and became the new standard user interface until the 1980s, including for personal computers. Readers of my age, or specialists in some areas, probably already know what these look like, but for others a quick reminder is necessary. The command-line interface consisted of a keyboard to input data (no mouse) and a screen to display text-only information. On Windows, the Command Prompt is a remnant of this era. While most users will never get their hands dirty with command-line interfaces, they remain in common use among programmers and system administrators.

DEC VT100 terminal (Jason Scott – Wikimedia)

As a child, I was lucky enough to have some of the first personal computers at home. Their level of performance was abysmal compared to what we have today. The Sinclair ZX81, for instance, had 1 KB of RAM (1,024 characters), though you could upgrade it to 16 KB. You could barely store the equivalent of this post’s text in memory. Thus I came to know the command-line interface without knowing its name; at that time it seemed to be the natural way to use a computer. For many years, Microsoft’s MS-DOS ruled over PCs. And yet, a new revolution was already on the march.

Graphical User Interfaces

Just a few years before the appearance of the first personal computers, the basic concepts of the Graphical User Interface (GUI) had been developed, at the end of the 1960s. Douglas Engelbart demonstrated the first windows-based interface and the first mouse in 1968, and Xerox PARC (Palo Alto Research Center) added icons and menus in the early 1970s. The first commercial attempts were a failure and Xerox gave up, but the idea was not lost on a young Silicon Valley startup: Apple. In 1984 the Macintosh was launched, putting on the market the first affordable personal computer with a Graphical User Interface.

Xerox’s Alto GUI. Source: Wikimedia Commons

You are most certainly using a graphical user interface to read this blog, whether on a PC, a Mac or a smartphone, so there is no need to present it in detail. The GUI, apart from its ability to present graphics and sound, rests on the WIMP concept: Windows, Icons, Menus, Pointer. It has remained essentially unchanged since its invention.

But we shouldn’t forget that at first there was a lot of skepticism: the new Graphical User Interface consumed more processing power than command-line interfaces, and to many people the benefits seemed limited. Xerox’s attempts were a failure, and even the Macintosh failed to sell as well as the IBM PC, remaining for a long time a marginal player in the market. And yet, the Graphical User Interface was about to supplant the command line as the interface of choice for most users. It is more intuitive, more visual, and requires less initial training. You don’t need to memorize hundreds of obscure command names with even more obscure options (what does prompt $p$g do in MS-DOS? Or dir file.ext /s?). It’s hard to imagine the explosion of personal computing happening as it did without Graphical User Interfaces.

My first incursions into GUIs were a bit off the beaten track; my father had installed a version of GEM, a product of Digital Research, Microsoft’s long-forgotten rival of the early 80s. I must have wasted hundreds of hours playing mindlessly with GEM Paint, in black and white, at a pitiful resolution of 640×400. It felt like a game compared to the command line – even though the command line can be fun to play with as well. Still, it was just a curiosity at the time; it took several more years before MS-DOS was definitively replaced by Windows 95.

Windows 10

Touch screens

Once again, Apple came to the forefront of this story. Touch screens already had a long history; the original concept was developed, again, at the end of the 1960s. Apple launched several products based on touch screens, among them the iPhone series, creating almost single-handedly, within a few years, the new iconic consumer product. The smartphone is today’s symbol of consumerism as much as the car used to be in the past. Touch screens make data entry even more intuitive: touching, resizing and rotating can all be done with a finger.

Photo by Matam Jaswanth on Unsplash

This time, though, touch screens haven’t replaced the previous generation. Touch screens rule on smartphones and tablets; keyboards and mice rule on computers. And the output is relatively similar, with graphics, icons and some menus as well. The balance between mouse/keyboard and touch screens seems relatively stable for the time being; each interface has enough advantages and drawbacks to appeal to different devices (personal computers on one side, smartphones and tablets on the other).

There would be more to say about the whole history of user interfaces to this date. We have barely mentioned sound, either as an input or an output. We also haven’t mentioned the myriad of devices that can be used to interact with computers: cameras, microphones, graphics tablets, joysticks, trackballs… These are significant but limited additions to the standard GUI, with a smaller scope than mice/keyboards or touch screens, and none of them delivered major changes to user interfaces.

In the next post, we will see how Extended Reality fits into this history – and what might be next.