A Short History of User Interfaces 1/2

From punch cards to Extended Reality – and beyond

Extended Reality (Augmented Reality and Virtual Reality) is a fascinating technology. But if we try to reduce the concept to its core, it is essentially a new kind of user interface. A user interface is “the space where interactions between humans and machines occur”. With AR and VR, we’ll be able to see and hear output from our computers all around us, rather than on a mere flat screen. Instead of using a keyboard and a mouse to give instructions to the computer, we will use hand controllers or, more simply, hand gestures and voice control.

So I thought a little bit of history would help us to put things in perspective. And while writing the first draft of this post about the history of user interfaces, I realised this history was more personal than I thought.

In the beginning, there was the punched card

My parents met while working in the computing department of a French bank, and I remember them telling me with bemused nostalgia about the many inconveniences of punch cards. They were heavy and cumbersome, and whenever a stack was dropped it was a mess to sort the cards back into order.

Punched card

Engineers and programmers of the early days of computing couldn’t afford the luxury of even a basic user interface. The first computers used punched cards like the one in the picture here; each hole (or absence of a hole) was the equivalent of a bit. Punched cards were invented in the 18th century, long before computers, by Jacques de Vaucanson, and were used to control automatic looms such as Jacquard’s machine. They were adapted in 1890 by Herman Hollerith, one of the founders of IBM, to feed his tabulating machines. Both programs and data were punched onto cards on dedicated external devices called “keypunches” before being fed into the computer through a card reader. At some point, printers were added to computers to output data in a readable form, but this didn’t fundamentally change the nature (or the drawbacks) of the interface.

The batch interface, as it was also called, became a bottleneck in the use of computers. At best, a standard punched card with 80 columns and 12 rows had a storage capacity of 960 bits, or 120 bytes.

To give an idea of the limitations of punched cards, storing the contents of the latest iPhone X’s memory would require some 572 million punch cards. One would need a fairly large pocket to transport that. In practice, handling punch cards was a full-time job for technicians, and programmers had to wait for those technicians to feed their cards into the system. The human processes began to take more time than the computing itself.
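That figure is easy to sanity-check. The sketch below is my own back-of-the-envelope calculation, assuming a 64 GB (strictly, 64 GiB) iPhone X and the 120 usable bytes per card computed above.

    # Back-of-the-envelope check of the "572 million cards" figure.
    # Assumptions (mine, not from the post): a 64 GiB iPhone X and
    # 120 usable bytes per 80-column punched card.
    CARD_BYTES = 80 * 12 // 8          # 80 columns x 12 rows = 960 bits = 120 bytes
    IPHONE_BYTES = 64 * 1024 ** 3      # 64 GiB of storage

    cards_needed = IPHONE_BYTES / CARD_BYTES
    print(f"{cards_needed / 1e6:.1f} million cards")   # -> 572.7 million cards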

Punched cards were an acceptable interface for the earliest electronic computers, such as the Colossus or the ENIAC, during the war, when the Allies could assign dozens of operators to produce and process them. But with the development of smaller, cheaper and faster computers, computing reached a point where the main limitation was no longer necessarily the CPU (Central Processing Unit) but, increasingly, the input/output capacity. Something better was needed.

Keyboards and screens

In the 1960s, Command-Line Interfaces were developed and became the new standard for user interfaces until the 1980s, including on personal computers. Readers of my age, or specialists in some areas, probably already know what these look like, but for others a quick reminder is necessary. The command-line interface consisted of a keyboard to input data (no mouse) and a screen to display text-only information. On Windows, the Command Prompt is a remnant of this era. While most users will never get their hands dirty with command-line interfaces, their use is still relatively common.

DEC VT100 terminal (Jason Scott – Wikimedia)

As a child, I was lucky enough to have some of the first personal computers at home. Their level of performance was abysmal compared to what we have today. The Sinclair ZX81, for instance, had 1 KB of RAM (1,024 bytes, roughly 1,024 characters), though you could upgrade it to 16 KB. You could barely store the equivalent of this post’s text in memory. Thus I came to know the Command-Line Interface without knowing its name; at the time it seemed to be the natural way to use a computer. For many years, Microsoft’s MS-DOS ruled over PCs. And yet, a new revolution was already on the march.

Graphical User Interfaces

The basic concepts of the Graphical User Interface (GUI) had been developed at the end of the 1960s, just a few years before the appearance of the first personal computers. Douglas Engelbart demonstrated the first windows-based interface and the first mouse in 1968, and Xerox PARC (Palo Alto Research Center) added icons and menus during the 1970s. The first commercial attempts were a failure and Xerox gave up, but the idea was not lost on a young Silicon Valley company: Apple. In 1984 the Macintosh was launched, putting on the market the first affordable personal computer with a Graphical User Interface.

Xerox’s Alto GUI. Source: Wikimedia Commons

You are almost certainly using a graphical user interface to access this blog, whether on a PC, a Mac or a smartphone, so there’s no need to present it in detail. The GUI, apart from its ability to present graphics and sound, rests on the WIMP concept: Windows, Icons, Menus, Pointer. It has remained essentially unchanged since its invention.

But we shouldn’t forget that at first there was a lot of skepticism; the new Graphical User Interface consumed more processing power than command-line interfaces, and to many people the benefits could seem limited. Xerox’s attempts were a failure, and even the Macintosh failed to sell as well as the IBM PC, remaining for a long time a marginal player in the market. And yet, the Graphical User Interface was about to supplant the command line as the interface of choice for most users. It is more intuitive, more visual, and requires less initial training. You don’t need to memorize hundreds of obscure command names with even more obscure options (what does prompt $p$g do in MS-DOS? Or dir file.ext /s ?). It’s hard to imagine the explosion of personal computing happening as it did without Graphical User Interfaces.

My first forays into GUIs were a bit off the beaten track; my father had installed a version of GEM, a product of Microsoft’s long-forgotten rival of the early 80s, Digital Research. I must have wasted hundreds of hours playing mindlessly with GEM Paint, in black and white, at a pitiful resolution of 640×400. It felt like a game compared to the command line, even though the command line can be fun to play with as well. Still, it was just a curiosity at the time; it took several more years before MS-DOS was definitively replaced by Windows 95.

Windows 10

Touch screens

Once again, Apple comes to the front of this story. Touch screens already had a long history: the original concept was developed, once more, at the end of the 1960s. Apple launched several products based on touch screens, among them the iPhone series, creating almost single-handedly, in just a few years, the new iconic consumer product. The smartphone is today’s symbol of consumerism as much as the car used to be. Touch screens make data entry even more intuitive: touching, resizing and rotating can all be done with a finger.

Photo by Matam Jaswanth on Unsplash

This time, though, touch screens haven’t replaced the previous generation. Touch screens rule on smartphones and tablets; keyboards and mice rule on computers. And the output side is relatively similar, with graphics, icons and menus. The balance between mouse/keyboard and touch screens seems relatively stable for the time being; each interface has enough advantages and drawbacks to appeal to different devices (personal computers on one side, smartphones and tablets on the other).

There would be more to say about the history of user interfaces up to this date. We have barely mentioned sound, either as an input or an output. We also haven’t mentioned the myriad of devices that can be used to interact with computers: cameras, microphones, graphics tablets, joysticks, trackballs… These are significant but limited additions to the standard GUI, with a smaller scope than mice/keyboards or touch screens; none of them fundamentally changed user interfaces.

In the next post, we will see how Extended Reality fits into this history, and what might come next.

Five Application Fields of Virtual Reality

This means business

In my previous post I tried to give a general overview of the future and promises of Augmented and Virtual Reality for business and non-leisure applications. In this post, I want to go into more detail about these applications and what they consist of. We’ll focus on Virtual Reality and come back to Augmented Reality in a later post; while there are many similarities between the two technologies, their use cases can sometimes be quite different.

Application Fields

While putting together a first list of applications for VR, what looked at first like a long list of unrelated niches began to fall into some sort of order in my mind. So I decided to regroup them broadly into five categories:

  • Education
  • Communication
  • Analysis
  • Commerce
  • Leisure

Any classification of this sort will inevitably be challenged in the future, especially at such an early stage. The point is not to create a rigid taxonomy, but to give ourselves an initial structure with which to explore and quickly memorize the uses of VR. Hopefully, new areas will emerge over time that will challenge this order.

Education and Learning

Simulation, with flight and military simulators, is probably the oldest known application of Virtual Reality. The first simulators appeared in the 1970s.

VR on springs

There might not always be full stereoscopic vision, but the principle of virtual reality is already there: immerse the user in an environment that is fully artificial and simulated by computer. Additionally, the equipment shown here allows a limited simulation of gravity thanks to a system of springs. The high cost of training on real equipment (planes or other military hardware) has made simulation economically valuable from the earliest days of VR.

The high cost of the equipment limited the scope of VR simulation, but the recent spread of popular and much cheaper devices is extending that scope. New types of training are being tested and deployed in VR: police work, customer contact… It’s too early to assess the full impact and usefulness of these applications, but the principle is the same, and the long history of VR simulation at least proves the value of training through immersion.

Education

The application of VR doesn’t stop at training but is being extended to other educational areas.

Several museums have started including VR experiences as part of major exhibitions. For instance, the Tate Modern is organising an exhibition on Modigliani with a VR visit to the artist’s studio in early-20th-century Paris. I had a chance to enjoy it, and it does help to plunge into the atmosphere of the period and to illustrate some aspects of Modigliani’s life through immersion in his studio. It is not that spectacular, but it is instructive.

Other fields include chemistry, astronomy… potentially most sciences. The idea is to make an impression on users. Immersion helps to attract visitors, tourists and children, but it also helps them to remember; immersion is key.

Mental health

I imagine it will be controversial to include mental health alongside education, but I believe mental health and education should form a continuum, just as hygiene and physical health do.

In any case, treatments are being developed, in particular for Post-Traumatic Stress Disorder (PTSD). PTSD is one of the most obvious candidates for VR-based therapy because one of the most common treatments relies on reliving the events in a controlled environment, with a therapist’s help, in order to start and accelerate the healing process. Early experiments were conducted from 1998 onwards with Vietnam veterans, and more have followed with Virtual Iraq.

Communication and collaboration

One of the most promising aspects of immersion is collaborative work, where the ability to engage fully in a discussion around 3D models or representations of data, combined with the feeling of the physical presence of one’s interlocutors, will be helpful to business users.

Social networking

This might sound paradoxical, since VR obliges users to isolate themselves from their immediate environment, but at the same time AR and even VR offer an intriguing new possibility: deeper connection over long distances. This would be possible through the use of avatars. The lower quality of images (at least at the current stage) would be more than compensated for by the feeling of immersion, which can bring us more fully into the conversation. What should make a big difference is whether we can see the other parties’ facial expressions; without them, communication will be severely hampered. Fortunately, work is already ongoing on this, and the technology required is well known: a camera and facial recognition software.

Social networking is the very reason why Facebook bought Oculus (and later launched Facebook Spaces). More recently, Microsoft bought AltspaceVR, a social network in VR, while Linden Lab (the creators of Second Life) started Sansar and the founder of Second Life started another one, High Fidelity. While most of these are focused on the consumer market, business meetings are also targeted, for instance by Manzalab’s application Teemew.

By itself, the virtual meeting is already a powerful application, but what is also important is how it could be embedded into other applications, such as architecture.

Architecture

The case for VR in architecture belongs both to the field of commerce (presenting projects to clients) and to communication (collaboration between designers), but for the time being the collaborative aspect is the most emphasised. This is where architects will spend most of their time.

Until now, architects would usually rely on 2D views of their models, or on expensive mock-ups with long building times. VR allows this cycle to be cut down to a matter of minutes.

Analysis

Data visualisation

The idea of data visualisation in 3D has been discredited by Excel and its infamous 3D charts, most notably the 3D Pie Chart, one of the least informative charts ever designed.

We shouldn’t stop there, however. New tools are being developed for VR: Virtualitics, Dali, and a few others. These tools are not aimed at widespread use, since the price of the equipment required would make that impractical, but at niche users such as data scientists who need to dive through massive amounts of data. For these advanced users, data visualisation in VR combined with other applications (virtual meetings) and technologies (Machine Learning) can accelerate their work.

Commerce

VR might help to further realise the early promises of e-commerce. While the success of Amazon has more than proven the point, there is still a lot of room for improvement: again, through better immersion and better-informed clients.

The earliest use cases, we can assume, will be those where:

  • The third dimension is important (space, volume, depth…)
  • Shipping and delivery costs are high, and return costs prohibitive

This points to furniture retailers or estate agents as the most immediate possibilities. Ikea has already developed applications for both VR and AR, which allow users to try out and place furniture either in a virtual flat or, with an iPhone, in their real flat. The ratings for the apps are still pretty low, but there will certainly be progress here.

The Value of VR: Immersion and Information

As we have seen, there is a wide range of potential applications for VR in business and non-leisure uses. These applications, however, generally rest upon two basic principles: immersion and information.

Immersion is more complete with VR, but even AR gives the feeling of something real, and that brings our full attention to the matter at hand. This explains in particular why training and education are among the first fields of application for VR.

Information is perhaps a bit more controversial. After all, each of our eyes sees in only two dimensions, and some users, such as people blind in one eye or people with stereoblindness, might not be able to benefit fully from the third dimension. Still, the third dimension offers an opportunity to display and take in more information. This will matter in domains where the understanding of depth is critical (such as architecture), or where the volume of data is so overwhelming that the third dimension makes a real difference compared with traditional 2D data visualisation tools.

Understanding Virtual Reality

This canvas, with its five application fields (Education, Communication, Analysis, Commerce and Leisure) and its two value axes (Immersion and Information), will help us to analyse and understand the future evolution of Virtual Reality. We will try to draw a more complete picture of what the future might look like for AR and VR.

In the next post, I plan to spend more time on Augmented Reality itself and its different application fields. If there’s any topic you would like me to discuss in one of my future posts, please ask.

Virtual Reality, Serious Reality?

The first time I heard about virtual and augmented reality at work, I had to try hard not to laugh. But here I am, dreaming of it, working on it and writing about it. I now believe this is the future. Sure, there’s a lot of hype around AR and VR today, but that was also the case with the Internet in 2000, right before the dotcom bubble burst; and today Google and Amazon are among the world’s biggest corporations. We shouldn’t let the hype blind us to the potential of the technology. But how can we tell the real potential of AR/VR from the hype?

A good approach to understanding what really lies beneath the hype is to compare the current state of the technology, admittedly still clunky but already spectacular, with the vision the main players are pursuing. Among these players are most of the so-called GAFAM. Mark Zuckerberg, Tim Cook and Satya Nadella, among others, are burning billions of dollars, and they will burn even more before they get the technology right. But given where the technology already is, there’s a reasonable probability that they can make it. We shouldn’t bet against them too quickly.

The State of the Art

Augmented and virtual reality are usually described as different, even antagonistic, technologies, but it is more relevant to consider them together. I thought I was very smart when I started thinking about a concept I would call the virtuality continuum, until I discovered it had already been invented in 1994. Well, at least I’m not crazy.

Virtuality Continuum

In practice, the concept of the continuum encompasses several categories of devices.

The first is simple Augmented Reality as seen, for instance, on an iPhone or Android phone, where images are superimposed on the camera view of your smartphone. The advantage is that this technology is already available to millions of users on their smartphones, with no additional equipment required. You can move around the represented object in 3D, but it does not offer true stereoscopic 3D since everything is displayed on a single flat screen.
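To make the idea of “superimposing images on the camera view” concrete, here is a deliberately simplified sketch of my own (not ARKit or ARCore code): a virtual 3D point is projected into the camera image with a pinhole model and could then be drawn over the live frame.

    import numpy as np

    # Toy illustration of smartphone AR compositing (illustrative sketch only).
    # A virtual 3D anchor, expressed in camera coordinates (metres), is projected
    # into pixel coordinates and could then be drawn on top of the camera frame.
    def project(point_3d, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0):
        """Pinhole projection of a 3D point to pixel coordinates (assumed intrinsics)."""
        x, y, z = point_3d
        return fx * x / z + cx, fy * y / z + cy

    virtual_anchor = np.array([0.1, 0.0, 0.5])   # 10 cm to the right, 50 cm in front of the camera
    u, v = project(virtual_anchor)
    # frame[int(v), int(u)] = overlay_colour     # composite the virtual content onto the frame

Real AR frameworks do the hard part on top of this, continuously estimating the phone’s position and orientation in the world, but the final compositing step is essentially this projection.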


Microsoft HoloLens

The second category is AR on Augmented Reality headsets. One of the most advanced devices available here is the Microsoft HoloLens, but many companies, such as Finland’s Varjo, are working on this technology. It offers full stereoscopic 3D, presenting a slightly different image to each eye, superimposed on reality. These headsets are also usually autonomous, requiring no connection to a PC. They are among the most expensive devices (the development edition of the HoloLens sells for $3,000) and can still be a bit heavy to wear.
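As a minimal illustration of what “a slightly different image to each eye” means in practice, here is a sketch of my own (not HoloLens code): a stereo renderer simply places two virtual cameras, offset horizontally by the interpupillary distance, and renders the scene once per eye.

    import numpy as np

    # Minimal sketch of stereoscopic rendering (illustrative only).
    # Each eye gets its own virtual camera, offset by half the interpupillary
    # distance (IPD); the brain fuses the two slightly different images into depth.
    IPD = 0.063  # average interpupillary distance in metres (assumed value)

    def eye_positions(head_position, ipd=IPD):
        """Return the left and right eye camera positions from the head position."""
        head = np.asarray(head_position, dtype=float)
        offset = np.array([ipd / 2.0, 0.0, 0.0])   # shift along the head's x-axis
        return head - offset, head + offset        # (left eye, right eye)

    left_eye, right_eye = eye_positions([0.0, 1.7, 0.0])   # head roughly 1.7 m above the ground
    # render(scene, camera_at=left_eye)   -> image for the left display
    # render(scene, camera_at=right_eye)  -> image for the right display

Rendering every frame twice, once per eye, is one reason these devices need so much processing power.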

HTC Vive

The third and fourth categories are Virtual Reality devices, either standalone (Google Cardboard, Google Daydream, Samsung Gear VR) or connected to a PC (Oculus and Vive). The standalone ones are cheaper and usually based on smartphones, but offer relatively low resolution and limited controls. The PC-connected ones are more powerful, with higher resolution and speed, but the cable makes these devices a bit more awkward to use and they require an expensive PC.

This is just a quick, indicative list. We can expect more devices to appear, and the borders will keep moving. Some devices might project images directly onto the eyes, and at some point we might even have to include holograms. Given all the uncertainty, the big question now is where this could lead.

The Vision

We could quote Mark Zuckerberg or Tim Cook, but instead let’s listen to the CEO of a company with one of the most ambitious visions, the Tesla of mixed reality, backed by Google and Alibaba: Magic Leap’s Rony Abovitz.

The Untold Story of Magic Leap

3:24 “But Rony is already thinking about how it might replace all of our other screens too.

  • I can conjure a tableau. I can conjure many televisions. I can conjure home theater…”

If they can realise this vision, if they can go that far, then it will reshape everything. Televisions, desktops and laptops, screens, keyboards and mice, joysticks and consoles: anything currently used as an interface between us and the digital world will be replaced by mixed reality devices. We will need only a single interface, for both input and output. It will probably consist of a pair of Augmented Reality glasses, with our hands and fingers as controllers. An embedded camera will capture and transmit our facial expressions for better communication.

Magic Leap - the whale
Fantastic… but can they do it?


But we’re far from there yet. Magic Leap’s technology is not ready, and the company is so secretive that we must remain cautious about its promises. The field is split between Augmented and Virtual Reality, and between several incompatible devices and platforms: Oculus, HTC Vive, Microsoft HoloLens, Google Daydream…

Right now, a standard setup for a satisfying VR experience with either the Vive or the Oculus comes to around 2,000 USD or EUR, not counting the cost of software. The user must have some space available, either at home or at work, to use it. In spite of their great design, these headsets are a bit heavy and stifling to wear, so after an hour of use even a compulsive gamer needs a break. The HoloLens is even more expensive and offers a narrow field of view. At best, this is going to be a long road.

A Long Road

“Once you create and dominate a small market, then you should gradually expand into related and slightly broader markets.” Peter Thiel — Zero To One

The most likely scenario, in my opinion, is one where the world of Mixed Reality is built progressively, step by step. The initial applications of Virtual and Augmented Reality have existed for a while and consist of niches: simulation and training, gaming, architecture, complex data visualisation… We will look at these applications shortly.

Each of these applications should present a case compelling enough that users are willing to spend the money, the time and the effort required to acquire and learn how to use it.

Each new niche, each new application, will enlarge the market and stimulate the emergence of new ideas that will in turn feed the development of more applications. “Killer apps” might appear, but we shouldn’t hold our breath for one to appear suddenly and set off an explosion of AR/VR. It probably won’t happen like that. Computing didn’t emerge suddenly out of one killer app; it evolved over decades out of numerous “killer apps”, from Enigma code-breaking to Big Data and AI.

Follow the White Rabbit

I’m not really a white rabbit but you get the idea. A whole new world of ‘Mixed Reality’ is awaiting us and we’re going to follow the trail and see where it goes.

Beyond video gaming, a topic I certainly enjoy but which is already well covered, we’ll focus more specifically on the serious side of AR/VR: business applications and non-gaming uses for individuals. While ideas are exploding all around us, I felt the need for a global vision of AR and VR. I hope to start building that vision here and to share it.

We will follow what happens in the vast, dynamic, fast-moving fields of augmented and virtual reality, and we’ll look at them from different perspectives. Here are some of the questions we are going to address over the coming weeks:

What are, what will be, and what could be the areas of application for serious AR/VR?

Who are the main players in AR/VR?

How should we choose between all the different platforms?

What kind of interfaces will we see?

How can we build prototypes?

We’ll face more questions than answers. The future is yet to be written — and it might be with a virtual pen.