Be Prepared for the Next Wave of Technology

By Brian Gormanly

The Birth of the Tsunami

The world is ready for another big change in technology. At this moment the metaphorical ocean is pulling back from the shore, exposing an area of the beach that is normally submerged: a warning sign that something is amiss. Many will not heed the warning and will be unprepared when the next wave comes. Unlike a real tsunami, the unprepared are not necessarily at physical risk, but they may find themselves part of an industry left behind in the new world, or simply victims of a missed opportunity. The biggest times of change also offer the greatest possibilities. The individuals who see the future most clearly are the ones who get to build that future.

This article is intended to help you be prepared, to see the possibilities and understand the risks. Our journey is an exercise in understanding how the world may interact with technology in five years' time and why it will likely be a much bigger change than the changes of the previous five years. We will explore why the upcoming change is big, as opposed to most changes in the technological world, which are incremental in nature.

The core of the change is a fundamental shift in how we interact with technology. Changes at this level increase both the quality and quantity of interactions that involve technology. Things we take for granted in the “physical” world today, such as televisions, PCs, and even pictures on the wall, may not exist in the new world. The key to this new world will be an augmented “shared” reality: a new version of the physical world that is not experienced alone but shared with those around you, and with those in distant places as well. The new technology we explore today is Augmented Reality, often abbreviated AR.

We will start our journey by exploring why AR represents a change in the way people interact with computers, and why a change in interaction amplifies the impact by orders of magnitude. Then we will examine the timing of its arrival. This is always the hardest piece to get right; the future is, by its very definition, unknown. We will look at data from previous shifts and compare it to current news and trends, explore how the big (and not so big) tech companies are positioning themselves, and, perhaps most importantly, see how software engineers and designers are preparing and what toolkits have been created for them. We will also examine lessons learned from exploring this new interaction paradigm on existing technology such as smartphones and tablets.

Let's start by examining the relationship we have with our machines.

Interaction

Interaction \ ˌin-tər-ˈak-shən \ Noun : reciprocal action or influence

We live inside a wonderful world full of colors, smells and tastes, sounds and physical sensations. To understand what augmented reality is, we need to look no further than our own minds. The human brain is the ultimate augmented reality engine. It floats in fluid in a dark recess of our skull and is supplied only with a stream of electrical impulses from our nervous system. These impulses are triggered by waves within certain frequencies of the electromagnetic spectrum falling on our retinas, and by the movement of molecules in the air and against our skin. Somehow our consciousness interprets all of these inputs into a rich and complex world of colors, music, shadows, sweet smells, and sour tastes. The world we actually experience is constructed by our mind to help us navigate, categorize, and remember the physical world around us.

Humans are adept at creating tools that augment our experience of this physical world. Computer-based technologies are particularly powerful tools that have transformed the ways we work, play, and generally interact with the world around us. Every time we use technology to help us in our daily lives, we have an interaction with that technology. Pinching to zoom on a smartphone, typing on a keyboard, and talking to Alexa, Google Assistant, or Siri are all examples of individual interactions between ourselves and the technology around us.

There is a field that studies the interactions between humans and our technology called Human-Computer Interaction, often abbreviated HCI. People have an interesting relationship with change, and changes to our interactions with technology, that is, changes within the field of HCI, represent the biggest upheavals in our day-to-day lives. Yet in order to improve our technology and the benefits it can provide, we need to accept change and find the right balance between productivity and upheaval.

For a new paradigm in HCI to take hold, the benefit of using it must outweigh the pain. To accomplish that, the new form of technology interface must be intuitive to use and provide a large advantage that previous interfaces did not. I would argue that there have only been two major changes in the modern era of computers that affected our interaction with them. The first was the shift from the original console, or text-based, input/output systems to a graphical system where input is given on top of the output on the screen.

Wave 1: GUI

Image Credit: https://en.wikipedia.org/wiki/Sketchpad#/media/File:Sketchpad-Apple.jpg

Ivan Sutherland, known as “the father of computer graphics,” developed a system called Sketchpad in 1963 while at MIT for his doctoral thesis. It was the first system to utilize a complete graphical user interface (GUI). It showed that computers could also be used for artistic and graphical purposes and demonstrated new methods for providing input. The key to the system was that the user's input was provided right on top of the system's output. This was a much more intuitive way for individuals to interact with the technology than having to remember large numbers of cryptic commands to issue at a terminal prompt.

Fittingly, Ivan Sutherland also created one of the first virtual reality interfaces, called the Sword of Damocles. He worked on this project with a student, Bob Sproull, while teaching at Harvard University.

Other monumental advancements, such as the mouse created by Doug Engelbart, helped make the transition to GUI-based computing a reality. During the 1970s, researchers at Xerox PARC created the Alto and later the Star computer. These machines laid the foundation for the GUI we still use to this day: file folders, icons, a desktop, printing, cut, copy, and paste, and even networking with other computers and email. During a famous tour of the facility[1], Steve Jobs saw the work being done at Xerox PARC and knew instantly that it represented the future of computing. He worked out a deal with Xerox for access to the research, and GUI-based computing was brought to the world.

Before the GUI, personal computers were the domain of researchers, scientists, and the technologically inclined who sensed that there would one day be a computer on every desk. Most practical computing, however, was done on mainframe-style machines run by specially trained operators, used in batch fashion on difficult computational problems. The advent of the GUI brought the computer to the masses, changed how computers were used, and dramatically broadened the problem domains they were applied to. Within a relatively short period of time, hundreds of millions of personal computers filled home and professional offices, and even started traveling around with us in the form of laptops.

The learning curve for realizing the promise of the computer was dramatically lowered. Suddenly individuals could become adept at using a computer while still specializing in their own area of expertise. Computing technology finally started to live up to its promise as a tool that can help anyone do more than they could without it.

Wave 2: Mobile Computing

The second major change in technology interaction was the shift toward true mobility in computing. We tend to let the smartphone epitomize this idea, but multiple physical devices have promoted this method of computing. Mobile computing is the idea that technology can be part of our daily interactions wherever we are. The computer took on a more general-purpose usefulness in our lives: not just a tool that could solve hard math problems or let us draw and manipulate images, but one that could also connect us to family members and friends all over the world, provide directions when we were traveling, and be the camera that is with us everywhere, allowing us to document and share more of life's events.

The benefits of mobile computing are well known, but it took a few key technology advances and design ideas to make it possible. The very first smartphone[2] was created in 1994 by Frank J. Canova[3] at IBM. The phone was called Simon, and the project that created it spanned two years. A long time passed between the creation of the Simon and the proliferation of iPhones and Android phones. During that time the world had many PDAs (Personal Digital Assistants) and early smartphones running Palm OS, Windows CE/Mobile, BlackBerry OS, Symbian, and many others.

I remember well using Windows CE and Windows Mobile on the Jornada 540 I purchased in 2000 and the HTC TyTN I purchased in 2006. Using a Start menu on a 240x320-pixel display was painful, to say the least. I saw past the pain, however. When I purchased the Jornada in 2000, I did so because I was leaving to live and work on Block Island for the summer. I would be living in a barn converted into a makeshift dorm, and while it had electricity and a basic bathroom, it was no place to use a computer or store a laptop. The Jornada was perfect for my needs. I can still remember my friends' faces as they looked at it. They would quickly point out that it was almost impossible to use effectively. And they were right: much like the current generation of Google Glass and Microsoft HoloLens, the technology enabled those willing to endure the lack of practicality to envision what the future would look like. It is during these early incubation periods that our new interactions with the next generation of technology are created and refined. My experiences with those early Windows CE devices and Palm Pilots fed my mindset and imagination. When the iPhone and the Android developer SDK were announced, I and many others like me were ready.

Jeff Hawkins, the creator of the Palm Pilot, founder of Numenta, and author of the excellent book “On Intelligence”[4], famously walked around with a block of wood in his pocket while developing the Palm Pilot. He would take it out and pretend to interact with it any time he wanted to retrieve or save contact information or add an event to his calendar, and he would even take it out and pretend to sync it with his computer.

"The Xerox PARC Visit." https://web.stanford.edu/dept/SUL/sites/mac/parc.html. Accessed 28 Jun. 2018. "IBM Simon - Wikipedia." https://en.wikipedia.org/wiki/IBM_Simon. Accessed 28 Jun. 2018. "Frank J. Canova - Wikipedia." https://en.wikipedia.org/wiki/Frank_J._Canova. Accessed 28 Jun. 2018. "On Intelligence - Wikipedia." https://en.wikipedia.org/wiki/On_Intelligence. Accessed 28 Jun. 2018.