Technological Innovation - why is it speeding up?
I've been writing on LinkedIn occasionally and thought I'd summarize a few of my most-read posts, which might help explain my motivation for making these mini-videos.
Trending Video #1 - Pace of innovation in tech is driving traditional companies out of business
This example clearly shows how Apple came back from near-bankruptcy to take the crown of the world's most loved brand. The same applies to companies that did not even exist 20 years ago, like Google, Amazon and Facebook.
Trending Video #2 - Teaching genetically programmed cells to hunt and kill cancer cells
Trending Video #3 - Teaching a Car to Drive Using Evolutionary Algorithms
This example comes from the area of Deep Reinforcement Learning: it uses a NeuroEvolution technique to teach a car how to race.
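The video's actual code isn't shown here, but the core evolutionary loop can be sketched in a few lines. This is a minimal, hypothetical example: the "fitness" function is a toy stand-in for lap performance, and the weight vector stands in for a small neural controller's parameters.

```python
import random

def fitness(weights):
    # Toy stand-in for "driving performance": reward weights close to a
    # hypothetical ideal controller. A real setup would score simulated laps.
    target = [0.5, -0.2, 0.8]
    return -sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, sigma=0.1):
    # Add small Gaussian noise to each weight.
    return [w + random.gauss(0, sigma) for w in weights]

def evolve(pop_size=20, generations=100):
    # Start from a random population of weight vectors.
    population = [[random.uniform(-1, 1) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fittest half, refill with mutated copies of survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)

best = evolve()
print(round(fitness(best), 4))  # approaches 0 as the population improves
```

The key design choice is that no gradient is ever computed: the population simply mutates and the environment selects, which is what lets neuroevolution train controllers where backpropagation is awkward.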
Future outlook: in the 60s, driving was fun and a privilege. These days, densely connected cities and heavy traffic have made driving a real burden. Algorithms like these will help us develop self-driving cars that can do the job better than we can, without the stress!
WHY is it speeding up?
Fast broadband internet and inexpensive commodity hardware are already here, and inexpensive HPC hardware for individual researchers and scientists is coming soon. Engineers and managers may soon have fast computers under their desks too!
PART 2: What is HPC and why is it becoming the backbone of AI innovation
High performance computing works behind the scenes, making cities smarter, organisations data-driven, and decision making a streamlined process that can sift through yottabytes of data (that is, truly enormous quantities) with ease.
AI will speed up the pace of innovation: https://www.hpcwire.com/2018/01/18/new-blueprint-converging-hpc-big-data/
Technological innovation is already happening at a very rapid pace, and with Artificial Intelligence and the architectures that come with it, it will go even faster!
PART 3: Artificial Intelligence will further accelerate this pace, but HOW?
This all started when Google open sourced its Machine Learning library TensorFlow and introduced its Tensor Processing Unit (TPU).
Someday we will look back and call this the turning point of the AI Economy - or whatever fancy term will be in fashion by 2030.
Facebook, of course, followed the open source path too, releasing PyTorch. Today Uber, Netflix, Tesla and practically every fast-growing company uses some form of Machine Learning and/or Deep Learning architecture.
Nvidia is obviously running ahead with its GPUs (graphics cards), but two things will define the next wave of this revolution:
- We will have architectures all about Tensors
- We will see the decline and slow death of 32-bit and 64-bit IEEE 754 floating-point architectures
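The trade-off behind that second point can be illustrated in a few lines of NumPy (my example, not from the original post): each step down in IEEE 754 precision halves the storage per number at the cost of larger rounding error, and Deep Learning workloads tolerate that error well.

```python
import numpy as np

# The same value stored at three IEEE 754 precisions.
x64 = np.float64(0.1)
x32 = np.float32(0.1)
x16 = np.float16(0.1)

# Each step down halves the memory per number but grows the rounding error.
for x in (x64, x32, x16):
    print(f"{x.nbytes} bytes  ->  {float(x):.12f}")
```

Lower-precision formats like these (and non-IEEE variants such as bfloat16) are exactly what tensor-oriented hardware is being built around.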
You might wonder: "Wait, what is a tensor?"
[I apologize in advance for making math out of this fun storytelling but it is crucial, stay with me]
It's nothing new - you were using this in the 80s as well: tensor-style array languages have actually been around for years. Programming languages like APL and Fortran supported whole-array operations long ago.
- Numerical computing optimization has been going on for a while, too: already in the 1950s, programmers knew how to make linear algebra go faster by blocking the data to fit the architecture. Matrix-matrix operations, in particular, run best when the matrices are tiled into submatrices, and even sub-submatrices.
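The tiling idea above can be made concrete. This is a deliberately simple sketch, not a real BLAS routine: it multiplies square matrices tile by tile, which is the same layout trick high-performance libraries use so that each tile fits in fast cache memory.

```python
def matmul_blocked(A, B, block=2):
    """Multiply square matrices A and B by walking over block x block
    tiles - the classic cache-friendly blocking from the text above."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, block):          # tile rows of A
        for j0 in range(0, n, block):      # tile columns of B
            for k0 in range(0, n, block):  # tiles along the shared dimension
                for i in range(i0, min(i0 + block, n)):
                    for j in range(j0, min(j0 + block, n)):
                        for k in range(k0, min(k0 + block, n)):
                            C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_blocked(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The arithmetic is identical to the naive triple loop; only the visiting order changes. On real hardware that reordering alone can be worth an order of magnitude in speed, which is why BLAS libraries tile recursively into sub-submatrices.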
Dot Product - Huh?
Perhaps you've heard this term from your engineers or employees. If not, you did this in high-school math some time ago.
All the pictures or text corpora that your Machine Learning or Deep Learning engineers use to do face recognition, secure devices from hacks, or analyse traffic for self-driving cars come down, somewhere, to matrix-matrix operations.
OK, I'll stop now before your head hurts!
AI will dramatically accelerate the obsolescence of many computing paradigms of the 90s and 2000s, as new models, architectures and hardware solutions flood the market in the next 5-7 years.
An important reminder for executives: the Apple in that racy data visualization above had to go through extreme hardship to win consumer confidence - first with beautiful products, and in the post-iPhone era with clever algorithms that help consumers make better decisions (including advising them on the use of their smartphones) and keep them intimate with its AI solutions.
Technologically, we will see more advanced linear algebra embedded directly in hardware, enabling the massively parallel computation that Deep Learning systems do best, with multi-level submatrix BLAS (Basic Linear Algebra Subprograms) routines making matrix multiplications faster still.
In 10-20 years we will look back at today's computing infrastructures, data centers, desktop machines and devices and smile, just as we smile now at the outdated computing methods of the 1950s.
Good luck and #HappyDeepLearning