Vint Cerf, Ph.D.

Vint Cerf, Ph.D., contributor, is the co-inventor of the Internet and VP and Chief Internet Evangelist for Google. Dr. Cerf's awards include the National Medal of Technology, the Turing Award, and the Presidential Medal of Freedom.

Articles


What's a Robot?


By Vint Cerf, Ph.D.  |  February 26, 2018


I'm a big science fiction fan, and robots have played a major role in some of my favorite speculative universes. The prototypical robot story came in the form of a play by Karel Čapek called "R.U.R.," which stood for "Rossum's Universal Robots." Written in the 1920s, it envisaged android-like robots that were sentient and had been created to serve humans. "Robot" came from the Czech word “robota” (which means “forced labor”). Needless to say, the story does not end well for the humans. In a more benign and very complex scenario, Isaac Asimov created a universe in which robots with "positronic" brains serve humans and are barred by the Three Laws of Robotics from harming them:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  • A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A "zeroth" law emerges later:

  • A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

In most formulations, robots have the ability to manipulate and affect the real world. Examples include robots that assemble cars (or at least parts of them). Less sophisticated robots might be devices that fill cans with food or bottles with liquid and then seal them. The most primitive robots might not even be considered robots in ordinary parlance. One example is the temperature control for a home heating system that relies on a bimetallic strip: the two metals expand differentially, closing or opening a circuit depending on the ambient temperature.
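To make the sense-decide-act pattern of even this most primitive "robot" concrete, here is a minimal sketch in Python (my own illustration, with a simulated sensor and made-up setpoint values) of the control loop that the bimetallic strip implements purely mechanically:

```python
# A minimal sketch of the sense-decide-act loop a bimetal thermostat performs
# mechanically. The sensor is simulated; in the real device the "decision" is
# made by the differential expansion of the metal strip itself.
import random

SETPOINT_C = 20.0   # desired room temperature (illustrative value)
HYSTERESIS_C = 1.0  # dead band to avoid rapid on/off cycling

def read_temperature():
    """Hypothetical sensor stub: returns an ambient temperature in Celsius."""
    return random.uniform(15.0, 25.0)

def thermostat_step(heater_on):
    """One pass of the loop: sense, decide, and 'actuate' the heater circuit."""
    temp = read_temperature()                     # sense
    if temp < SETPOINT_C - HYSTERESIS_C:          # decide: too cold
        heater_on = True                          # close the circuit
    elif temp > SETPOINT_C + HYSTERESIS_C:        # decide: too warm
        heater_on = False                         # open the circuit
    print(f"{temp:5.1f} C -> heater {'ON' if heater_on else 'OFF'}")
    return heater_on

if __name__ == "__main__":
    state = False
    for _ in range(5):
        state = thermostat_step(state)
```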
 

I would like to posit, however, that the notion of a robot could usefully be expanded to include programs that perform functions, ingest input, and produce output with perceptible effects. A weak example along these lines might be simulations, in which the real world remains unaffected. A more compelling example is high-frequency stock trading systems, whose actions have very real consequences in the financial sector. While nothing physical happens, real-world accounts are affected and, in some cases, serious consequences emerge if the programs go out of control, leading to rapid market excursions. Some market meltdowns have been attributed to large numbers of high-frequency trading programs all reacting in similar ways to the same inputs, driving the stock market rapidly up or down.
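As a toy illustration of how purely informational actions can have runaway effects, the sketch below (my own simplified model, not any real trading system; all parameters are invented) shows many momentum-following programs reacting to the same price move until an exchange-style circuit breaker halts trading:

```python
# Toy model, not a real trading system: many momentum-following programs react
# to the same price move, amplifying it until a circuit breaker halts trading.
# All parameters are invented for illustration.
N_AGENTS = 1000
SENSITIVITY = 0.002      # how strongly each program chases the last price move
CIRCUIT_BREAKER = 0.10   # halt trading on a 10% drop from the starting price

def simulate(start_price=100.0, shock=-0.5, steps=50):
    price, last_change = start_price, shock
    for step in range(steps):
        # Every program sells into a falling market (or buys into a rising one),
        # so the aggregate order flow is proportional to the previous move.
        move = N_AGENTS * SENSITIVITY * last_change
        price += move
        last_change = move
        print(f"step {step}: price {price:7.2f}")
        if price <= start_price * (1 - CIRCUIT_BREAKER):
            print("circuit breaker trips -> trading halted")
            break
    return price

if __name__ == "__main__":
    simulate()
```

With these made-up numbers the initial shock doubles on every step, so the halt triggers within a handful of iterations, which is the amplification dynamic the paragraph describes.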
 

Following this line of reasoning, one might conclude that we should treat as robots any programs that can have real-world, if not physical, effects. I am not quite sure where I am heading with this except to suggest that those of us who live in and participate in the creation of software-based "universes" might wisely give thought to the potential impact our software can have on the real world. Establishing a sense of professional responsibility in the computing community might lead to increased safety and reliability of software products and services. This is not to suggest that today's programmers are somehow irresponsible, but I suspect we are not uniformly cognizant of the side effects of a dependence on software products and services that seems to increase daily.
 

A common theme I hear in many conversations is concern for the fragility or brittleness of our networked- and software-driven world. We rely deeply on software-based infrastructure, and when it fails to function, there can be serious side effects. Like most infrastructure, it tends to go unnoticed until it stops working or becomes unavailable. Most of us do not lie awake worrying that the power will go out (but we do rely on some people who do worry about these things). When the power does go out, we suddenly become aware of the finiteness of battery power or the huge role that electricity plays in our daily lives. Mobile phones went out during Hurricane Sandy because the cell towers and base stations ran out of power, either because of battery failure or because the back-up generators could not be supplied with fuel or could not run because they were underwater. The situation in Puerto Rico and the Virgin Islands after Hurricane Maria proved even worse: the physical infrastructure was so damaged that for many months there was little power available and towers had to be rebuilt.
 

I believe it would be a contribution to our society to encourage deeper thinking about what we in the computing world produce, the tools we use to produce it, the resilience and reliability these products exhibit, and the risks they may introduce. For decades now, Peter Neumann has labored in this space, documenting and researching the nature of risk and how it manifests in the software world. We would all do well to follow his lead and to consider whether the three or four laws of robotics might motivate our own aspirations as creators in the endless universe of software and communications.

 


A version of this article appeared in Communications of the ACM (Vol. 56, No. 1).


From Silicon Valley to the Cognitive Pangea


By Vint Cerf, Ph.D.  |  April 28, 2017


What is it about the residents of Silicon Valley that encourages risk taking? I have often wondered about that and have reached an interesting, if possibly controversial, conclusion. Thinking more generally about immigration, I considered my own family history. In the mid-to-late 1800s, my father's family emigrated from the Alsace-Lorraine region (variously French and German) to Kentucky. A great many families came to the U.S. during that period. It is a family belief that my great-grandmother, born Caroline Reinbrecht, brought the idea of "kindergarten" from Germany to her new home. My grandfather, Maximilian Cerf, was an engineer and inventor.

Many Silicon Valley residents are also immigrants, and their innovative talent and willingness to take risks have been abundantly demonstrated over the past several decades. On the other hand, we hear that risk taking and tolerance for (business) failure are less common in Europe, despite the fact that many successful Silicon Valley entrepreneurs (and many elsewhere in the U.S.) come from that region. This leads me to think that emigrants are quintessential risk takers. Moving to a new country and, potentially, a new language and culture surely involves risk. To be sure, some emigrants, especially those coming to America in the 1600s, were fleeing persecution, and that has continued to be the case to this day for a portion of those arriving here. Their emigration was, and is, driven as much by necessity as by a willingness to take risk, especially the risk of failure. And they had no problem with the dirty, bad “F” word, for to them failure was just another learning step toward their goals, much as a child learning to ride a bike tumbles (fails) and tumbles again until biking is mastered.

Think about the westward movement of the 19th century. The families that moved to the American Midwest and the West were taking enormous risks, risks of catastrophic failure. The journey was arduous, long, and made all the more hazardous by potential encounters with Native American tribes that were understandably resistant to what they saw as invaders of their land. And yet they came, settled, raised their families, farmed, ranched, started new businesses, and contributed to the expansion of the U.S. across North America.

So we come to a possible explanation for this phenomenon: the emigrants brought with them a gene pool that predisposed them to take risks and embrace the “F” word. That this is not entirely preposterous is underscored by a 2009 article by C.M. Kuhnen and J.Y. Chiao, "Genetic Determinants of Financial Risk Taking." I don't pretend to grasp all the implications of that article other than to conclude there is evidence for a genetic component to risk-seeking (or at least risk-tolerant) behavior. We hear the term "Yankee ingenuity," which was originally associated with emigrants and settlers in the American Northeast but has come to refer more generally to a common stereotype of American inventiveness. Someone making the trek to the West, arriving where enterprises were scarce to nonexistent, had to make do with whatever was at hand or could be invented on the spot.

The 19th century was also the period of the Industrial Revolution in Europe, America, and elsewhere. The term "revolution" is appropriate given the extraordinary creativity of the period. The steam engine, railroads, the telegraph, the telephone, electrical power generation, distribution, and use, and electrical appliances, including the famous light bulb, were among the many, many inventions of that era. As we approach the end of the second decade of the 21st century, we can look back at the 20th and recognize a century of truly amazing developments, especially the transistor and the programmable computers derived from it. While it would be a vast overstatement to ascribe all this innovation to genetic disposition, it seems to me inarguable that much of our profession was born in the fecund minds of emigrants coming to America and to the West over the past century.

But wait. That genetic inheritance idea is not limited to Silicon Valley. To wit, the British godfather of the new generation of A.I., Geoffrey Hinton, is the great-great-grandson of George Boole! Boolean algebra and Boolean logic are credited with laying the foundations for the information age. Hinton is recognized as the godfather of deep learning, which is built on the backpropagation of errors (failures) through neural nets: that “F” word at work in software. Deep learning is now a mainstream pursuit, attracting billions of dollars of investment around the globe. Hinton phrases it humorously: “We ceased to be the lunatic fringe. We’re now the lunatic core.”
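For readers curious what "learning from failure" looks like in code, here is a minimal backpropagation sketch (a tiny hand-rolled network learning XOR, not Hinton's actual systems; the learning rate, epoch count, and initialization are arbitrary): each example's error is propagated backward through the net to nudge the weights.

```python
# Minimal backpropagation sketch: a 2-2-1 sigmoid network learning XOR.
# Pure Python and illustrative only; depending on the random initialization
# it may need more epochs (or another seed) to converge.
import math, random

random.seed(0)
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # input -> hidden
b_h = [0.0, 0.0]
w_ho = [random.uniform(-1, 1) for _ in range(2)]                      # hidden -> output
b_o = 0.0
LR = 0.5
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(w_ih[j][0] * x[0] + w_ih[j][1] * x[1] + b_h[j]) for j in range(2)]
    y = sigmoid(w_ho[0] * h[0] + w_ho[1] * h[1] + b_o)
    return h, y

for _ in range(20000):
    for x, target in data:
        h, y = forward(x)
        # Backward pass: propagate the error (the "failure") toward the inputs.
        d_y = (y - target) * y * (1 - y)
        d_h = [d_y * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent weight updates.
        for j in range(2):
            w_ho[j] -= LR * d_y * h[j]
            w_ih[j][0] -= LR * d_h[j] * x[0]
            w_ih[j][1] -= LR * d_h[j] * x[1]
            b_h[j] -= LR * d_h[j]
        b_o -= LR * d_y

for x, target in data:
    print(x, target, round(forward(x)[1], 2))
```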
 

[Image: Geoffrey Hinton]

But wait. Today we are moving from Silicon Valley to the Cognitive Pangea. Remember Pangea? Of course not; that was 250 million years ago, back when there was but one continent on the Earth. Today, thanks to the Internet, we have one continent again: a Digital Pangea. We also have mass persecutions and refugee emigrations worldwide, including emigrations from America to other countries, sometimes virtual ones, such as Andrew Ng’s virtual emigration via the Internet as VP and Chief Scientist of Baidu AI Research in Beijing.

[Image: Andrew Ng]

The Digital Pangea means that we have one connected world, but that’s just the connection. The truly amazing breakthrough is that we now have one Brain, with zillions of neural nets interconnected and available to the entire world: the Cognitive Pangea!

[Image: Pangea]

I hope we can keep alive the daring of entrepreneurs, teaching our children to embrace risk, to tolerate failure and to learn from it, regardless of their genetic heritage, regardless of where they live in the Cognitive Pangea.




 



Cognitive Implants


By Vint Cerf  |  August 22, 2017


We're already past the middle of the second decade of the 21st century. Over one hundred years ago, World War I was about to start. Einstein's "annus mirabilis" papers were just nine years in the past. The first computers were about 25 years ahead, counting Konrad Zuse's 1938–1939 et seq. work on the Z1 and Z2, especially, as seminal. Fifty-three years ago, in 1964, the IBM 360 computer was introduced. Roughly 40+ years ago, the first paper on the Internet's core Transmission Control Protocol was published, the first hand-held mobile phone was being prototyped, and the Ethernet was invented. About 30+ years ago, the Internet was formally launched into operation and Apple announced the Macintosh. Way back in 1989, the World Wide Web was invented; the Mosaic browser appeared a few years later, and the so-called dot-com boom was poised to take off.

Every time I see calendar dates like 2017, I feel as if I have been transported by time machine into the future. It could not possibly be 2017 already! Isaac Asimov made some remarkably astute projections about 2014 in 1964,[a] so what might he say today?

What we can reasonably see today is the emergence of a crude form of cognitive accessory that augments our remarkable, but in some ways limited, ability to think, analyze, evaluate, and remember. Just as readily available calculators seem to have eroded our ability to perform manual calculations, search engines have tended to become substitutes for basic human memory. The search engines of the Internet have become the moral equivalent of cognitive implants. When I cannot think of someone's name or a fact (an increasingly common phenomenon), I find myself searching my email or just looking things up on the World Wide Web.

In effect, the Web is behaving like a big accessory that I use as if it were a brain implant. Maybe by 2064 I will be able to access information just by thinking about it. Current mobiles, laptops, tablets, and Google Glass have audio interfaces that allow a user to voice requests for information and to cause transactions to take place. Whether or not we ever gain the ability to connect our brains in some direct way to the Internet, it is clear we are fast approaching the ability to outfit computers (think "robots") with the ability to know about, perceive, and interact with the physical world.

It has been speculated that machine intelligence and adaptive programming will be the avenue through which computers become increasingly cognizant of the world around them, increasingly behaving like self-aware systems. In addition to so-called "cyber-physical systems" that provide sensory input to computers and are expected to interact with the real world, an increasing degree of augmentation of our human sensory and cognitive capacity seems predictable. While we joke about memory upgrades or implants, search engines and the content of the Internet and World Wide Web act like exabyte memories that are reached through direct interaction with the computers that house them. Ray Kurzweil's predictions of virtuous, exponential growth in computing functionality and capacity, even if overly bold in the short term, strike me as potential underestimates of what may be possible in 50 to 100 years.
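To put "exponential" in perspective, here is a back-of-the-envelope calculation (assuming, purely for illustration, a doubling of capacity every two years, which is an idealization of historical trends rather than a prediction):

```python
# Back-of-the-envelope only: assumes capacity doubles every two years.
for years in (50, 100):
    growth = 2 ** (years / 2)
    print(f"{years} years of doubling every 2 years -> roughly {growth:.1e}x")
# 50 years  -> roughly 3.4e+07x
# 100 years -> roughly 1.1e+15x
```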

When we are on the cusp of generating an Internet of Things, humanoid and functional robots, smart cities, smart dwellings, and smart vehicles, to say nothing of instrumented and augmented bodies, it does not seem excessive to suggest the world of 2064 will be as far beyond imagining as 2014 was in 1964, except that Asimov had a remarkably clairvoyant view of what 50 years of engineering and discovery could achieve. A huge challenge will be to understand and characterize the level of complexity of such a world in which many billions of devices are interacting with one another often in unplanned ways.

For those of us who were around in 1964, we may recall our naïve aspirations for the decades ahead and realize how ambitious our expectations were. On the other hand, what is commonplace in 2017 would have been economically unthinkable 50 years ago. So perhaps exabyte cognitive implants are a trifle ambitious in the short term, but a lot can happen in 50 years' time. Just as we have adapted to the past 50 years, I expect we will rapidly embrace some of the functionality coming in the next five decades. It is already difficult to remember how we lived our lives without mobiles and the Internet. Now, where did I put that time machine?

Let’s not wait 50 years to see what’s already happening now! Much of it is aimed at helping people with disabilities, but sooner than one may think, something as big as, or bigger than, the development of language by humans may be imminent. Elon Musk says that it’s probably going to be at least “eight to 10 years” before the technology his new company, Neuralink, produces can be used by someone without a disability, i.e., the general public. Neuralink is aiming to create therapeutic applications of its technology first, which will likely help as it seeks the necessary regulatory approvals for human trials. Ultimately, Musk seems to want to achieve a communications leap equivalent in impact to when humans came up with language. Language proved an incredibly efficient way to convey thoughts socially; what Neuralink aims to do is increase that efficiency by multiple orders of magnitude. And, as you'll see below, Musk is but one player.
 

[Image: Zuckerberg, Musk, and a woman wearing a neural lace]
And you won’t have to have a hole drilled in your skull. For example, a Harvard lab has developed a neural lace that is injectable without surgery.
 

[Image: Injectable neural lace]
Image credit: Lieber Research Group, Harvard University
 

[Image: Stentrode]
This tiny device, known as a stentrode, can read signals from the brain’s motor cortex. It will be implanted into humans in 2017 to use these signals to control an exoskeleton. (Photo: University of Melbourne). “We have been able to create the world’s only minimally invasive device that is implanted into a blood vessel in the brain via a simple day procedure, avoiding the need for high risk open brain surgery,” explains Dr. Thomas Oxley, principal author and Neurologist at The Royal Melbourne Hospital and Research Fellow at The Florey Institute of Neurosciences and the University of Melbourne.


[Video: Neural lace]

 

[Image: Neural lace applications]
It’s time to update that ancient Chinese proverb (or was it a curse?), “May you live in interesting times,” to “May you live in exponential times!” And we do! With the current pace of exponential change in fields like nanotechnology, biotech, and artificial intelligence, it just may be time to listen again to what your elementary school teacher once told you: “Put on your thinking cap.” Only this time you won’t put it on; it will be injected!

Vinton G. Cerf


Footnotes

a. http://www.newsmax.com/SciTech/isaac-asimov-predictions/2014/01/06/id/545487

 

 


Fake News and the Cognitive Fact Checker


By Vint Cerf


Digital technology has drastically impacted our lives. Beyond this new threshold of interconnectedness, we should consider digital technology’s impact on citizenship and the very nature of democracy in the future. When Gutenberg invented the printing press around 1436, he knew it would ease the labor of monks who spent all day manually copying the Bible. But he probably didn’t anticipate that it would fuel colonization of the New World, or enable representative democracy via mass-produced written material that could reach an entire population and give it a common information base. Imagine the difficulties of lobbying, holding elections, and organizing political parties without this capability. No longer would information be controlled by the masters of Kingdoms or Fiefdoms.


On to broadcast media. Whether ABC, NBC, or CBS, the world had gatekeepers, with the Fourth Estate serving as guardian of our sources of factual information. To wit, Walter Cronkite, an exemplar of American broadcast journalism, was known for his investigative reporting, fulfilling his watchdog role, and for his matter-of-fact way of delivering the news as a CBS anchor.

But wait, now the Internet has given rise to Digital Fiefdoms whose masters use fear tactics to make entire populations feel disempowered. (ISIS is an example of a fiefdom dependent on the Internet). 

“When you feel disempowered, you want to strike back with everything you've got, and you feel like the whole world is against you,” says Brooke Binkowski, managing editor of Snopes, a fact-checking website that has debunked many of the false stories circulating around the internet. “People who think they’ve been pushed out of the political world as it is right now are going to be susceptible to misinformation – they’re going to focus on whatever makes them feel better,” she says.

In June 2016, the U.K. held a referendum on its membership in the European Union. In November 2016, the U.S. held its national elections. In the run-up to both of these important decisional events, the Internet, with its burgeoning collection of "information" dissemination applications, influenced the decisions of voters. The disturbing aspect of these (and many other decisional events) is the quantity of poor-quality content: the production of deliberately fake news, false information, and alt-facts, and the reinforcement of bad information through social media.

One reaction to bad information is to remove it. That's sometimes called censorship, although it may also be considered a responsible act in accordance with the appropriate-use policies of the entities that support information dissemination and exchange. A different reaction is to provide more information to allow viewers/readers to decide for themselves what to accept or reject. Another reaction is to provide countervailing information (fact checking) to help inform the public. Yet another reaction is simply to ignore anything you reject as counter to your worldview. That may lead to so-called echo-chamber effects, where the only information you really absorb is that which is consistent with your views, facts notwithstanding.

The wealth (I use this word gingerly) of information found on the Internet is seemingly limitless. On the other hand, it is of such uneven quality that some of us feel compelled to exercise due diligence before accepting anything in particular. That calls for critical thinking and, as I have written in the past, this is something that not everyone is prepared or willing to expend energy on. That is not a good sign. A society that operates on the basis of bad or biased information may soon find itself in difficulty because its decisions are being made on shaky ground.

Unfortunately, we don't seem to be able to guarantee that decision makers, including voters, will apply critical thinking, due diligence, and fact checking before making decisions or before propagating and reinforcing poor-quality or deliberately counterfactual information. While the problem is more widely recognized now than ever, the proper response is far from agreed upon. It may even prove necessary to experiment with various alternatives. For example, rumors propagate rapidly through social media, and recipients need tools to debunk them. The Snopes (www.snopes.com) and Pulitzer Prize-winning PolitiFact (www.politifact.com) websites provide information to expose false rumors, fake news, and alt-facts, or to confirm them using factual information and analysis.



We can use more of this.


Tim Cook, CEO of Apple, is calling for governments to launch a public information campaign to fight the scourge of fake news, which is “killing people’s minds.” Apple recently joined the multi-company Partnership on AI to Benefit People and Society (www.partnershiponai.org), and we can look to that organization for a significant impact on countering fake news. In addition, let’s look to IEEE’s initiative, Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems.



Of course, in many cases, the situation is not clear-cut, and differences of opinion illustrate that there can be conflicting views of truth or falsity. What seems important is to have access to as much factual information as possible and to distinguish it from opinions about the implications of those facts. U.S. Senator Daniel Patrick Moynihan is credited with the observation that you are not entitled to your own facts, only to your own opinions. Even here, of course, one can encounter differences of opinion about what is factual and what is not.


This suggests that in the modern Internet environment, where anyone can say pretty much anything and others can read it, we are in need of processes that help readers and viewers evaluate the factual value of what they see and hear. It is notable that in the waning period of the political campaigns leading up to the U.S. presidential election, some media began providing fact checking alongside their reporting. The malleability of content on the Internet and its potentially ephemeral nature reinforce the belief that history is important and that its preservation is an important part of democratic societies.

This leads us to conclude that ways to preserve the content of the Internet, in the interest of avoiding revisionist history, may prove to be an important goal for technologists who worry about these things. This must be balanced against notions such as “the right to be forgotten” that are emerging in various jurisdictions, most notably in the European Union. There are legitimate reasons to remove harmful information that makes its way onto the Internet, such as child pornography and information that leads to identity theft. Finding a balance that preserves the value of the historical record, corrects false or incorrect information, and supports due diligence and critical thinking is a challenge for our modern information era. Google, for its part, kicked 200 publishers off one of its ad networks in the fourth quarter of 2016, partly in response to the proliferation of fake news sites.

So, let’s turn to the above-mentioned corporate and academic organizations and urge them to develop a universal standard for a trusted CFC (Cognitive Fact Checker) that can digest the Big Data (volume, velocity, and variety) of social media. Much as a spell checker autonomously watches over your shoulder as you type on a word processor, a CFC would look over your shoulder as you surf the Web for news, prompting “Pants on Fire” warnings just as your spell checker alerts you to misspelled words. Yes, indeed, although Tim Cook stresses teaching critical thinking in schools, we need a critical-thinking “cognitive assistant” while we tap the Web.
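As a thought experiment only, the sketch below shows one way a CFC might mimic the spell-checker analogy: compare each sentence a reader encounters against a store of previously fact-checked claims and surface a warning when there is a close match. Everything here is invented for illustration (the claim database, ratings, and similarity threshold); a real CFC would need large-scale claim matching against services like Snopes or PolitiFact rather than simple string similarity.

```python
# Purely illustrative "Cognitive Fact Checker": flag sentences that resemble
# claims in a hand-built, hypothetical database of fact-check ratings.
from difflib import SequenceMatcher
from typing import Optional

# Invented example entries; real ratings would come from fact-checking services.
FACT_CHECK_DB = [
    ("the moon landing was staged in a film studio", "Pants on Fire"),
    ("vaccines cause autism", "False"),
    ("the great wall of china is visible from space with the naked eye", "Mostly False"),
]

def check_sentence(sentence: str, threshold: float = 0.6) -> Optional[str]:
    """Return a spell-checker-style warning if the sentence matches a debunked claim."""
    best_rating, best_score = None, 0.0
    for claim, rating in FACT_CHECK_DB:
        score = SequenceMatcher(None, sentence.lower(), claim).ratio()
        if score > best_score:
            best_rating, best_score = rating, score
    if best_score >= threshold:
        return f"Warning: {best_rating} (similarity {best_score:.0%})"
    return None

if __name__ == "__main__":
    article = [
        "New report claims the moon landing was staged in a film studio.",
        "The city council approved the new budget on Tuesday.",
    ]
    for sentence in article:
        print(sentence, "->", check_sentence(sentence) or "no flag")
```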

We do, indeed, live in interesting times.