Towards the Deployment of Safe Autonomous Systems


Autonomous systems are coming online in many contexts: vehicles, drones, weapons, industrial robotics, and financial services, to name just a few. Many already have their own policy stakeholders: Google and Tesla lead in Automated Driving Systems (ADS), and regulations are in effect or under development. In financial services, algorithmic trading is widely used with many safeguards in place, including automatic trading halts. Regulations for lethal autonomous weapons are in development.
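
To make "automatic trading halts" concrete, here is a minimal Python sketch of the kind of circuit breaker exchanges use. The CircuitBreaker class, its interface, and the 7% default threshold (loosely echoing the first tier of US market-wide circuit breakers) are illustrative assumptions, not any exchange's actual implementation.

```python
class CircuitBreaker:
    """Hypothetical sketch of an automatic trading halt.

    Modeled loosely on market-wide circuit breakers: if the price falls
    more than `halt_threshold` below the reference price, trading stops
    until a human reviews and resets the system.
    """

    def __init__(self, reference_price: float, halt_threshold: float = 0.07):
        self.reference_price = reference_price
        self.halt_threshold = halt_threshold
        self.halted = False

    def on_price(self, price: float) -> bool:
        """Return True if trading may continue, False once halted."""
        drop = (self.reference_price - price) / self.reference_price
        if drop >= self.halt_threshold:
            self.halted = True  # human review required before resuming
        return not self.halted


breaker = CircuitBreaker(reference_price=100.0)
for tick in [99.5, 97.0, 92.5, 95.0]:  # 92.5 is a 7.5% drop -> halt
    if not breaker.on_price(tick):
        print(f"Trading halted at {tick}")
        break
```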

However, thousands of startup companies are releasing multi-use technologies without clear guidelines. More generally applicable regulations for the deployment and ongoing monitoring of autonomous systems can give the AI ecosystem greater certainty about what may be released, while also protecting public safety.

In this piece, I outline, for further discussion, a general regulatory framework that can subsequently be tailored to specific contexts.

First, we need to map the different ways autonomous systems can be used and refine how we measure safety. What number of accidents is socially and morally acceptable? What is the safety threshold for releasing an imperfect system: that it performs better than a majority of human experts? 10x better?
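
To make that question concrete, here is a minimal Python sketch of a release gate that compares a system's measured incident rate against a human baseline. The function name and every number in it are hypothetical illustrations, not proposed standards.

```python
def meets_safety_threshold(system_incidents: int,
                           system_exposure: float,
                           human_rate: float,
                           improvement_factor: float = 10.0) -> bool:
    """Hypothetical release gate: require the system's incident rate
    (incidents per unit of exposure, e.g. per million miles) to be at
    least `improvement_factor` times lower than the human baseline."""
    system_rate = system_incidents / system_exposure
    return system_rate <= human_rate / improvement_factor


# Illustrative numbers only: 2 incidents over 5 million miles, against a
# human baseline of 4 incidents per million miles, with a 10x bar.
print(meets_safety_threshold(system_incidents=2,
                             system_exposure=5.0,   # millions of miles
                             human_rate=4.0,
                             improvement_factor=10.0))  # True: 0.4 <= 0.4
```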

Second, we need to develop a framework for release, perhaps modeled on the staged approval standards used for new drugs and other fast-moving scientific developments. A staged, dynamic testing system might look as follows (a minimal code sketch of such a pipeline appears after the list):

  1. Test in virtual/simulated environments

  2. Test in controlled real-world environments with a limited release, e.g.:

    • Google limited its ADS to smart cities with humans in the loop

    • OpenAI released a smaller version of GPT-2 to developers

  3. Release/decline to release pending further testing

    • Require humans in the loop until a performance threshold is reached

    • Provide discriminator models, bias tests, and other countermeasures
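
As a minimal sketch of how such a staged pipeline could be encoded, the Python below walks a system through the stages above. The Stage enum and advance function are hypothetical, and real gates would rest on far richer evidence than two booleans.

```python
from enum import Enum, auto


class Stage(Enum):
    SIMULATION = auto()          # virtual/simulated testing
    LIMITED_REAL_WORLD = auto()  # controlled, limited release
    RELEASED = auto()
    DECLINED = auto()


def advance(stage: Stage, passed_tests: bool, threshold_met: bool) -> Stage:
    """Hypothetical staged-release gate mirroring the steps above:
    simulation -> limited real-world trials -> release or decline."""
    if not passed_tests:
        return Stage.DECLINED  # or hold at the current stage pending fixes
    if stage is Stage.SIMULATION:
        return Stage.LIMITED_REAL_WORLD  # humans in the loop from here
    if stage is Stage.LIMITED_REAL_WORLD:
        # Full release only once the performance threshold is reached;
        # otherwise keep humans in the loop and continue limited trials.
        return Stage.RELEASED if threshold_met else Stage.LIMITED_REAL_WORLD
    return stage


stage = Stage.SIMULATION
stage = advance(stage, passed_tests=True, threshold_met=False)  # limited trials
stage = advance(stage, passed_tests=True, threshold_met=True)   # full release
print(stage)  # Stage.RELEASED
```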

Third, we can develop recommendations and systems for ongoing monitoring with a cross-disciplinary working group. Some considerations (a monitoring sketch follows the list):

  • Balance safety with innovation

  • Favor guidelines over regulations in the near term, to accommodate the dynamic AI landscape

  • Consider whether a new governing agency is needed

  • Educate populations to live and work with autonomous systems

  • Identify where it may be easier to redesign human systems (e.g. special traffic signals for ADS)

  • Defend against misuses, such as hijacking critical infrastructure or manipulating elections
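
As one illustration of what ongoing monitoring could look like in practice, here is a hypothetical Python sketch that flags a deployed system for review when its rolling incident rate drifts above the rate certified at release. The DeploymentMonitor class and its numbers are assumptions for illustration only.

```python
from collections import deque


class DeploymentMonitor:
    """Hypothetical post-release monitor: track outcomes over a rolling
    window of operations and flag the system for review if the observed
    incident rate drifts above the rate certified at release."""

    def __init__(self, certified_rate: float, window: int = 10_000):
        self.certified_rate = certified_rate
        self.outcomes = deque(maxlen=window)  # 1 = incident, 0 = clean run

    def record(self, incident: bool) -> None:
        self.outcomes.append(1 if incident else 0)

    def needs_review(self) -> bool:
        if not self.outcomes:
            return False
        observed = sum(self.outcomes) / len(self.outcomes)
        return observed > self.certified_rate


monitor = DeploymentMonitor(certified_rate=0.001)
for _ in range(500):
    monitor.record(incident=False)
monitor.record(incident=True)   # 1 incident in 501 runs, ~0.002
print(monitor.needs_review())   # True: above the certified rate
```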

Finally, we would need to coordinate with existing regimes for ADS, financial services, lethal weapons, and the like, which may remain separate regulatory systems. Federal funding for research on the deployment of safe autonomous systems would also be valuable as AI technology continues to leapfrog forward.


Abigail Hing Wen is the Senior Director of AI at Intel, as well as the New York Times best-selling author of Loveboat, Taipei, a romantic comedy that follows a girl named Ever Wong on a journey of self-discovery during a summer cultural immersion program. Before she came to Silicon Valley to work in AI and venture capital, Wen worked in Washington, DC, for the Senate and as a law clerk for the United States Court of Appeals for the DC Circuit. At Intel Capital, she partners closely with investors on AI investments and has worked with over 100 Silicon Valley startups, from incorporation to acquisition or IPO.