By Kathleen Walch | August 20, 2018
Kathleen Walch is Co-Founder and Senior Analyst at Cognilytica
If you’ve been following coverage of artificial intelligence (AI) in the news, you might have noticed that industries ranging from pharmaceuticals to banking to insurance to real estate are being impacted by the range of machine learning and AI technologies that can broadly be called cognitive technologies. These cognitive technologies change how businesses approach their customers and manage their operations, impacting business processes and both customer-facing and internal-facing operations across the board.
Much of what I write is shared in my company’s client-only research as well as our newsletter articles, podcasts, explainer videos, and other forms of content. However, I also write for a number of syndicated publications, and thought I would bring some of those writings, as well as my colleague's, to the CogWorld audience so we can share what we’ve learned about AI adoption in different industries.
Increasingly, insurance companies are eyeing the opportunities they can create by applying AI to their own operations and products. Many automobile insurance companies have seen significant benefits using telematics devices that plug into cars to assess the risk posed by their insured drivers. AI in insurance can provide significant value by extracting key patterns from the vast amounts of data collected by insurance companies as part of their policy, customer, claim and risk data.
Insurers are faced with new challenges, such as providing insurance coverage when the human behind the wheel might not actually be in control of a car in self-driving mode, or when the company who owns the car — such as Uber — might not be the company that made the car — such as Volvo — and the company that made the car might not have made the autonomous technology. Insurers will have to consider these new challenges to assign liability, assess risk and determine loss ratios.
Artificial intelligence and machine learning are transforming many areas of the healthcare industry, ranging from patient-facing and customer service activities to improvements in overall care, diagnosis and treatment. Many of the opportunities from AI applications in healthcare relate to the sheer quantity of data produced by healthcare providers and the opportunity to identify patterns and augment the capabilities of existing physicians, clinicians and staff.
Through the combined use of AI applications in healthcare, diagnosis, treatment and customer care, hospitals, care practitioners, health insurers and the health industry as a whole hope to dramatically reduce the cost of care while improving treatment outcomes, reducing risk and increasing overall patient satisfaction.
As AI and machine learning increase their footprint in the enterprise, companies are starting to worry about their exposure to new AI-driven threats. These threats are emerging from malicious uses of AI and machine learning, ranging from acts of mischief and criminal activity to new forms of state-sponsored attacks and cyberwarfare.
While AI is no doubt enabling enterprises to accomplish tasks and provide value, it is also enabling new and more dangerous criminal and malicious behavior. As the threats of AI impact cybersecurity, they’re creating opportunities for both cybersecurity businesses and the criminals that target them.
Many enterprises are exploring how AI can help move their business forward, save time and money, and provide more value to all their stakeholders. However, most companies are missing the conversation about the ethical issues of AI use and adoption. Even at this early stage of AI adoption, it’s important for enterprises to take ethical and responsible approaches when creating AI systems because the industry is already starting to see backlash against AI implementations that play loose with ethical concerns.
Forward-thinking companies see the need to create AI systems that address ethics and bias issues, and are taking active measures now. These enterprises have learned from previous cybersecurity issues that addressing trust-related concerns as an afterthought comes at a significant risk. As such, they are investing time and effort to address ethics concerns now before trust in AI systems is eroded to the point of no return. Other businesses should do so, too.
AI voice assistant devices have made plenty of headlines recently in the world of consumer electronics, but vendors are increasingly setting their sights on enterprises to increase market share and revenue. Despite early promise, several issues could hold back adoption.
At the end of the day, for these voice assistants to provide value to enterprises, they need to prove themselves to be trustworthy, valuable resources. As such, vendors hoping to penetrate and dominate the enterprise ecosystem for voice assistants need to focus on addressing key integration, application development, provisioning, security, privacy and trust issues before they can find widespread adoption and traction.
In the mid-1990s, collaborative robot applications began to emerge. Known by the shorthand term cobot, a collaborative robot is a physical robot that is intentionally designed to operate in close quarters with humans. Cobots can operate in a wide range of environments, from assembly lines to warehouses to roaming around hospitals or office buildings helping with various tasks.
Collaborative robot applications are designed with lower overall power and greater sensitivity and awareness of their surroundings so that they can work in close proximity to humans. That lower power, however, limits cobots to tasks that don't require significant strength.