Democratizing Software For The Greater Good

ABCs and 123s—letters and numbers are some of the first things we learn—even before we enter school. We're taught to recognize, memorize, understand, and eventually manipulate them to communicate and persuade. These characters reveal their greatest value when we are able to use them as the foundation for something greater, something that moves forward not only individuals but entire communities, businesses, and societies.

Simple letters and numbers underpin some of the greatest technology innovations of the last decade (and those that have yet to come)—often in the form of software code. And yet, when manipulated for coding, these familiar characters take on a new level of complexity that only a few specially trained people understand. Most application development today still requires people to learn archaic languages that only a small minority understand—to think the way machines think.

Read More
Encoding Human Knowledge Click By Click

Artificial intelligence feeds on data, and data is piling up from increasingly cheap sensors and surging Internet use: videos, images, text; time series data, machine data; structured, unstructured and semi-structured data. And while AI is currently confined to narrow problems in discrete domains, the ambition of machine-learning researchers globally is to write algorithms that can cross domains, transferring learning from one kind of data to another.
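As a rough illustration of what "transferring learning" looks like in practice, here is a minimal sketch of the common fine-tuning pattern, assuming PyTorch and torchvision are available and using a hypothetical two-class target task; it is an illustrative example, not the approach the article describes.

```python
# Minimal transfer-learning sketch (assumes PyTorch/torchvision; the two-class
# target task and the random placeholder batch are illustrative stand-ins).
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pretrained on ImageNet, so general visual knowledge transfers.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers; only the new head will be trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a hypothetical two-class target domain.
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Placeholder batch standing in for real target-domain images and labels.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])

model.train()
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one fine-tuning step, loss = {loss.item():.3f}")
```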

Read More
5 Steps to Creating a Responsible AI Center of Excellence

To practice trustworthy or responsible AI (AI that is truly fair, explainable, accountable, and robust), a number of organizations are creating in-house centers of excellence. These are groups of trustworthy AI stewards from across the business who can understand, anticipate, and mitigate potential problems. The intent is not necessarily to create subject matter experts, but rather a pool of ambassadors who act as point people.

Read More
What if YOU were the next Alexa or Siri?

Imagine it is the year 2050 and that, with your authorization, a quantum computer has stored and analyzed 20+ years of your daytime voice recordings. It can then create a Personal Assistant with your voice. At the basic level, it could reuse your voice to tell anyone the weather forecast, control smart home devices, play music, set reminders, and search for any information on the World Wide Web. But what if, at an advanced level, it could answer any question the way you would respond?

Read More
Why Agile Methodologies Miss The Mark For AI & ML Projects

Companies of all sizes are implementing AI, ML, and cognitive technology projects for a wide range of reasons in a disparate array of industries and customer sectors. Some AI efforts are focused on the development of intelligent devices and vehicles, which incorporate three simultaneous development streams of software, hardware, and constantly evolving machine learning models. Other efforts are internally focused enterprise predictive analytics, fraud management, or other process-oriented activities that aim to provide an additional layer of insight or automation on top of existing data and tooling. Yet other initiatives are focused on conversational interfaces that are distributed across an array of devices and systems. And still others pursue AI & ML project development goals for public or private sector applications that differ from these in more significant ways.

Read More
Four Emerging Technology Areas Impacting Industry 4.0: Advanced Computing, Artificial Intelligence, Big Data & Materials Science

Last year was a transformative year for technological innovation. The threats from COVID-19 upended our way of living, especially with the shift to remote work, and pushed forward the timetable of digital transformation. As we move ahead into another year of unprecedented technology advancement, it is useful to examine some of the trends and technologies that are already shaping 2021 and what may be on the future horizon.

Read More
Top 5 Facets of AI

AI will be the gem of digital for a long time going forward. Along with static algorithms, it is a co-driver of smarts in both automation and customer excellence efforts. AI can learn, handle fuzzy problems, increase the probability of success in decisions, and assist humans in interacting with traditional rule-based organizational systems while reaching shifting goals. There are five facets of AI that are shining bright now and for the future. There could be more down the road as AI progresses over time, but these are the top five right now.

Read More
AI Report Card for 1Q 2020

While AI is in its spring season, sprouting up all over, and the predictions for future revenues are pretty positive, I think it's time to try to give AI an early grade in a number of areas. I've picked out my top 10 categories for grading AI and have assigned a grade to each. To provide context for the grades, I have included a difficulty score and an expected time to maturity. This is my first cut at grading AI, and I'm sure I will add to the dimensions and scale over time. Let's examine the meaning of the grade categories listed below:

Read More
Ain't Nuthin' So Non-Common As Common Sense

Actually, to use common sense, the title of this article should be, in the words of renowned architect Frank Lloyd Wright, "There is nothing more uncommon than common sense." Hmm: is that common sense, or commonsense, or common-sense? It takes some real common sense to know the difference.

In artificial intelligence research, commonsense knowledge is the collection of facts and information that an ordinary person is expected to know. The commonsense knowledge problem is the ongoing project in the field of knowledge representation (a sub-field of artificial intelligence) to create a commonsense knowledge base: a database containing all the general knowledge that most people possess. The database must be represented in a way that makes it available to artificial intelligence programs that use natural language or make inferences about the ordinary world. Such a database is a type of ontology, of which the most general are called upper ontologies.
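To make the idea of a machine-readable commonsense knowledge base a bit more concrete, here is a minimal sketch that stores hypothetical facts as subject-predicate-object triples and performs a simple inheritance inference; it illustrates the general idea only and does not reflect the structure of any particular upper ontology.

```python
# Toy commonsense knowledge base: facts stored as (subject, predicate, object)
# triples. The facts and the single inference rule are illustrative assumptions.
FACTS = {
    ("bird", "is_a", "animal"),
    ("robin", "is_a", "bird"),
    ("animal", "can", "breathe"),
    ("bird", "can", "fly"),
}

def is_a(entity, category, facts):
    """Follow is_a links to decide whether entity belongs to category."""
    if entity == category:
        return True
    return any(
        s == entity and p == "is_a" and is_a(o, category, facts)
        for s, p, o in facts
    )

def can(entity, ability, facts):
    """An entity inherits an ability from any category it belongs to."""
    return any(
        p == "can" and o == ability and is_a(entity, s, facts)
        for s, p, o in facts
    )

print(can("robin", "breathe", FACTS))  # True: robin -> bird -> animal, which can breathe
print(can("robin", "fly", FACTS))      # True: robin -> bird, which can fly
```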

Read More
Government, An Integral Partner For Exploring Artificial Intelligence

On June 24, an upcoming conference will explore the implications of artificial intelligence (AI) in the public sector. AI World Government will gather leaders across government, industry, and academia to discuss the challenges and potential solutions of AI in automating our expanding digital world. The event is described as "a comprehensive three-day forum to educate and inform public sector agencies on the strategic and tactical benefits of deploying AI and cognitive technologies."

The topic of AI is garnering attention in government and industry. Research and consulting firm Gartner describes AI as a “technology that appears to emulate human performance typically by learning, coming to its own conclusions, appearing to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance or replacing people on execution of non-routine tasks.”

Read More
Machine Learning for Shovel Tooth Failure Detection

The steel teeth on mining excavation equipment like rope shovels and front end loaders are wear items that must be replaced as part of regular maintenance. During normal operation, the connection that affixes a tooth to the shovel or loader bucket occasionally fails, causing tooth detachment. A detached tooth presents a serious hazard if it enters the haulage cycle and makes its way into a crushing unit, where it may become stuck and require the dangerous task of manual removal. Furthermore, wayward teeth cause substantial lost time and production due to jammed crushers and damage to downstream processing equipment. Therefore, it is critical to detect when a shovel tooth goes missing as soon as possible so that preventative action may be taken.
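The article does not spell out the detection method, but one plausible framing is to treat it as a binary classification problem over features extracted from bucket imagery or sensor data. The sketch below is a minimal, heavily hedged illustration using scikit-learn on synthetic stand-in features; the feature values, class balance, and model choice are all assumptions, not details from the article.

```python
# Hypothetical sketch: classify "all teeth present" vs. "tooth missing" from
# per-observation feature vectors (e.g., statistics of the tooth region in a
# camera frame). All data here is synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 500 synthetic observations with 8 features; label 1 means a tooth is missing.
X_ok = rng.normal(loc=0.0, scale=1.0, size=(400, 8))
X_missing = rng.normal(loc=1.5, scale=1.0, size=(100, 8))
X = np.vstack([X_ok, X_missing])
y = np.array([0] * 400 + [1] * 100)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["ok", "missing"]))
```

In a real deployment the priority would be catching every missing tooth quickly, so recall on the "missing" class would matter far more than overall accuracy.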

Read More
How an AI-powered Digital Assistant Increases Productivity in Business

Powered by artificial intelligence, digital assistants have emerged as groundbreaking tools that transform the way business professionals work. From translating documents and converting files to providing research data and benchmarks, digital assistants are playing an important part in increasing productivity. In the following case study, we show how an AI-powered digital assistant makes a difference in a number of consultancies and large corporations. Let's see how human augmentation is a "win-win".

Read More
The Age of AI, A YouTube Series Presented by Robert Downey Jr

Robert Downey Jr is best known as Tony Stark, the character behind Iron Man in the Avengers movies. It is said that Downey Jr modeled his portrayal of Stark on Elon Musk, the founder of SpaceX and CEO of Tesla, and one of the most outspoken commentators on artificial intelligence. Musk says that by developing advanced AI we are "summoning the demon," and that we must work hard and fast to ensure it remains safe. In fact, he thinks we must develop the technology to link our minds intimately with AI systems so that instead of being replaced by them, we can be enhanced by them.

Read More
Failing Architecture? Digital Twins to the Rescue!

We are hearing tales of severely ill or dying enterprise architecture efforts in even the best of organizations. It really shouldn't be a surprise: architectural efforts are long and slow, and they change before they are complete. Even if an organization manages to complete one, things are changing so fast that the value of even a rare complete architecture is dubious at best. Leveraging digital twins can be a game changer for architecture. A digital twin is a digital replica of physical assets, processes, people, places, systems, and devices that can be used for management purposes.
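As a rough sketch of what a digital twin of an architectural asset might look like in code, here is a minimal example; the asset fields, the telemetry format, and the staleness threshold are hypothetical illustrations, not anything prescribed by the article.

```python
# Minimal digital-twin sketch: a digital record mirrors a physical or logical
# asset's reported state and flags drift from the documented architecture.
# Field names and the staleness threshold are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class DigitalTwin:
    asset_id: str
    documented_state: dict                     # what the architecture says it should be
    observed_state: dict = field(default_factory=dict)
    last_sync: datetime | None = None

    def sync(self, telemetry: dict) -> None:
        """Update the twin from the asset's latest telemetry."""
        self.observed_state.update(telemetry)
        self.last_sync = datetime.now(timezone.utc)

    def drift(self) -> dict:
        """Report where reality has diverged from the documented architecture."""
        return {
            key: (expected, self.observed_state.get(key))
            for key, expected in self.documented_state.items()
            if self.observed_state.get(key) != expected
        }

    def is_stale(self, max_age: timedelta = timedelta(hours=1)) -> bool:
        """A twin that has not synced recently is no longer a trustworthy replica."""
        return self.last_sync is None or datetime.now(timezone.utc) - self.last_sync > max_age

# Hypothetical usage: the twin reveals that the deployed version has drifted.
twin = DigitalTwin("payments-service", {"version": "2.3", "replicas": 3})
twin.sync({"version": "2.4", "replicas": 3})
print(twin.drift())     # {'version': ('2.3', '2.4')}
print(twin.is_stale())  # False
```

The point of the sketch is that, unlike a static architecture document, the twin is continuously refreshed from the live environment, so drift becomes visible as soon as it happens.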

Read More
21st-Century Schizoid Man? A Review Of '21 Lessons For The 21st Century' By Yuval Harari

The title of Yuval Harari’s latest best-seller is a misnomer: it asks many questions, but offers few answers, and hardly any lessons. It is the least notable of his three major books, since most of its best ideas were introduced in the other two. But it is still worth reading. Harari delights in grandiloquent sweeping generalisations which irritate academics enormously, and part of the fun is precisely that you can so easily picture his colleagues seething with indignation that he is trampling on their turf. More important, some of his generalisations are acutely insightful.

Read More
Is AI Making Us All Data Hoarders?

Today the emphasis on AI is nearly all about machine learning and a form of machine learning called deep learning. These are data-heavy (Data, Events, Voice & Image) approaches that deliver some very nice benefits, but are we getting hooked on data? We see folks everywhere creating data lakes, soon to be data oceans, that we are going to boil later. Meanwhile, we have to pay homage to expensive Data Czars and Data Scientists because we want to keep more data for the future, and somehow AI will make sense of it later. I'm not so sure this is a strategy that will lead us to be competitive with others in the world.

Read More