Competing For The Future With Intelligent Agents... And A Confession

Creative Destruction – What Came Before What’s Coming Next

Prior to the beginnings of Internet-based e-commerce, the computer was essentially a filing cabinet, handling the affairs of the back office, programmed under the file-clerk metaphor: capture, storage, and retrieval of records. Today, however, we are not only seeing Sun Microsystems’ early vision that the network is the computer come true; we can now observe that the network is the business!

Read More
With AI At Hand, Don't Stop Learning!

In our modern world, tomorrows stretch before us in profusion. Our working lives could last for eight decades. Think of the implications. It was only 12 years ago that the first iPhone was released. Smartphones are everywhere now and we rely on them to an astonishing degree. Think what new technologies may emerge over another 70 years! No longer can we simply learn for a while, earn for a while, and then retire.

Read More
Might AI Spell The Death Of Search?

Just think for a moment about how much online searching you do. Need to find a nearby Thai restaurant? Just type your query into the search engine and presto: You receive page after page of results listing eateries in your area offering Pad Thai. Need to know the forecast in Austin? Again, punch in your query and you will receive no shortage of pages offering three-day forecasts, five-day forecasts, even year-round averages.

Read More
Parenting 101: How to train an AI System to DO GOOD

In this article, we propose to put those pattern-recognizing neocortices to work and have them discover the pattern that earns the reward. When considering the development of an AI that is meant to mine insights on human behaviors (and then possibly make decisions based on those insights), there is one really critical ‘Parenting 101’ lesson to consider. Incentivize the behaviors you want to see more of in three steps: REPLICATE what works, MEASURE, and REPEAT.
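The replicate/measure/repeat loop can be sketched as a simple bandit-style learner: the agent tries behaviors, measures the reward each one earns, and replicates the best-rewarded one more often. This is a minimal illustration, not the article's implementation; all names and numbers here are assumptions.

```python
import random

def train(reward_probs, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy learner over a set of behaviors with given reward rates."""
    rng = random.Random(seed)
    counts = [0] * len(reward_probs)    # how often each behavior was tried
    values = [0.0] * len(reward_probs)  # MEASURE: running average reward per behavior
    for _ in range(steps):
        # REPLICATE what works most of the time; occasionally explore.
        if rng.random() < epsilon:
            a = rng.randrange(len(reward_probs))
        else:
            a = max(range(len(values)), key=lambda i: values[i])
        reward = 1.0 if rng.random() < reward_probs[a] else 0.0
        counts[a] += 1
        # REPEAT: fold the new measurement into the estimate and go again.
        values[a] += (reward - values[a]) / counts[a]
    return values

# Three hypothetical behaviors with true reward rates 0.2, 0.8, and 0.5:
estimates = train([0.2, 0.8, 0.5])
```

After training, the learner's estimates track the true reward rates, and the most-rewarded behavior dominates its choices.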

Read More
Too much of a good thing can be a very bad thing—Technology’s Future

Why we must act now to ensure that exponential technological progress remains a benefit for humanity and the planet. Technology has undoubtedly been good to us. AI systems are now able to detect some forms of skin cancer at a higher rate of accuracy than human doctors. Robots perform ultrasound exams and surgery, sometimes with little to no human intervention. Autopilots fly and land planes in the most adverse conditions and may soon be able to steer personal air taxis. Sensors gather live data from machinery and their “digital twins” to predict upcoming failures or warn of crucial repairs. 3D printers are capable of spitting out spare parts, improving maintenance options in remote locations.

Read More
Pop Culture, AI And Ethics

I am a major sci-fi fan. Well, at least I thought I was until I went to my first Star Trek convention in my 20s and realized that I was in the minority of people who did not speak Klingon or know episode numbers, titles or dates. Science fiction inspires technologists every day. Most recently, I have become inspired by Black Mirror, a show originally aired on Channel 4 and now offered on Netflix. The brainchild of Charlie Brooker, Black Mirror is the Twilight Zone for our times, giving us a glimpse of how technology trajectories can be used to affect society in unintended ways in the coming decades.

Read More
No-code/low-code: Why you should be paying attention

We’ve all been hearing the hype lately about low-code and no-code platforms. The promise of no-code platforms is that they’ll make software development just as easy as using Word or PowerPoint so that the average business user can move projects forward without the extra cost (in money and time) of an engineering team. Unlike no-code platforms, low-code platforms still require coding skills but promise to accelerate software development by letting developers work with pre-written code components.

Read More
Democratizing Software For The Greater Good

ABCs and 123s—letters and numbers are some of the first things we learn—even before we enter school. We’re taught to recognize, memorize, understand, and eventually manipulate them to communicate and persuade. These characters reveal their greatest value when we’re able to use them as the foundation for something greater, that not only moves individuals forward, but entire communities, businesses, and societies.

Simple letters and numbers underpin some of the greatest technology innovations of the last decade (and those yet to come)—often in the form of software code. And yet, when manipulated for coding, these familiar characters take on a new level of complexity that only a few specially trained people understand. Most application development today still requires people to learn arcane languages—to think the way machines think.

Read More
Encoding Human Knowledge Click By Click

Artificial intelligence feeds on data, and data is piling up from increasingly cheap sensors and surging Internet use: videos, images, text; time series data, machine data; structured, unstructured and semi-structured data. And while AI is currently confined to narrow problems in discrete domains, the ambition of machine-learning researchers globally is to write algorithms that can cross domains, transferring learning from one kind of data to another.

Read More
5 steps to creating a responsible AI Center of Excellence

To practice trustworthy or responsible AI (AI that is truly fair, explainable, accountable, and robust), a number of organizations are creating in-house centers of excellence. These are groups of trustworthy-AI stewards from across the business who can understand, anticipate, and mitigate any potential problems. The intent is not necessarily to create subject matter experts but rather a pool of ambassadors who act as point people.

Read More
What if YOU were the next Alexa or Siri?

Imagine it is the year 2050 and that, with your authorization, a quantum computer has stored and analyzed 20+ years of your daytime voice recordings. It can then create a personal assistant with your voice. At the basic level, it could reuse your voice to tell anyone the weather forecast, control smart home devices, play music, set reminders, and search for any information on the World Wide Web. But what if, at an advanced level, it could answer any question the way you would respond?

Read More
Why Agile Methodologies Miss The Mark For AI & ML Projects

Companies of all sizes are implementing AI, ML, and cognitive technology projects for a wide range of reasons in a disparate array of industries and customer sectors. Some AI efforts are focused on the development of intelligent devices and vehicles, which incorporate three simultaneous development streams of software, hardware, and constantly evolving machine learning models. Other efforts are internally focused enterprise predictive analytics, fraud management, or other process-oriented activities that aim to provide an additional layer of insight or automation on top of existing data and tooling. Yet other initiatives are focused on conversational interfaces that are distributed across an array of devices and systems. And still others pursue AI and ML development goals for public- or private-sector applications that differ from these in even more significant ways.

Read More
Four Emerging Technology Areas Impacting Industry 4.0: Advanced Computing, Artificial Intelligence, Big Data & Materials Science

Last year was a transformative year for technological innovation. The threats from COVID-19 upended our way of living, especially around remote work, and pushed forward the timetable of digital transformation. As we move ahead into another year of unprecedented technology advancement, it is useful to examine some of the trends and technologies that are already shaping 2021 and what may be on the future horizon.

Read More
Top 5 Facets of AI

AI will be the gem of digital for a long time to come. Alongside static algorithms, it is a co-driver of smarts in both automation and customer-excellence efforts. AI can learn, handle fuzzy problems, increase the probability of success in decisions, help humans interact with traditional rule-based organizational systems, and help reach shifting goals. There are five facets of AI that are shining bright now and for the future. There could be more down the road as AI progresses over time, but these are the top five right now.

Read More
AI Report Card for 1Q 2020

While AI is in its spring season, sprouting up all over, and the predictions for future revenues are pretty positive, I think it’s time to try to give AI an early grade in a number of areas. I’ve picked out my top 10 categories for grading AI and have assigned a grade in each. In order to understand the context of the grades, I have included a difficulty score and an expected time to maturity. This is my first cut at grading AI, and I’m sure I will add to the dimensions and scale over time. Let’s examine the meaning of the grade categories listed below:

Read More
Ain't Nuthin' So Non-Common As Common Sense

Actually, to use common sense, the title of this article should be, in the words of renowned architect Frank Lloyd Wright, “There is nothing more uncommon than common sense.” Hmm: is that common sense, or commonsense, or common-sense? It takes some real common sense to know the difference.

In artificial intelligence research, commonsense knowledge is the collection of facts and information that an ordinary person is expected to know. The commonsense knowledge problem is the ongoing project in the field of knowledge representation (a sub-field of artificial intelligence) to create a commonsense knowledge base: a database containing all the general knowledge that most people possess. The database must be represented in such a way that it is available to artificial intelligence programs that use natural language or make inferences about the ordinary world. Such a database is a type of ontology; the most general ontologies are called upper ontologies.
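A commonsense knowledge base of this kind can be sketched, in miniature, as a set of subject-relation-object triples plus a simple inheritance rule, so a program can conclude facts no one typed in. All facts and names here are illustrative, not drawn from any real knowledge base.

```python
# Toy commonsense knowledge base: (subject, relation, object) triples.
FACTS = {
    ("canary", "is_a", "bird"),
    ("bird", "is_a", "animal"),
    ("bird", "can", "fly"),
    ("animal", "needs", "food"),
}

def is_a(x, y, facts=FACTS):
    """True if x is_a y, following is_a links transitively."""
    if (x, "is_a", y) in facts:
        return True
    return any(is_a(mid, y, facts)
               for (s, r, mid) in facts if s == x and r == "is_a")

def holds(subject, relation, obj, facts=FACTS):
    """A property holds if stated directly or inherited from a superclass."""
    if (subject, relation, obj) in facts:
        return True
    return any(holds(parent, relation, obj, facts)
               for (s, r, parent) in facts if s == subject and r == "is_a")
```

With these two rules, the program infers that a canary needs food through the chain canary is_a bird is_a animal, even though that fact was never stated about canaries directly. Real upper ontologies differ mainly in scale and in supporting far richer relations than this single inheritance rule.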

Read More