The gradual march to AGI


The coming of artificial general intelligence (AGI) — the ability of an artificial intelligence to understand or learn any intellectual task that a human can — is inevitable. Despite the predictions of many experts that AGI might never be achieved or will take hundreds of years to emerge, I believe it will be here within the next decade.

Why artificial general intelligence is coming

How can I be so certain? We already have the know-how to produce massive programs with the capacity for processing and analyzing reams of data faster and more accurately than a human ever could. And in truth, massive programs may not be necessary anyway. Given the structure of the neocortex (the part of the human brain we use to think) and the amount of DNA needed to define it, we may be able to create a complete AGI in a program as small as 7.5 megabytes.

We also have seen robots that display the kind of fluid motion controlled by 56 billion neurons in the cerebellum (the part of the human brain responsible for muscular coordination). Again, it doesn’t take a supercomputer, but a few microprocessors along with the insight as to how coordination, balance and reactions must work.

The catch is that for today’s artificial intelligence to advance to something approaching real human-like intelligence, it needs three essential components of consciousness: an internal mental model of surroundings with the entity at the center; a perception of time that allows for a perception of future outcome(s) based on current actions; and an imagination, so that multiple potential actions can be considered and their outcomes evaluated and selected. In short, it must be able to explore, experiment, and learn about real objects, interpreting everything it knows in the context of everything else it knows, in the same way that a three-year-old child does.
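The three components described above can be sketched in miniature. The following is a purely illustrative toy, not an implementation of any real AGI system: all names (WorldModel, imagine, choose_action) are hypothetical, and the "world" is just a one-dimensional line. It shows the loop the paragraph describes: an internal model with the entity at the center, a projection forward in time, and an imagination that evaluates multiple candidate actions before selecting one.

```python
# Toy sketch (hypothetical names) of the three components of consciousness
# described above: a world model, a perception of future time, and an
# imagination that evaluates candidate actions without acting.

from dataclasses import dataclass


@dataclass
class WorldModel:
    """Internal mental model of surroundings, with the entity at the center."""
    position: int  # the agent's place on a 1-D line
    goal: int      # where it wants to be


def imagine(model: WorldModel, action: int, steps: int) -> int:
    """Project the model forward in time and score the predicted outcome.

    Lower score means the agent predicts it ends up closer to the goal
    after repeating `action` for `steps` time steps.
    """
    predicted = model.position + action * steps
    return abs(model.goal - predicted)


def choose_action(model: WorldModel, steps: int = 3) -> int:
    """Consider several potential actions, evaluate each imagined outcome,
    and select the best one -- all before touching the real world."""
    candidates = [-1, 0, 1]  # move left, stay put, move right
    return min(candidates, key=lambda a: imagine(model, a, steps))


model = WorldModel(position=0, goal=5)
print(choose_action(model))  # the agent "imagines" that moving right approaches the goal
```

The point of the sketch is the structure, not the scale: a three-year-old runs this kind of model-predict-evaluate loop continuously and across every sense at once, which is exactly what today's narrow AI does not do.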



What AI can’t do — yet

Unfortunately, today’s narrow AI applications simply don’t store information in a generalized form that other AI applications can integrate and reuse. Unlike humans, AIs can’t merge information from multiple senses. So while it might be possible to stitch together language and image processing applications, researchers have not found a way to integrate them in the same seamless, effortless way that a child integrates vision, language and hearing.

That’s not to take anything away from today’s AI. From AI bots that can identify, evaluate and make recommendations for streamlining business processes, to cybersecurity systems that continuously monitor data input patterns in order to thwart cyberattacks, AI has repeatedly demonstrated its ability to process and analyze data faster than humanly possible. But while its accomplishments are impressive, the AI most of us experience is more like a powerful method of statistical analysis than a real form of intelligence. Today’s AI is limited by its dependence on massive datasets, and there is no way to create a dataset big enough for the resulting system to cope with completely unanticipated situations.

To attain AGI, researchers must shift their focus from ever-expanding datasets to a more biologically plausible structure that enables AI to begin exhibiting the same kind of contextual, common-sense understanding as humans. To date, AI investors have been unwilling to fund such a project, which could essentially solve the same problems that a three-year-old routinely tackles. That’s because the abilities of a three-year-old are not particularly marketable.

AGI and the market

Marketability is perhaps the secret sauce in AGI’s emergence. We can expect AGI development to create capabilities that are individually marketable. Someone produces an improvement in how your Alexa understands you, and everybody rushes to take that new development to market. Somebody else produces better machine vision for self-driving cars, and everybody rushes to take that development to market as well. Each of these developments is marketable on its own, but if they are built on a common underlying data structure, then the sooner we can begin to connect them, the more they can interact to build a broader context, and the faster we can approach AGI.

Finally, as we approach human-level intelligence, nobody’s going to notice. At some point we’re going to get close to the human-level threshold, then equal that threshold, then exceed that threshold. At some point thereafter, we’re going to have machines that are obviously superior to human intelligence and people will begin to agree that yes, maybe AGI does exist. But it’s going to be gradual as opposed to a specific “singularity.” Ultimately, though, AGI is inevitable because market forces will prevail — it is only awaiting the insights needed to make it work.

Charles Simon is a nationally recognized entrepreneur and software developer, and the CEO of FutureAI. Simon is the author of Will Computers Revolt? Preparing for the Future of Artificial Intelligence, and the developer of Brain Simulator II.

